---
abstract: 'The purpose of this article is to study the problem of finding sharp lower bounds for the norm of the product of polynomials in the ultraproducts of Banach spaces $(X_i)_{\mathfrak U}$. We show that, under certain hypotheses, there is a strong relation between this problem and the same problem for the spaces $X_i$.'
address: 'IMAS-CONICET'
author:
- Jorge Tomás Rodríguez
title: On the norm of products of polynomials on ultraproducts of Banach spaces
---

Introduction
============

In this article we study the factor problem in the context of ultraproducts of Banach spaces. This problem can be stated as follows: for a Banach space $X$ over a field ${\mathbb K}$ (with ${\mathbb K}={\mathbb R}$ or ${\mathbb K}={\mathbb C}$) and natural numbers $k_1,\cdots, k_n$, find the optimal constant $M$ such that, given any set of continuous scalar polynomials $P_1,\cdots,P_n:X\rightarrow {\mathbb K}$ of degrees $k_1,\cdots,k_n$, the inequality $$\label{problema} M \Vert P_1 \cdots P_n\Vert \ge \, \Vert P_1 \Vert \cdots \Vert P_n \Vert$$ holds, where $\Vert P \Vert = \sup_{\Vert x \Vert_X=1} \vert P(x)\vert$. We also study a variant of the problem in which we require the polynomials to be homogeneous. Recall that a function $P:X\rightarrow {\mathbb K}$ is a continuous $k-$homogeneous polynomial if there is a continuous $k-$linear function $T:X^k\rightarrow {\mathbb K}$ for which $P(x)=T(x,\cdots,x)$. A function $Q:X\rightarrow {\mathbb K}$ is a continuous polynomial of degree $k$ if $Q=\sum_{l=0}^k Q_l$ with $Q_0$ a constant, $Q_l$ ($1\leq l \leq k$) an $l-$homogeneous polynomial and $Q_k \neq 0$. The factor problem has been studied by several authors. In [@BST], C. Benítez, Y. Sarantopoulos and A. Tonge proved that, for continuous polynomials, inequality (\[problema\]) holds with constant $$M=\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}$$ for any complex Banach space.
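For illustration, when $k_1=\cdots=k_n=1$ the formula gives $M=n^n$, and this value is already forced by the coordinate functionals on $\ell_1$. The following computation is ours (in the spirit of the extremal examples of [@BST]), included only to make the constant concrete:

```latex
% Worked instance (our computation): take P_j(x) = x_j on \ell_1, so that
% \Vert P_j \Vert = 1 and k_1 = \cdots = k_n = 1. By the AM--GM inequality,
% for \Vert x \Vert_1 = |x_1| + \cdots + |x_n| = 1,
\[
  |x_1 \cdots x_n| \;\le\; \Big( \frac{|x_1| + \cdots + |x_n|}{n} \Big)^{n}
  \;=\; \frac{1}{n^{n}},
\]
% with equality at x = (1/n, \ldots, 1/n). Hence \Vert P_1 \cdots P_n \Vert = n^{-n},
% so any constant M in (\ref{problema}) must satisfy
\[
  M \cdot n^{-n} \;\ge\; \Vert P_1 \Vert \cdots \Vert P_n \Vert = 1,
  \qquad \text{i.e.} \qquad
  M \;\ge\; n^{n} \;=\; \frac{(1+\cdots+1)^{(1+\cdots+1)}}{1^{1} \cdots 1^{1}} .
\]
```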
The authors also showed that this is the best universal constant, since there are polynomials on $\ell_1$ for which equality is attained. For complex Hilbert spaces and homogeneous polynomials, D. Pinasco proved in [@P] that the optimal constant is $$\nonumber M=\sqrt{\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}}.$$ This is a generalization of the result for linear functions obtained by Arias-de-Reyna in [@A]. In [@CPR], also for homogeneous polynomials, D. Carando, D. Pinasco and the author proved that for any complex $L_p(\mu)$ space, with $dim(L_p(\mu))\geq n$ and $1<p<2$, the optimal constant is $$\nonumber M=\sqrt[p]{\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}}.$$ This article is partially motivated by the work of M. Lindström and R. A. Ryan in [@LR]. In that article they studied, among other things, a problem similar to (\[problema\]): finding the so-called polarization constant of a Banach space. They found a relation between the polarization constant of the ultraproduct $(X_i)_{\mathfrak U}$ and the polarization constant of each of the spaces $X_i$. Our objective is to do an analogous analysis for our problem (\[problema\]). That is, to find a relation between the factor problem for the space $(X_i)_{\mathfrak U}$ and the factor problem for the spaces $X_i$. In Section 2 we give some basic definitions and results on ultraproducts needed for our discussion. In Section 3 we state and prove the main result of this paper, involving ultraproducts, and a similar result on biduals.

Ultraproducts
=============

We begin with some definitions, notations and basic results on filters, ultrafilters and ultraproducts. Most of the content presented in this section, as well as an exhaustive exposition on ultraproducts, can be found in Heinrich’s article [@H]. A filter ${\mathfrak U}$ on a set $I$ is a collection of nonempty subsets of $I$ that is closed under finite intersections and under supersets.
An ultrafilter is a maximal filter. In order to define the ultraproduct of Banach spaces, we are going to need some topological results first. Let ${\mathfrak U}$ be an ultrafilter on $I$ and $X$ a topological space. We say that the limit of $(x_i)_{i\in I} \subseteq X$ with respect to ${\mathfrak U}$ is $x$ if for every open neighborhood $U$ of $x$ the set $\{i\in I: x_i \in U\}$ is an element of ${\mathfrak U}$. We denote $$\displaystyle\lim_{i,{\mathfrak U}} x_i = x.$$ The following is Proposition 1.5 from [@H]. \[buenadef\] Let ${\mathfrak U}$ be an ultrafilter on $I$, $X$ a compact Hausdorff space and $(x_i)_{i\in I} \subseteq X$. Then, the limit of $(x_i)_{i\in I}$ with respect to ${\mathfrak U}$ exists and is unique. Later on, we are going to need the following basic lemma about ultrafilter limits, whose proof is an easy exercise in basic topology and ultrafilters. \[lemlimit\] Let ${\mathfrak U}$ be an ultrafilter on $I$ and $\{x_i\}_{i\in I}$ a family of real numbers. Assume that the limit of $(x_i)_{i\in I} \subseteq {\mathbb R}$ with respect to ${\mathfrak U}$ exists and let $r$ be a real number such that there is a subset $U$ of $\{i: r<x_i\}$ with $U\in {\mathfrak U}$. Then $$r \leq \displaystyle\lim_{i,{\mathfrak U}} x_i.$$ We are now able to define the ultraproduct of Banach spaces. Given an ultrafilter ${\mathfrak U}$ on $I$ and a family of Banach spaces $(X_i)_{i\in I}$, take the Banach space $\ell_\infty(I,X_i)$ of norm bounded families $(x_i)_{i\in I}$ with $x_i \in X_i$ and norm $$\Vert (x_i)_{i\in I} \Vert = \sup_{i\in I} \Vert x_i \Vert.$$ The ultraproduct $(X_i)_{\mathfrak U}$ is defined as the quotient space $\ell_\infty(I,X_i)/ \sim $ where $$(x_i)_{i\in I}\sim (y_i)_{i\in I} \Leftrightarrow \displaystyle\lim_{i,{\mathfrak U}} \Vert x_i - y_i \Vert = 0.$$ Observe that Proposition \[buenadef\] assures us that this limit exists for every pair $(x_i)_{i\in I}, (y_i)_{i\in I}\in \ell_\infty(I,X_i)$.
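As a simple illustration of ultrafilter limits (our example, not taken from [@H]), consider a bounded oscillating family of real numbers:

```latex
% Example (ours): let I = \mathbb{N} and x_i = (-1)^{i}. Since \mathfrak{U} is
% an ultrafilter, exactly one of the complementary sets
% E = \{ i \in \mathbb{N} : i \text{ even} \} and O = \mathbb{N} \setminus E
% belongs to \mathfrak{U}, and therefore
\[
  \lim_{i,{\mathfrak U}} (-1)^{i} \;=\;
  \begin{cases}
    \phantom{-}1 & \text{if } E \in {\mathfrak U}, \\
    -1           & \text{if } O \in {\mathfrak U}.
  \end{cases}
\]
% Indeed, if E \in \mathfrak{U}, then for every neighborhood U of 1 the set
% \{ i : x_i \in U \} contains E and hence lies in \mathfrak{U}. This is the
% mechanism by which Proposition [buenadef] assigns a limit to every bounded
% family of scalars.
```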
We denote the class of $(x_i)_{i\in I}$ in $(X_i)_{\mathfrak U}$ by $(x_i)_{\mathfrak U}$. The following result is the polynomial version of Definition 2.2 from [@H] (see also Proposition 2.3 from [@LR]). The reasoning behind it is almost the same. \[pollim\] Given two ultraproducts $(X_i)_{\mathfrak U}$, $(Y_i)_{\mathfrak U}$ and a family of continuous homogeneous polynomials $\{P_i\}_{i\in I}$ of degree $k$ with $$\displaystyle\sup_{i\in I} \Vert P_i \Vert < \infty,$$ the map $P:(X_i)_{\mathfrak U}\longrightarrow (Y_i)_{\mathfrak U}$ defined by $P((x_i)_{\mathfrak U})=(P_i(x_i))_{\mathfrak U}$ is a continuous homogeneous polynomial of degree $k$. Moreover, $\Vert P \Vert = \displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. If ${\mathbb K}={\mathbb C}$, the hypothesis of homogeneity can be omitted, but in this case the degree of $P$ can be lower than $k$. Let us start with the homogeneous case. Write $P_i(x)=T_i(x,\cdots,x)$ with $T_i$ a $k-$linear continuous function. Define $T:(X_i)_{\mathfrak U}^k \longrightarrow (Y_i)_{\mathfrak U}$ by $$T((x^1_i)_{\mathfrak U},\cdots,(x^k_i)_{\mathfrak U})=(T_i(x^1_i,\cdots ,x^k_i))_{\mathfrak U}.$$ $T$ is well defined since, by the polarization formula, $ \displaystyle\sup_{i\in I} \Vert T_i \Vert \leq \displaystyle\sup_{i\in I} \frac{k^k}{k!}\Vert P_i \Vert< \infty$. Since each $T_i$ is linear in each coordinate, so is $T$, and thus $T$ is a $k-$linear function. Given that $$P((x_i)_{\mathfrak U})=(P_i(x_i))_{\mathfrak U}=(T_i(x_i,\cdots,x_i))_{\mathfrak U}=T((x_i)_{\mathfrak U},\cdots,(x_i)_{\mathfrak U})$$ we conclude that $P$ is a $k-$homogeneous polynomial. To see the equality of the norms, for every $i$ choose a norm one element $x_i\in X_i$ at which $P_i$ almost attains its norm; from there it is easy to deduce that $\Vert P \Vert \geq \displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$.
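The polarization estimate invoked above is the classical one; for completeness, we record the standard statement (well known, not taken from [@H]):

```latex
% Polarization formula: if T is the symmetric k-linear form associated with
% the k-homogeneous polynomial P, then
\[
  T(x_1,\ldots,x_k) \;=\; \frac{1}{2^{k}\, k!}
  \sum_{\varepsilon_1,\ldots,\varepsilon_k = \pm 1}
  \varepsilon_1 \cdots \varepsilon_k \,
  P\Big( \sum_{j=1}^{k} \varepsilon_j x_j \Big).
\]
% Estimating each of the 2^k terms for \Vert x_j \Vert \le 1, so that
% \Vert \sum_j \varepsilon_j x_j \Vert \le k and
% |P(\sum_j \varepsilon_j x_j)| \le k^k \Vert P \Vert, yields the bound used
% above:
\[
  \Vert T \Vert \;\le\; \frac{k^{k}}{k!}\, \Vert P \Vert .
\]
```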
For the other inequality we use that $$|P((x_i)_{\mathfrak U})|= \displaystyle\lim_{i,{\mathfrak U}}|P_i(x_i)| \leq \displaystyle\lim_{i,{\mathfrak U}}\Vert P_i \Vert \Vert x_i \Vert^k = \left(\displaystyle\lim_{i,{\mathfrak U}}\Vert P_i \Vert \right)\Vert (x_i)_{\mathfrak U}\Vert^k .$$ Now we treat the non homogeneous case. For each $i\in I$ we write $P_i=\sum_{l=0}^kP_{i,l}$, with $P_{i,0}$ a constant and $P_{i,l}$ ($1\leq l \leq k$) an $l-$homogeneous polynomial. Take the direct sum $X_i \oplus_\infty {\mathbb C}$ of $X_i$ and ${\mathbb C}$, endowed with the norm $\Vert (x,\lambda) \Vert =\max \{ \Vert x \Vert, | \lambda| \}$. Consider the polynomial $\tilde{P}_i:X_i \oplus_\infty {\mathbb C}\rightarrow Y_i$ defined by $\tilde{P}_i(x,\lambda)=\sum_{l=0}^k P_{i,l}(x)\lambda^{k-l}$. The polynomial $\tilde{P}_i$ is a homogeneous polynomial of degree $k$ and, using the maximum modulus principle, it is easy to see that $\Vert P_i \Vert = \Vert \tilde{P}_i \Vert $. Then, by the homogeneous case, we have that the polynomial $\tilde{P}:(X_i \oplus_\infty {\mathbb C})_{\mathfrak U}\rightarrow (Y_i)_{\mathfrak U}$ defined as $\tilde{P}((x_i,\lambda_i)_{\mathfrak U})=(\tilde{P}_i(x_i,\lambda_i))_{\mathfrak U}$ is a continuous homogeneous polynomial of degree $k$ and $\Vert \tilde{P} \Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert \tilde{P}_i \Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. Via the identification $(X_i \oplus_\infty {\mathbb C})_{\mathfrak U}=(X_i)_{\mathfrak U}\oplus_\infty {\mathbb C}$ given by $(x_i,\lambda_i)_{\mathfrak U}=((x_i)_{\mathfrak U},\displaystyle\lim_{i,{\mathfrak U}} \lambda_i)$ we have that the polynomial $Q:(X_i)_{\mathfrak U}\oplus_\infty {\mathbb C}\rightarrow {\mathbb C}$ defined as $Q((x_i)_{\mathfrak U},\lambda)=\tilde{P}((x_i,\lambda)_{\mathfrak U})$ is a continuous homogeneous polynomial of degree $k$ and $\Vert Q\Vert =\Vert \tilde{P}\Vert$.
Then, the polynomial $P((x_i)_{\mathfrak U})=Q((x_i)_{\mathfrak U},1)$ is a continuous polynomial of degree at most $k$ and $\Vert P\Vert =\Vert Q\Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. If $\displaystyle\lim_{i,{\mathfrak U}} \Vert P_{i,k} \Vert =0 $ then the degree of $P$ is lower than $k$. Note that in the real case we can take the same approach for non homogeneous polynomials, but we would not have the same control over the norms.

Main result
=============

This section contains our main result. As mentioned above, this result is partially motivated by Theorem 3.2 from [@LR]. We follow similar ideas for the proof. First, let us fix some notation that will be used throughout this section. In this section, all polynomials considered are continuous scalar polynomials. Given a Banach space $X$, $B_X$ and $S_X$ denote the unit ball and the unit sphere of $X$ respectively, and $X^*$ is the dual of $X$. Given a polynomial $P$ on $X$, $deg(P)$ stands for the degree of $P$. For a Banach space $X$ let $D(X,k_1,\cdots,k_n)$ denote the smallest constant that satisfies (\[problema\]) for polynomials of degrees $k_1,\cdots,k_n$. We also define $C(X,k_1,\cdots,k_n)$ as the smallest constant that satisfies (\[problema\]) for homogeneous polynomials of degrees $k_1,\cdots,k_n$. Throughout this section most of the results will have two parts: the first involving the constant $C(X,k_1,\cdots,k_n)$ for homogeneous polynomials and the second involving the constant $D(X,k_1,\cdots,k_n)$ for arbitrary polynomials. Given that the proofs of both parts are almost identical, we will limit ourselves to proving only the second part of each result. Recall that a space $X$ has the $1+$ uniform approximation property if for all $n\in {\mathbb N}$ there exists $m=m(n)$ such that for every subspace $M\subset X$ with $dim(M)=n$ and every $\varepsilon > 0$ there is an operator $T\in \mathcal{L}(X,X)$ with $T|_M=id$, $rg(T)\leq m$ and $\Vert T\Vert \leq 1 + \varepsilon$ (i.e.
for every $\varepsilon > 0$ $X$ has the $1+\varepsilon$ uniform approximation property). \[main thm\] If ${\mathfrak U}$ is an ultrafilter on a set $I$ and $(X_i)_{\mathfrak U}$ is an ultraproduct of complex Banach spaces then 1. $C((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(C(X_i,k_1,\cdots,k_n)).$ 2. $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)).$ Moreover, if each $X_i$ has the $1+$ uniform approximation property, equality holds in both cases. In order to prove this theorem, some auxiliary lemmas will be needed. The first one is due to Heinrich [@H]. \[aprox\] Given an ultraproduct of Banach spaces $(X_i)_{\mathfrak U}$, if each $X_i$ has the $1+$ uniform approximation property then $(X_i)_{\mathfrak U}$ has the metric approximation property. When working with the constants $C(X,k_1,\cdots,k_n)$ and $D(X,k_1,\cdots,k_n)$, the following characterization may come in handy. \[alternat\] a) The constant $C(X,k_1,\cdots,k_n)$ is the biggest constant $M$ such that given any $\varepsilon >0$ there exists a set of homogeneous continuous polynomials $\{P_j\}_{j=1}^n$ with $deg(P_j)\leq k_j$ such that $$\label{condition} M\left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \Vert P_j \Vert.$$ b\) The constant $D(X,k_1,\cdots,k_n)$ is the biggest constant satisfying the same for arbitrary polynomials. To prove this lemma it is enough to see that $D(X,k_1,\cdots,k_n)$ is decreasing as a function of the degrees $k_1,\cdots, k_n$ and use that the infimum is the greatest lower bound. \[rmkalternat\] It is clear that in Lemma \[alternat\] we can take the polynomials $\{P_j\}_{j=1}^n$ with $deg(P_j)= k_j$ instead of $deg(P_j)\leq k_j$. Later on we will use both versions of the Lemma. One last lemma is needed for the proof of the Main Theorem. \[normas\] Let $P$ be a (not necessarily homogeneous) polynomial on a complex Banach space $X$ with $deg(P)=k$.
For any point $x\in X$ $$|P(x)|\leq \max\{\Vert x \Vert, 1\}^k \Vert P\Vert . \nonumber$$ If $P$ is homogeneous the result is rather obvious since we have the inequality $$|P(x)|\leq \Vert x \Vert^k \Vert P\Vert . \nonumber$$ Suppose that $P=\sum_{l=0}^k P_l$ with $P_l$ an $l-$homogeneous polynomial. Consider the space $X \oplus_\infty {\mathbb C}$ and the polynomial $\tilde{P}:X \oplus_\infty {\mathbb C}\rightarrow {\mathbb C}$ defined by $\tilde{P}(x,\lambda)=\sum_{l=0}^k P_l(x)\lambda^{k-l}$. The polynomial $\tilde{P}$ is homogeneous of degree $k$ and $\Vert P \Vert = \Vert \tilde{P} \Vert $. Then, using that $\tilde{P}$ is homogeneous we have $$|P(x)|=|\tilde{P} (x,1)| \leq \Vert (x,1) \Vert^k \Vert \tilde{P} \Vert = \max\{\Vert x \Vert, 1\}^k \Vert P\Vert . \nonumber$$ We are now able to prove our main result. Throughout this proof we regard the space $({\mathbb C})_{\mathfrak U}$ as ${\mathbb C}$ via the identification $(\lambda_i)_{\mathfrak U}=\displaystyle\lim_{i,{\mathfrak U}} \lambda_i$. First, we are going to see that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$. To do this we only need to prove that $\displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$ satisfies (\[condition\]). Given $\varepsilon >0$ we need to find a set of polynomials $\{P_{j}\}_{j=1}^n$ on $(X_i)_{\mathfrak U}$ with $deg(P_{j})\leq k_j$ such that $$\displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_j \right \Vert .$$ By Remark \[rmkalternat\] we know that for each $i\in I$ there is a set of polynomials $\{P_{i,j}\}_{j=1}^n$ on $X_i$ with $deg(P_{i,j})=k_j$ such that $$D(X_i,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_{i,j} \right \Vert \leq (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert P_{i,j} \right \Vert.$$ Replacing $P_{i,j}$ with $P_{i,j}/\Vert P_{i,j} \Vert$ we may assume that $\Vert P_{i,j} \Vert =1$. 
Define the polynomials $\{P_j\}_{j=1}^n$ on $(X_i)_{\mathfrak U}$ by $P_j((x_i)_{\mathfrak U})=(P_{i,j}(x_i))_{\mathfrak U}$. Then, by Proposition \[pollim\], $deg(P_j)\leq k_j$ and $$\begin{aligned} \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)) \left \Vert \prod_{j=1}^{n} P_{j} \right \Vert &=& \displaystyle\lim_{i,{\mathfrak U}} \left(D(X_i,k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{i,j} \right \Vert \right) \nonumber \\ &\leq& \displaystyle\lim_{i,{\mathfrak U}}\left((1+\varepsilon)\prod_{j=1}^{n}\Vert P_{i,j} \Vert \right)\nonumber \\ &=& (1+\varepsilon)\prod_{j=1}^{n} \Vert P_{j} \Vert \nonumber \end{aligned}$$ as desired. Proving that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \leq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$ when each $X_i$ has the $1+$ uniform approximation property is not as straightforward. Given $\varepsilon >0$, let $\{P_j\}_{j=1}^n$ be a set of polynomials on $(X_i)_{\mathfrak U}$ with $deg(P_j)=k_j$ such that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon)\prod_{j=1}^{n} \Vert P_j \Vert .$$ Let $K\subseteq B_{(X_i)_{\mathfrak U}}$ be the finite set $K=\{x_1,\cdots, x_n\}$ where $x_j$ is such that $$|P_j(x_j)| > \Vert P_j\Vert (1- \varepsilon) \mbox{ for }j=1,\cdots, n.$$ Since each $X_i$ has the $1+$ uniform approximation property, Lemma \[aprox\] implies that $(X_i)_{\mathfrak U}$ has the metric approximation property. Therefore, there exists a finite rank operator $S:(X_i)_{\mathfrak U}\rightarrow (X_i)_{\mathfrak U}$ such that $\Vert S\Vert \leq 1 $ and $$\Vert P_j - P_j \circ S \Vert_K< |P_j(x_j)|\varepsilon \mbox{ for }j=1,\cdots, n.$$ Now, define the polynomials $Q_1,\cdots, Q_n$ on $(X_i)_{\mathfrak U}$ as $Q_j=P_j\circ S$.
Then $$\left\Vert \prod_{j=1}^n Q_j \right\Vert \leq \left\Vert \prod_{j=1}^n P_j \right\Vert$$ and $$\Vert Q_j\Vert_K > | P_j(x_j)|-\varepsilon | P_j(x_j)| =| P_j(x_j)| (1-\varepsilon) \geq \Vert P_j \Vert(1-\varepsilon)^2.$$ The construction of these polynomials is a slight variation of Lemma 3.1 from [@LR]. We have the following inequality for the product of the polynomials $\{Q_j\}_{j=1}^n$ $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert &\leq& D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{j} \right \Vert \nonumber \\ &\leq& (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_{j} \right \Vert . \label{desq}\end{aligned}$$ Since $S$ is a finite rank operator, the polynomials $\{ Q_j\}_{j=1}^n$ have the advantage of being finite type polynomials. This will allow us to construct polynomials on $(X_i)_{\mathfrak U}$ which are limits of polynomials on the spaces $X_i$. For each $j$ write $Q_j=\sum_{t=1}^{m_j}(\psi_{j,t})^{r_{j,t}}$ with $\psi_{j,t}\in (X_i)_{\mathfrak U}^*$, and consider the spaces $N=\rm{span} \{x_1,\cdots,x_n\}\subset (X_i)_{\mathfrak U}$ and $M=\rm{span} \{\psi_{j,t} \}\subset (X_i)_{\mathfrak U}^*$. By the local duality of ultraproducts (see Theorem 7.3 from [@H]) there exists a $(1+\varepsilon)-$isomorphism $T:M\rightarrow (X_i^*)_{\mathfrak U}$ such that $$JT(\psi)(x)=\psi(x) \mbox{ } \forall x\in N, \mbox{ } \forall \psi\in M$$ where $J:(X_i^*)_{\mathfrak U}\rightarrow (X_i)_{\mathfrak U}^*$ is the canonical embedding. Let $\phi_{j,t}=JT(\psi_{j,t})$ and consider the polynomials $\bar{Q}_1,\cdots, \bar{Q}_n$ on $(X_i)_{\mathfrak U}$ with $\bar{Q}_j=\sum_{t=1}^{m_j}(\phi_{j,t})^{r_{j,t}}$.
Clearly $\bar{Q}_j$ is equal to $Q_j$ on $N$ and $K\subseteq N$, therefore we have the following lower bound for the norm of each polynomial $$\Vert \bar{Q}_j \Vert \geq \Vert \bar{Q}_j \Vert_K = \Vert Q_j \Vert_K >\Vert P_j \Vert(1-\varepsilon)^2. \label{desbarq}$$ Now, let us find an upper bound for the norm of the product $\Vert \prod_{j=1}^n \bar{Q}_j \Vert$. Let $x=(x_i)_{\mathfrak U}$ be any point in $B_{(X_i)_{\mathfrak U}}$. Then, we have $$\begin{aligned} \left|\prod_{j=1}^n \bar{Q}_j(x)\right| &=& \left|\prod_{j=1}^n \sum_{t=1}^{m_j}(\phi_{j,t} (x))^{r_{j,t}}\right|=\left|\prod_{j=1}^n \sum_{t=1}^{m_j} (JT\psi_{j,t}(x))^{r_{j,t}} \right| \nonumber \\ &=& \left|\prod_{j=1}^n \sum_{t=1}^{m_j}((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}\right|.\nonumber\end{aligned}$$ Since $(JT)^*\hat{x}\in M^*$, $\Vert (JT)^*\hat{x}\Vert \leq \Vert JT \Vert \Vert x \Vert \leq \Vert J \Vert \Vert T \Vert \Vert x \Vert< 1 + \varepsilon$ and $M^*=\frac{(X_i)_{\mathfrak U}^{**}}{M^{\bot}}$, we can choose $z^{**}\in (X_i)_{\mathfrak U}^{**}$ with $\Vert z^{**} \Vert < \Vert (JT)^*\hat{x}\Vert+\varepsilon < 1+2\varepsilon$, such that $\prod_{j=1}^n \sum_{t=1}^{m_j} ((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}= \prod_{j=1}^n \sum_{t=1}^{m_j} (z^{**}(\psi_{j,t}))^{r_{j,t}}$. By Goldstine’s Theorem there exists a net $\{z_\alpha\} \subseteq (X_i)_{\mathfrak U}$ $w^*-$convergent to $z^{**}$ in $(X_i)_{\mathfrak U}^{**}$ with $\Vert z_\alpha \Vert = \Vert z^{**}\Vert$. In particular, $\psi_{j,t}(z_\alpha)$ converges to $z^{**}(\psi_{j,t})$. If we call ${\mathbf k}= \sum k_j$, since $\Vert z_\alpha \Vert< (1+2\varepsilon)$, by Lemma \[normas\], we have $$\left \Vert \prod_{j=1}^{n} Q_j \right \Vert (1+2\varepsilon)^{\mathbf k}\geq \left|\prod_{j=1}^n Q_j(z_\alpha)\right| = \left|\prod_{j=1}^n \sum_{t=1}^{m_j} ((\psi_{j,t})(z_\alpha))^{r_{j,t}}\right| .
\label{usecomplex}$$ Combining this with the fact that $$\begin{aligned} \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} ((\psi_{j,t})(z_\alpha))^{r_{j,t}}\right| &\longrightarrow& \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} (z^{**}(\psi_{j,t}))^{r_{j,t}}\right|\nonumber\\ &=& \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} ((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}\right| = \left|\prod_{j=1}^{n} \bar{Q}_j(x)\right|\nonumber\end{aligned}$$ we conclude that $\left \Vert \prod_{j=1}^{n} Q_j \right \Vert (1+2\varepsilon)^{\mathbf k}\geq |\prod_{j=1}^{n} \bar{Q}_j(x)|$. Since the choice of $x$ was arbitrary we arrive at the following inequality $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \bar{Q}_j \right \Vert &\leq& (1+2\varepsilon)^{\mathbf k}D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_j \right \Vert \nonumber \\ &\leq& (1+2\varepsilon)^{\mathbf k}(1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_{j} \right \Vert \label{desbarq2} \\ &<& (1+2\varepsilon)^{\mathbf k}(1+\varepsilon) \frac{\prod_{j=1}^{n} \Vert \bar{Q}_j \Vert }{(1-\varepsilon)^{2n}} .\label{desbarq3}\end{aligned}$$ In (\[desbarq2\]) and (\[desbarq3\]) we use (\[desq\]) and (\[desbarq\]) respectively. The polynomials $\bar{Q}_j$ are not only of finite type; they are also generated by elements of $(X_i^*)_{\mathfrak U}$. This will allow us to write them as limits of polynomials on the spaces $X_i$. For any $i$, consider the polynomials $\bar{Q}_{i,1},\cdots,\bar{Q}_{i,n}$ on $X_i$ defined by $\bar{Q}_{i,j}= \displaystyle\sum_{t=1}^{m_j} (\phi_{i,j,t})^{r_{j,t}}$, where the functionals $\phi_{i,j,t}\in X_i^*$ are such that $(\phi_{i,j,t})_{\mathfrak U}=\phi_{j,t}$. Then $\bar{Q}_j((x_i)_{\mathfrak U})=\displaystyle\lim_{i,{\mathfrak U}} \bar{Q}_{i,j}(x_i)$ for every $(x_i)_{\mathfrak U} \in (X_i)_{\mathfrak U}$ and, by Proposition \[pollim\], $\Vert \bar{Q}_j \Vert = \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert$.
Therefore $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert &=& D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{j} \right \Vert \nonumber \\ &<& \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \bar{Q}_{j} \Vert \nonumber \\ &=& \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert . \nonumber \end{aligned}$$ To simplify the notation let us call $\lambda = \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} $. Take $L>0$ such that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert < L < \lambda \prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert . \nonumber$$ Since $(-\infty, \frac{L}{D((X_i)_{\mathfrak U},k_1,\cdots,k_n)})$ and $(\frac{L}{\lambda},+\infty)$ are neighborhoods of $\displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert$ and $\prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert$ respectively, and $\prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert= \displaystyle\lim_{i,{\mathfrak U}} \prod_{j=1}^{n} \Vert \bar{Q}_{i,j} \Vert$, by definition of $\displaystyle\lim_{i,{\mathfrak U}}$, the sets $$A=\{i_0: D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{i_0,j} \right \Vert <L\} \mbox{ and }B=\{i_0: \lambda \prod_{j=1}^{n} \Vert \bar{Q}_{i_0,j} \Vert > L \}$$ are elements of ${\mathfrak U}$. Since ${\mathfrak U}$ is closed under finite intersections, $A\cap B\in {\mathfrak U}$.
If we take any element $i_0 \in A\cap B$ then, for any $\delta >0$, we have that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{i_0,j} \right \Vert \frac{1}{\lambda}\leq \frac{L}{\lambda} \leq \prod_{j=1}^{n} \Vert \bar{Q}_{i_0,j} \Vert < (1+ \delta)\prod_{j= 1}^{n} \Vert \bar{Q}_{i_0,j} \Vert. \nonumber$$ Then, since $\delta$ is arbitrary, the constant $D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\frac{1}{\lambda}$ satisfies (\[condition\]) for the space $X_{i_0}$ and therefore, by Lemma \[alternat\], $$\frac{1}{\lambda}D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \leq D(X_{i_0},k_1,\cdots,k_n). \nonumber$$ This holds true for any $i_0$ in $A\cap B$. Since $A\cap B \in {\mathfrak U}$, by Lemma \[lemlimit\], $\frac{1}{\lambda}D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\leq \displaystyle\lim_{i,{\mathfrak U}} D(X_i,k_1,\cdots,k_n) $. Using that $\lambda \rightarrow 1$ when $\varepsilon \rightarrow 0$ we conclude that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\leq \displaystyle\lim_{i,{\mathfrak U}} D(X_i,k_1,\cdots,k_n).$ Similar to Corollary 3.3 from [@LR], a straightforward corollary of our main result is that for any complex Banach space $X$ with the $1+$ uniform approximation property $C(X,k_1,\cdots,k_n)=C(X^{**},k_1,\cdots,k_n)$ and $D(X,k_1,\cdots,k_n)=D(X^{**},k_1,\cdots,k_n)$. Using that $X^{**}$ is $1-$complemented in some adequate ultrapower $(X)_{{\mathfrak U}}$ the result is rather obvious. For a construction of the adequate ultrafilter see [@LR]. But following the previous proof, and using the principle of local reflexivity applied to $X^*$ instead of the local duality of ultraproducts, we can prove the following stronger result. Let $X$ be a complex Banach space. Then 1. $C(X^{**},k_1,\cdots,k_n)\geq C(X,k_1,\cdots,k_n).$ 2. $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n).$ Moreover, if $X^{**}$ has the metric approximation property, equality holds in both cases.
The inequality $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n)$ is a corollary of Theorem \[main thm\] (using the adequate ultrafilter mentioned above). Let us prove that if $X^{**}$ has the metric approximation property then $D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n)$. Given $\varepsilon >0$, let $\{P_j\}_{j=1}^n$ be a set of polynomials on $X^{**}$ with $deg(P_j)=k_j$ such that $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{j} \right \Vert \leq (1+\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert .\nonumber$$ As in the proof of Theorem \[main thm\], since $X^{**}$ has the metric approximation property, we can construct finite type polynomials $Q_1,\cdots,Q_n$ on $X^{**}$ with $deg(Q_j)=k_j$ and $\Vert Q_j \Vert_K \geq \Vert P_j \Vert (1-\varepsilon)^2$ for some finite set $K\subseteq B_{X^{**}}$, such that $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert < (1+\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert . \nonumber$$ Suppose that $Q_j=\sum_{t=1}^{m_j}(\psi_{j,t})^{r_{j,t}}$ and consider the spaces $N=\rm{span} \{K\}$ and $M=\rm{span} \{\psi_{j,t} \}$. By the principle of local reflexivity (see [@D]), applied to $X^*$ (thinking of $N$ as a subspace of $(X^*)^*$ and $M$ as a subspace of $(X^*)^{**}$), there is a $(1+\varepsilon)-$isomorphism $T:M\rightarrow X^*$ such that $$JT(\psi)(x)=\psi(x) \mbox{ } \forall x\in N, \mbox{ } \forall \psi\in M\cap X^*=M,$$ where $J:X^*\rightarrow X^{***}$ is the canonical embedding. Let $\phi_{j,t}=JT(\psi_{j,t})$ and consider the polynomials $\bar{Q}_1,\cdots, \bar{Q}_n$ on $X^{**}$ defined by $\bar{Q}_j=\sum_{t=1}^{m_j}(\phi_{j,t})^{r_{j,t}}$. Following the proof of the Main Theorem, one arrives at the inequality $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \bar{Q_j} \right \Vert < (1+ \delta) \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \bar{Q_j} \Vert \nonumber$$ for every $\delta >0$.
Since each $\bar{Q}_j$ is generated by elements of $J(X^*)$, by Goldstine’s Theorem, the restriction of $\bar{Q}_j$ to $X$ has the same norm, and the same is true for $\prod_{j=1}^{n} \bar{Q_j}$. Then $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \left.\bar{Q_j}\right|_X \right \Vert < (1+ \delta) \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \left.\bar{Q_j}\right|_X \Vert. \nonumber$$ By Lemma \[alternat\] we conclude that $$\frac{(1-\varepsilon)^{2n}}{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n).$$ Given that the choice of $\varepsilon$ is arbitrary and that $\frac{(1-\varepsilon)^{2n}}{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}} $ tends to $1$ when $\varepsilon$ tends to $0$, we conclude that $D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n)$. Note that in the proof of the Main Theorem the only parts where we need the spaces to be complex Banach spaces are at the beginning, where we use Proposition \[pollim\], and in the inequality (\[usecomplex\]), where we use Lemma \[normas\]. But both results hold true for homogeneous polynomials on a real Banach space. Then, copying the proof of the Main Theorem we obtain the following result for real spaces. If ${\mathfrak U}$ is an ultrafilter on a set $I$ and $(X_i)_{\mathfrak U}$ is an ultraproduct of real Banach spaces then $$C((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(C(X_i,k_1,\cdots,k_n)).$$ If in addition each $X_i$ has the $1+$ uniform approximation property, equality holds. We can also obtain a similar result for the bidual of a real space. Let $X$ be a real Banach space. Then 1. $C(X^{**},k_1,\cdots,k_n)\geq C(X,k_1,\cdots,k_n).$ 2. $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n).$ If $X^{**}$ has the metric approximation property, equality holds in $(a)$.
The proof of item $(a)$ is the same as in the complex case, so we limit ourselves to proving $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n)$. To do this we will show that, given an arbitrary $\varepsilon >0$, there is a set of polynomials $\{P_{j}\}_{j=1}^n$ on $X^{**}$ with $deg(P_{j})\leq k_j$ such that $$D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_j \right \Vert .$$ Take $\{Q_{j}\}_{j=1}^n$ a set of polynomials on $X$ with $deg(Q_j)=k_j$ such that $$D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert \leq (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert Q_{j} \right \Vert.$$ Consider now the polynomials $P_j=AB(Q_j)$, where $AB(Q_j)$ is the Aron–Berner extension of $Q_j$ (for details on this extension see [@AB] or [@Z]). Since $AB\left( \prod_{j=1}^n Q_j \right)=\prod_{j=1}^n AB(Q_j)$, using that the Aron–Berner extension preserves the norm (see [@DG]) we have $$\begin{aligned} D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_{j} \right \Vert &=& D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert\nonumber \\ &\leq& (1 +\varepsilon)\prod_{j=1}^{n} \left\Vert Q_{j} \right\Vert \nonumber \\ &=& (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert \nonumber \end{aligned}$$ as desired. As a final remark, we mention two types of spaces to which the results of this section can be applied. Corollary 9.2 from [@H] states that any Orlicz space $L_\Phi(\mu)$, with $\mu$ a finite measure and $\Phi$ an Orlicz function with regular variation at $\infty$, has the $1+$ uniform projection property, which is stronger than the $1+$ uniform approximation property. In Section 2 of [@PeR], A. Pełczyński and H.
Rosenthal proved that any ${\mathcal L}_{p,\lambda}-$space ($1\leq \lambda < \infty$) has the $1+\varepsilon-$uniform projection property for every $\varepsilon>0$ (which is stronger than the $1+\varepsilon-$uniform approximation property); therefore, any ${\mathcal L}_{p,\lambda}-$space has the $1+$ uniform approximation property. Acknowledgment {#acknowledgment .unnumbered} ============== I would like to thank Professor Daniel Carando both for encouraging me to write this article and for his comments and remarks, which improved its presentation and content. J. Arias-de-Reyna. *Gaussian variables, polynomials and permanents*. Linear Algebra Appl. 285 (1998), 107–114. R. M. Aron and P. D. Berner. *A Hahn–Banach extension theorem for analytic mappings*. Bull. Soc. Math. France 106 (1978), 3–24. C. Benítez, Y. Sarantopoulos and A. Tonge. *Lower bounds for norms of products of polynomials*. Math. Proc. Cambridge Philos. Soc. 124 (1998), 395–408. D. Carando, D. Pinasco and J. T. Rodríguez. *Lower bounds for norms of products of polynomials on $L_p$ spaces*. Studia Math. 214 (2013), 157–166. A. M. Davie and T. W. Gamelin. *A theorem on polynomial-star approximation*. Proc. Amer. Math. Soc. 106 (1989), 351–356. D. W. Dean. *The equation $L(E,X^{**})=L(E,X)^{**}$ and the principle of local reflexivity*. Proc. Amer. Math. Soc. 40 (1973), 146–148. S. Heinrich. *Ultraproducts in Banach space theory*. J. Reine Angew. Math. 313 (1980), 72–104. M. Lindström and R. A. Ryan. *Applications of ultraproducts to infinite dimensional holomorphy*. Math. Scand. 71 (1992), 229–242. A. Pełczyński and H. Rosenthal. *Localization techniques in $L_p$ spaces*. Studia Math. 52 (1975), 265–289. D. Pinasco. *Lower bounds for norms of products of polynomials via Bombieri inequality*. Trans. Amer. Math. Soc. 364 (2012), 3993–4010. I. Zalduendo. *Extending polynomials on Banach spaces – a survey*. Rev. Un. Mat. Argentina 46 (2005), 45–72.
--- abstract: 'Dark Matter detectors with directional sensitivity have the potential of yielding an unambiguous positive observation of WIMPs as well as discriminating between galactic Dark Matter halo models. In this article, we introduce the motivation for directional detectors, discuss the experimental techniques that make directional detection possible, and review the status of the experimental effort in this field.' address: - 'Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA' - 'Temple University, 1900 N. 13th Street, Philadelphia, PA 19122, USA' author: - G Sciolla - C J Martoff bibliography: - 'all\_DM.bib' title: Gaseous Dark Matter Detectors --- Introduction ============ Astronomical and cosmological observations have recently shown that Dark Matter (DM) is responsible for 23% of the energy budget of the Universe and 83% of its mass [@Hinshaw2008]. The most promising candidate for Dark Matter is the so-called Weakly Interacting Massive Particle (WIMP). The existence of WIMPs is independently suggested by considerations of Big Bang cosmology and theoretical supersymmetric particle phenomenology [@LeeWeinberg; @Weinberg82; @Jungman1996]. Over the years, many direct detection experiments have been performed to search for nuclear recoils due to elastic scattering of WIMPs off the nuclei in the active volume of the detector. The main challenge for these experiments is to suppress the backgrounds that mimic WIMP-induced nuclear recoils. Today’s leading experiments have achieved excellent rejection of electromagnetic backgrounds, i.e., photons, electrons and alpha particles, that have a distinct signature in the detector.
However, there are sources of background for which the detector response is nearly identical to that of a WIMP-induced recoil, such as the coherent scattering of neutrinos from the sun [@Monroe2007], or the elastic scattering of neutrons produced either by natural radioactivity or by high-energy cosmic rays. While neutron and neutrino interactions do not limit today’s experiments, they are expected to become dangerous sources of background when the scale of DM experiments grows to fiducial masses of several tons. In traditional counting experiments, the presence of such backgrounds could undermine the unambiguous identification of a Dark Matter signal, because neutrinos are impossible to suppress by shielding and underground neutron backgrounds are notoriously difficult to predict [@Mei2006]. An unambiguous positive identification of a Dark Matter signal even in the presence of unknown amounts of irreducible backgrounds could still be achieved if one could correlate the observation of a nuclear recoil in the detector with some unique astrophysical signature which no background could mimic. This is the idea that motivates directional detection of Dark Matter. The Dark Matter Wind --------------------- The observed rotation curve of our Galaxy suggests that at the galactic radius of the sun the galactic potential has a significant contribution from Dark Matter. The Dark Matter distribution in our Galaxy, however, is poorly constrained. A commonly used DM distribution, the standard dark halo model [@SmithLewin1990], assumes a non-rotating, isothermal sphere extending out to 50 kpc from the galactic center. The DM velocity is described by a Maxwell-Boltzmann distribution with dispersion $\sigma_v=155$ km/s. Concentric with the DM halo is the galactic disk of luminous ordinary matter, rotating with respect to the halo, with an average orbital velocity of about 220 km/s at the radius of the solar system.
Therefore, in this model, an observer on Earth would see a wind of DM particles with average velocity of 220 km/s. The Dark Matter wind creates two observable effects. The first was pointed out in 1986 by Drukier, Freese, and Spergel [@Drukier1986], who predicted that the Earth’s motion relative to the galactic halo leads to an annual modulation of the rates of interactions observed above a certain threshold in direct detection experiments. In its annual rotation around the sun, the Earth’s orbital velocity has a component that is anti-parallel to the DM wind during the summer, and parallel to it during the winter. As a result, the apparent velocity of the DM wind will increase (decrease) by about 10% in summer (winter), leading to a corresponding increase (decrease) of the observed rates in DM detectors. Unfortunately, this effect is difficult to detect because the seasonal modulation is expected to be small (a few %) and very hard to disentangle from other systematic effects, such as the seasonal dependence of background rates. These experimental difficulties cast a shadow on the recent claimed observation of the yearly asymmetry by the DAMA/LIBRA collaboration [@Bernabei2008]. A larger modulation of the WIMP signal was pointed out by Spergel [@Spergel] in 1988. The Earth spins around its axis with a period of 24 sidereal hours. Because its rotation axis is oriented at 48$^\circ$ with respect to the direction of the DM wind, an observer on Earth sees the average direction of the WIMPs change by 96$^\circ$ every 12 sidereal hours. This modulation in arrival direction should be resolvable by a Dark Matter directional detector, i.e., a detector able to determine the direction of the DM particles. Most importantly, no known background is correlated with the direction of the DM wind. Therefore, a directional detector could hold the key to the unambiguous observation of Dark Matter.
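The 96$^\circ$ figure quoted above follows from simple geometry: in laboratory coordinates the mean WIMP arrival direction sweeps a cone of half-angle 48$^\circ$ around the Earth's rotation axis once per sidereal day, so directions 12 sidereal hours apart differ by twice the cone half-angle. A minimal numerical check of this statement (the helper names are ours; the only physical input is the 48$^\circ$ angle quoted above):

```python
import math

def wind_direction(hour_angle_deg, cone_half_angle_deg=48.0):
    """Unit vector of the mean WIMP wind in lab coordinates.

    The wind direction sweeps a cone of the given half-angle around the
    Earth's rotation axis (taken as the z axis) once per sidereal day.
    """
    t = math.radians(cone_half_angle_deg)
    p = math.radians(hour_angle_deg)
    return (math.sin(t) * math.cos(p), math.sin(t) * math.sin(p), math.cos(t))

def angle_between(u, v):
    """Angle between two unit vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# Wind directions 12 sidereal hours (180 degrees of rotation) apart:
swing = angle_between(wind_direction(0.0), wind_direction(180.0))
# swing = 2 * 48 = 96 degrees, as stated in the text
```

The swing is independent of where on the cone one starts; only the cone half-angle matters.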
In addition to background rejection, the determination of the direction of the arrival of Dark Matter particles can discriminate [@Copi1999; @Vergados2003; @Morgan2004; @Freese2005; @Alenazi2008] between various DM halo distributions including the standard dark halo model, models with streams of WIMPs, the Sikivie late-infall halo model [@Sikivie1999; @Tkachev1997; @Sikivie1995], and other anisotropic models. The discrimination power is further enhanced if a determination of the sense as well as the direction of WIMPs is possible [@Green2007]. This capability makes directional detectors unique observatories for underground WIMP astronomy. Directional Dark Matter Detection ----------------------------------- When Dark Matter particles interact with regular matter, they scatter elastically off the atoms and generate nuclear recoils with typical energies $E_R$ of a few tens of keV, as explained in more detail in section \[NuclearRecoils\]. The direction of the recoiling nucleus encodes the direction of the incoming DM particle. To observe the daily modulation in the direction of the DM wind, an angular resolution of 20–30 degrees in the reconstruction of the recoil nucleus is sufficient, because the intrinsic spread in direction of the DM wind is $\approx$ 45 degrees. Assuming that sub-millimeter tracking resolution can be achieved, the length of a recoil track has to be at least 1–2 mm, which can be obtained by using a very dilute gas as a target material. An ideal directional detector should provide a 3-D vector reconstruction of the recoil track with a spatial resolution of a few hundred microns in each coordinate, and combine a very low energy threshold with an excellent background rejection capability. Such a detector would be able to reject isotropy of the recoil direction, and hence identify the signature of a WIMP wind, with just a handful of events [@Morgan2004].
More recently, Green and Morgan [@Green2007] studied how the number of events necessary to detect the WIMP wind depends on the detector performance in terms of energy threshold, background rates, 2-D versus 3-D reconstruction of the nuclear recoil, and ability to determine the sense of the direction by discriminating between the “head” and “tail” of the recoil track. The default configuration used for this study assumes a CS$_2$ gaseous TPC running at 0.05 bar using 200 $\mu$m pixel readout providing 3-D reconstruction of the nuclear recoil and “head-tail” discrimination. The energy threshold is assumed to be 20 keV, with perfect background rejection. In such a configuration, 7 events would be sufficient to establish observation of the WIMP wind at 90% C.L. In the presence of background with S/N=1, the number of events necessary to reject isotropy would increase by a factor of 2. If only 2-D reconstruction is available, the required number of events doubles compared to the default configuration. “Head-tail” discrimination turns out to be the most important capability: if the sense cannot be measured, the number of events necessary to observe the effects of the WIMP wind increases by one order of magnitude. Nuclear Recoils in Gaseous Detectors {#NuclearRecoils} ==================================== To optimize the design of gaseous detectors for directional detection of Dark Matter one must be able to calculate the recoil atom energy spectrum expected for a range of WIMP parameters and halo models. The detector response in the relevant energy range must also be predictable. The response will be governed first and foremost by the track length and characteristics (multiple scattering) as a function of recoil atom type and energy. Since gas detectors require ionization for detection, design also requires knowledge of the ionization yield in gas and its distribution along the track as a function of recoil atom type and energy, and possibly electric field.
The large momentum transfer necessary to produce a detectable recoil in gas implies that the scattering atom can be treated as a free particle, making calculations of the recoil spectrum essentially independent of whether the target is a solid, liquid, or gas. An estimate of the maximum Dark Matter recoil energy for simple halo models is given by the kinematically allowed energy transfer from an infinitely heavy halo WIMP with velocity equal to the galactic escape speed. This speed is locally about 500–600 km/s [@rave]; WIMPs with higher velocities than this would not be gravitationally bound in the halo and would presumably be rare. The corresponding maximum energy transfer amounts to $<$ 10 keV/nucleon. The integrated rate will be concentrated at lower energies than this, at least in halo models such as the isothermal sphere. For that model, the recoil energy ($E_R$) distribution [@SmithLewin1990] is proportional to $\exp(-E_R/E_I)$, with $E_I$ a constant that depends on the target and WIMP masses and the halo model. For a 100 GeV WIMP and the isothermal halo model parameters of Ref. [@SmithLewin1990], $E_I / A$ varies from 1.05 to 0.2 keV/nucleon for target mass numbers from 1 to 131. These are very low energy particles, well below the Bragg Peak at $\sim$200–800 keV/A. In this regime dE/dx *decreases* with decreasing energy, and the efficiency of ionization is significantly reduced. Lindhard Model for Low-Energy Stopping -------------------------------------- The stopping process for such low energy particles in homoatomic[^1] substances was treated by Lindhard, Scharff, and Schiott [@Lindhard1963; @Lindhard-int] (LSS). This treatment has stood the test of time and experiment, making it worthwhile to summarize the results here. As is now well-known, the primary energy loss mechanisms for low energy particles in matter can be divided into “nuclear stopping”, due to atom-atom scattering, and “electronic stopping”, due to atom-electron scattering.
These mechanisms refer only to the initial interaction causing the incident particle to lose energy. Nuclear stopping eventually contributes to electronic excitations and ionization, and electronic stopping eventually contributes to thermal excitations [@Lindhard-int]. In Ref. [@Lindhard1963] the stopping is described using a Thomas-Fermi atom model to obtain numerical results for universal stopping-power curves in terms of two variables, the scaled energy $\epsilon=E_R/E_{TF}$, and the scaled range $\rho=R/R_{TF}$, where $E_R$ and $R$ are respectively the energy and the stopping distance of the recoil, and $E_{TF}$ and $R_{TF}$ are scale factors[^2]. In Ref. [@Lindhard1963] it was shown that nuclear stopping dominates in the energy range where most of the rate for Dark Matter detection lies. This can be seen as follows. The scaled variables $\epsilon$ and $\rho$ depend algebraically on the atomic numbers and mass numbers of the incident and target particles. The scale factor $E_{TF}$ corresponds to 0.45 keV/nucleon for homoatomic recoils in Carbon, 1.7 keV/nucleon for Ar in Ar and 6.9 keV/nucleon for Xe in Xe. Nuclear stopping $\frac{d\epsilon_n}{d\rho}$ was found to be larger than the electronic stopping $\frac{d\epsilon_e}{d\rho}$ for $\epsilon < 1.6$, which covers the energy range $0 < E_R < E_I$ where most of the Dark Matter recoil rate can be expected. Because of the dominance of nuclear stopping, detectors can be expected to respond differently to Dark Matter recoils than to radiations such as x-rays or even $\alpha$ particles, for which electronic stopping dominates. Nuclear stopping yields less ionization and electronic excitation per unit energy loss than does electronic stopping, implying that the W factor, defined as the energy loss required to create one ionization electron, will be larger for nuclear recoils. Reference [@Lindhard-int] presents calculations of the ultimate energy loss partitioning between electronic and atomic motion. 
Experimenters use empirical “quenching factors” to describe the variation of energy per unit of ionization (the “W” parameter) compared to that from x-rays. The different microscopic distribution of ionization in tracks dominated by nuclear stopping can also lead to unexpected changes in the interactions of ionized and electronically excited target atoms (e.g., dimer formation, recombination). Such interactions are important for particle identification signatures such as the quantity and pulse shape of scintillation light output, the variation of scintillation pulse shape with applied electric field, and the field variation of ionization charge collection efficiency. Such effects are observed in gases [@White2007; @Martin2009], and even more strongly in liquid and solid targets [@Aprile2006]. Electronic stopping [@Lindhard1963] was found to vary as $\frac{d\epsilon_e}{d\rho} = k \sqrt{\epsilon}$ with the parameter $k$ varying only from 0.13 to 0.17 for homoatomic recoils in A=1 to 131[^3]. Let us define the total stopping as $\frac{d\epsilon}{d\rho}= \frac{d\epsilon_n}{d\rho} + \frac{d\epsilon_e}{d\rho}$ and the total scaled range as $\rho_o = \int _0 ^\epsilon \frac{d\epsilon}{(\frac{d\epsilon}{d\rho})}$. The relatively small contribution of electronic stopping and the small variation in $k$ for homoatomic recoils make the total scaled range for this case depend on the target and projectile almost entirely through $E_{TF}$. Predictions for the actual range of homoatomic recoils can be obtained from the nearly-universal scaled range curve as follows. Numerically integrating the stopping curves of Ref. [@Lindhard1963] with $k$ set to 0.15 gives a scaled range curve that fits the surprisingly simple expression $$\rho_o \stackrel{.}{=} 2.04 \epsilon + 0.04 \label{eq:range}$$ with accuracy better than 10% for $0.12 < \epsilon < 10 $.
According to the formula given earlier, the scale factor $R_{TF}$ lies between 1 and 4 $\times$ 10$^{17}$ atoms/cm$^2$ for homoatomic recoils in targets with $12 \leq A \leq 131$. Thus the model predicts ranges of several times 10$^{17}$ atoms/cm$^2$ at $E_R = E_I$. This is of the order of a few mm for a monoatomic gas at 0.05 bar. As a consequence, tracking devices for Dark Matter detection must provide accurate reconstruction of tracks with typical lengths between 1 and a few mm while operating at pressures of a small fraction of an atmosphere. When comparing LSS predictions with experimental results, two correction factors must be considered. First, the widely-used program SRIM [@SRIM] produces range-energy tables which contain the “projected range”, while LSS calculate the path length along the track. On the other hand, many older experiments report “extrapolated ranges”, which are closer in magnitude to the path length than to the “projected range”. To compare the SRIM tables with LSS, the projected range should be multiplied by a factor [@Lindhard1963] $(1+\frac{M_T}{3M_P})$ where $M_T$ and $M_P$ are the target and projectile masses. This correction has generally been applied in the next section, where experimental data are discussed. In addition, it must be noted that the LSS calculations described above were obtained for solids. Therefore, one should consider a gas-solid correction in ranges and stopping powers, as discussed by Bohr, Lindhard and Dan [@BLD]. In condensed phases, the higher collision frequency results in a higher probability for stripping of excited electrons before they can relax, which leads to a higher energy loss rate than for gases. This correction is rather uncertain and has generally not been applied in the following section of this paper. Finally, numerical calculations to extend the LSS model to the case of targets of mixed atomic number are given in Ref. [@Hitachi2008].
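To make the numbers above concrete, the fitted scaled-range expression can be turned into a rough track-length estimate. The sketch below uses the $E_{TF}$ value quoted above for Ar in Ar (1.7 keV/nucleon); the $R_{TF}$ value of $2\times10^{17}$ atoms/cm$^2$ is an assumed illustrative number within the quoted 1–4 $\times$ 10$^{17}$ range, and the recoil energy and pressure are example values, not taken from any experiment:

```python
K_BOLTZMANN = 1.380649e-23  # J/K

def lss_range_atoms_per_cm2(E_R_keV, E_TF_keV, R_TF):
    """Scaled-range fit rho_o = 2.04*eps + 0.04 (valid for 0.12 < eps < 10),
    converted back to a physical areal range in atoms/cm^2."""
    eps = E_R_keV / E_TF_keV
    rho_o = 2.04 * eps + 0.04
    return rho_o * R_TF

def range_cm(range_atoms_cm2, pressure_pa, T=300.0):
    """Divide the areal range by the ideal-gas number density."""
    n_cm3 = pressure_pa / (K_BOLTZMANN * T) * 1e-6  # atoms per cm^3
    return range_atoms_cm2 / n_cm3

# Example: 50 keV Ar recoil in Ar gas at 0.05 bar (5000 Pa), 300 K.
E_TF_Ar = 1.7 * 40   # keV, from the 1.7 keV/nucleon scale factor above
R_TF_Ar = 2.0e17     # atoms/cm^2, assumed value within the quoted range
R = lss_range_atoms_per_cm2(50.0, E_TF_Ar, R_TF_Ar)
track = range_cm(R, 5000.0)  # comes out at a few tenths of a cm, i.e. mm scale
```

The result, a few mm of track at 0.05 bar, is consistent with the order-of-magnitude statement in the text.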
Experimental Data on Low Energy Stopping in Gases ------------------------------------------------- The literature of energy loss and stopping of fast particles in matter is vast and still growing [@Ziegler1985; @Sigmund1998]. However, little experimental data is available for particle ranges and ionization yields in gas at the very low energies typical of Dark Matter recoils, where E/A $\sim$ 1 keV per nucleon. Comprehensive collections of citations for all energies are available [@SRIM; @MSTAR], upon which the widely-used theory-guided-fitting computer programs SRIM and MSTAR [@MSTAR] are based. Several older references [@Evans1953; @Lassen1964; @Cano1968] still appear representative of the available direct measurements at very low energy. More recent studies [@SnowdenIfft2003] provide indirect information based on large detector simulations. Both references [@Evans1953] and [@Lassen1964] used accelerated beams of He, N, Ne, Ar and $^{24}$Na, $^{66}$Ga, and $^{198}$Au in differentially pumped gas target chambers filled with pure-element gases. In [@Evans1953] the particles were detected with an ionization chamber, while in [@Lassen1964] radioactive beams were used. The stopped particles were collected on segmented walls of the target chamber and later counted. Typical results were ranges of 2(3.2) $\times$ 10$^{17}$ atoms/cm$^2$ for 26(40) keV Ar$^+$ in Argon. The fit to LSS theory given above predicts ranges that are shorter than the experimental results by 10–40%, which is consistent with experimental comparisons given by LSS. Accuracy of agreement with the prediction from the SRIM code is about the same. As in all other cases discussed below, the direction of the deviation from LSS is as expected from the gas-solid effect mentioned in the previous section. In Ref. [@SnowdenIfft2003] nuclear recoils from $^{252}$Cf neutrons were recorded by a Negative Ion Time Projection Chamber (NITPC) filled with 40 Torr CS$_2$.
The device response was simulated by fitting the observed pulse height and event size distributions. The best fit range curves given for C and S recoils in the gas are 10–20% higher at 25–100 keV than LSS predictions computed by the present authors by assuming simple additivity of stopping powers for the constituent atoms of the polyatomic gas target. Ionization Yields ----------------- Tracking readouts in gas TPC detectors are sensitive only to ionization of the gas. As noted above, both nuclear and electronic stopping eventually contribute to both electronic excitations (including ionization) and to kinetic energy of target atoms, as primary and subsequent generations of collision products interact further with the medium. Some guidance useful for design purposes is available from Ref. [@Lindhard-int], where the energy cascade was treated numerically using integral equations. In terms of the scaled energy $\epsilon$ and the electronic stopping coefficient $k$ introduced above, the (scaled) energy $\eta$ ultimately transferred to electrons was found to be well approximated [@SmithLewin1996] by $\eta = \frac{\epsilon}{1+\frac{1}{k\, g(\epsilon)}}$ with $g(\epsilon)= \epsilon + 3 \epsilon^{0.15} + 0.7 \epsilon^{0.6}$. This function interpolates smoothly from $\eta = 0$ at $\epsilon = 0$ to $\eta = \epsilon$ for $\epsilon \rightarrow \infty$, giving $\eta = 0.4$ at $\epsilon = 1$. In other words, this theory predicts only about 40% as much ionization per unit of energy deposited by Dark Matter recoils as by low-LET radiation such as electrons ejected by x-rays. Several direct measurements of total ionization by very low energy particles are available in the literature. Many of these results are for recoil nuclei from alpha decays [@Cano1965; @Cano1968; @Stone1957]. These $\sim$ 100 keV, A $\sim$ 200 recoils are of interest as backgrounds in Dark Matter experiments, but their scaled energy $\epsilon \cong 0.07$ is below the range of interest for most WIMP recoils.
Measured ionization yield parameters W were typically 100–120 eV/ion pair, in good agreement with the approximate formula for $\eta$ given above. Data more applicable to Dark Matter recoils are given in Refs. [@Phipps1964; @Boring1965; @McDonald1969; @Price1993]. Some representative results from these works include [@Boring1965] W = 91 (65) eV/IP for 25 (100) keV Ar in Ar, both values about 20% higher than would be predicted by the preceding approximate LSS expression. Higher W for gases than for condensed media is expected [@BLD] as mentioned above. Ref. [@McDonald1969] measured total ionization from particles with 1 $<$ Z $<$ 22 in methane. While in principle the LSS treatment does not apply to heteroatomic gases, using the LSS prescription to predict the W factor for a carbon target (rather than methane) yields a value that is 15% lower than the experimental results. The authors of Ref. [@SnowdenIfft2003] also fit their data to derive W-values for C and S recoils. Their best-fit values are again 10–25% higher than an LSS-based estimate by the present authors using additivity. To summarize, most of the Dark Matter recoils expected from an isothermal galactic halo have very low energies, and therefore nuclear stopping plays an important role. The sparse available experimental data on track lengths and ionization yields agree at the $\sim$20% level with simple approximate formulas based on the Lindhard model. Without applying any gas-phase correction, LSS-based estimates for range tend to be slightly shorter than those experimentally measured in gases. The predicted ionization parameter W also tends to be slightly lower than the experimental data. This situation is adequate for initial design of detectors, but with the present literature base, each individual experiment will require its own dedicated calibration measurements.
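The interpolation formula for $\eta$ quoted above is easy to evaluate directly. A short sketch (with $k = 0.15$, the mid-range value for homoatomic recoils quoted earlier; the function names are ours) reproduces the $\sim$40% ionization efficiency at $\epsilon = 1$:

```python
def eta(eps, k=0.15):
    """Scaled energy ultimately transferred to electrons (Lindhard):
    eta = eps / (1 + 1/(k*g(eps))), g(eps) = eps + 3*eps**0.15 + 0.7*eps**0.6.
    """
    if eps == 0.0:
        return 0.0
    g = eps + 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6
    return eps / (1.0 + 1.0 / (k * g))

# Fraction of the recoil energy appearing as ionization at eps = 1:
quench = eta(1.0) / 1.0
# quench is about 0.41, i.e. roughly 40% as much ionization as low-LET
# radiation of the same energy; the nuclear-recoil W factor is then
# correspondingly larger, roughly W_recoil ~ W_xray / quench.
```

The same function interpolates as described in the text: it vanishes at $\epsilon = 0$ and the efficiency $\eta/\epsilon$ rises toward 1 as $\epsilon$ grows.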
Considerations for Directional Detector Design ============================================== Detector Architecture --------------------- From the range-energy discussion in the previous section, we infer that track lengths of typical Dark Matter recoils will be only of the order of 0.1 $\mu$m in condensed matter, while track lengths of up to a few millimeters are expected in gas at a tenth of the atmospheric pressure. Several techniques relevant to direction-sensitive detection using condensed matter targets have been reported, including track-etch analysis of ancient mica [@Bander1995], bolometric detection of surface sputtered atoms [@Martoff1996], and use of nuclear emulsions [@Natsume2007]. The ancient mica etch pit technique was actually used to obtain Dark Matter limits. However, recently the focus of directional Dark Matter detection has shifted to low-pressure gas targets, and that is the topic of the present review. The TPC [@NygrenTPC; @Fancher1979] is the natural detector architecture for gaseous direction-sensitive Dark Matter detectors, and essentially all experiments use this configuration. The active target volume contains only the active gas, free of background-producing material. Only one wall of the active volume requires a readout system, leading to favorable cost-volume scaling. TPCs with nearly 100 m$^3$ of active volume have been built for high energy physics, showing the possibility of large active masses. Background Rejection Capabilities ---------------------------------- Gaseous DM detectors have excellent background rejection capability for different kinds of backgrounds. First and foremost, direction sensitivity gives gas detectors the capability of statistically rejecting neutron and neutrino backgrounds. In addition, tracking also leads to extremely effective discrimination against x-ray and $\gamma$-ray backgrounds [@Snowden-Ifft:PRD2000; @Sciolla:2009fb]. 
The energy loss rates for recoils discussed in the previous section are hundreds of times larger than those of electrons with comparable total energy. The resulting much longer electron tracks are easily identified and rejected in any direction-sensitive detector. Finally, the measured rejection factors for gamma rays vs. nuclear recoils vary between 10$^4$ and 10$^6$ depending on the experiment [@Miuchi2007-58; @SnowdenIfft2003; @Dujmic2008-58]. Choice of Pressure ------------------ It can be shown that there is an optimum pressure for operation of any given direction-sensitive WIMP recoil detector. This optimum pressure depends on the fill gas, the halo parameter set and WIMP mass, and the expected track length threshold for direction measurement. The total sensitive mass, and hence the total number of expected events, increases proportionally to the product of the pressure $P$ and the active volume $V$. Equation \[eq:range\] above shows that the range in atoms/cm$^2$ for WIMP recoils is approximately proportional to their energy. Since the corresponding range in cm is inversely proportional to the pressure ($R \propto E_r/P$), the energy threshold $E_{r,min}$ imposed by a particular minimum track length will scale down linearly with decreasing pressure, $E_{r,min} \propto R_{min} P$, where $R_{min}$ is the shortest detectable track length. For the exponentially falling recoil energy spectrum of the isothermal halo [@SmithLewin1996] the fraction of recoils above a given energy threshold is proportional to $\exp(-E_{r,min}/E_0 r)$. Hence the rate of tracks longer than the tracking threshold R$_{min}$ will scale as $N \propto PV \exp(-\xi R_{min}P)$, with $\xi$ a track length factor depending on the target gas, WIMP mass, halo model, etc., and the track length threshold $R_{min}$ depending on the readout technology and the drift distance.
This expression has a maximum at $P_{opt} = 1/[\xi R_{min}]$, which shows that the highest event rate is obtained by taking advantage of improvements in the tracking threshold to run at higher target pressure. Operating at this optimum pressure, the trackable event rate still scales as $P_{opt}V$, which increases linearly as the tracking threshold decreases. Achieving the shortest possible tracking threshold $R_{min}$ is seen to be the key to sensitive experiments of this type. Tracking Limit due to Diffusion ------------------------------- Diffusion of track charge during its drift to the readout plane sets the ultimate limit on how short a track can be measured in a TPC. Diffusion in gases has a rich phenomenology, for which only a simplified discussion is given here. A more complete discussion with references to the literature is given by Rolandi and Blum [@RnB]. For low electric fields, elementary kinetic theory arguments predict equal diffusion transverse and longitudinal to the drift field $E_d$, with the rms diffusion spread $\delta$ given by $$\label{eq:diff} \delta = \sqrt{\frac{2kTL}{eE_d}} = 0.7\,\mathrm{mm}\, \sqrt{\frac{[L/1\,\mathrm{m}]}{[E_d/1\,\mathrm{kV/cm}]}}.$$ Here $k$ is the Boltzmann constant, $T$ the gas temperature, and $L$ the drift distance. No pressure or gas dependence appears in this equation. The diffusion decreases inversely as the square root of the applied drift field. Increasing the drift field would appear to allow diffusion to be reduced as much as desired, allowing large detectors to be built while preserving good tracking resolution. However, in reality diffusion is not so easily controlled. The low-field approximation given by Equation \[eq:diff\] holds only below a certain maximum drift field value $E_d^{max}$, which depends on the pressure and target gas. The drift field must not violate the condition $eE_d^{max} \lambda \ll kT$, where the effective mean free path $\lambda = 1/f n \sigma$ decreases inversely as the pressure.
Here $\sigma$ is the average total cross section for scattering of the drifting species on the fill gas molecules, $n$ is the number density of molecules, and $f$ is an energy-exchange-efficiency factor for the scattering of charge carriers from gas molecules. This condition amounts to requiring that the work done by the drift field on a charge carrier between collisions, and not lost to collisions, must be much smaller than the carrier’s thermal energy. If this condition is fulfilled, the drifting carriers’ random (thermal) velocity remains consistent with the bulk gas temperature. A larger scattering cross section $\sigma$ or a more effective energy exchange due to strong inelastic scattering processes will lead to a shorter effective mean free path and a larger value of $E_d^{max}$. Importantly, $E_d^{max}$ for electrons in a given gas generally scales inversely as the pressure, as would be expected from the presence of the mean free path in the “low field” condition. If the drift field exceeds $E_d^{max}$, the energy gained from the drift field becomes non-negligible. The average energy of drifting charge carriers begins to increase appreciably, giving them an effective temperature $T_{eff}$ which can be orders of magnitude larger than that of the bulk gas. Under these conditions, the kinetic theory arguments underlying equation \[eq:diff\] remain approximately valid if the gas temperature $T$ is replaced by $T_{eff}$. Diffusion stops dropping with increasing drift field and may rapidly *increase* in this regime, with longitudinal diffusion increasing more rapidly than transverse. Values of $E_d^{max}/P$ for electrons drifting in various gases and gas mixtures vary from $\sim$0.1–1 V/cm/Torr at 300 K [@SauliBible; @Caldwell]. With drift fields limited to this range and a gas pressure of $\sim$ 50 Torr, the rms diffusion for a 1 meter drift distance would be several mm, severely degrading the tracking resolution.
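Two quantitative statements from the last two subsections can be checked numerically: the rate expression $N \propto PV \exp(-\xi R_{min}P)$ is maximized at $P_{opt} = 1/[\xi R_{min}]$, and the low-field formula gives the quoted $\sim$0.7 mm rms diffusion for 1 m of drift at 1 kV/cm. In the sketch below the $\xi R_{min}$ product is an arbitrary illustrative value, not taken from any experiment, and the function names are ours:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def rate(P, xi_rmin, V=1.0):
    """Trackable event rate, up to a constant: N ~ P*V*exp(-xi*Rmin*P)."""
    return P * V * math.exp(-xi_rmin * P)

# Assumed illustrative value: xi*Rmin = 0.02 per Torr -> P_opt = 50 Torr.
xi_rmin = 0.02
p_grid = [0.1 * i for i in range(1, 2001)]  # 0.1 ... 200 Torr
p_num = max(p_grid, key=lambda P: rate(P, xi_rmin))
p_analytic = 1.0 / xi_rmin  # the analytic optimum quoted in the text

def diffusion_rms(L, E_d, T=300.0):
    """Low-field thermal limit: delta = sqrt(2*k*T*L/(e*E_d)), SI units."""
    return math.sqrt(2.0 * K_B * T * L / (E_CHARGE * E_d))

# 1 m drift at 1 kV/cm (1e5 V/m): about 0.7 mm, as in Equation (eq:diff).
delta = diffusion_rms(L=1.0, E_d=1e5)
```

The brute-force grid maximum lands on the analytic optimum, and lowering $E_d$ to the few-V/cm values allowed at $\sim$50 Torr inflates $\delta$ to the several-mm level discussed above.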
Effects of diffusion can be significantly reduced by drifting negative ions instead of electrons [@Martoff2000; @Martoff2009; @Ohnuki:NIMA2001]. Electronegative vapors have been found which, when mixed into detector gases, reversibly capture primary ionization electrons within $\sim$ 100 $\mu$m of their creation. The resulting negative ions drift to the gain region of the chamber, where collisional processes free the electrons and initiate normal Townsend avalanches [@Dion2009]. Ions have $E_d^{max}$ values corresponding to E/P = 20 V/cm/Torr and higher. This is because the ions’ masses are comparable to those of the gas molecules, so the energy-exchange-efficiency factor $f$ which determines $E_d^{max}$ is much larger than for electrons. Ion-molecule scattering cross sections also tend to be larger than electron-molecule cross sections. The use of negative ion drift in TPCs would allow sub-millimeter rms diffusion for drift distances of 1 meter or larger, although total drift voltage differences in the neighborhood of 100 kV would be required. The above outline shows that diffusion places serious constraints on the design of detectors with large sensitive mass and millimeter track resolution, particularly when using a conventional electron drift TPC. Challenges of Directional Detection ------------------------------------ The current limits on spin-independent interactions of WIMPs in the 60 GeV/c$^2$ mass range have been set using 300-400 kg-day exposures, for example by the XENON10 [@XENON2008] and CDMS [@CDMS2009] experiments. Next generation non-directional experiments are being planned to achieve zero background with hundreds or thousands of times larger exposures [@Arisaka2009]. To be competitive, directional detectors should be able to use comparable exposures. However, integrating large exposures is particularly difficult for low-pressure gaseous detectors. 
A fiducial mass of a few tons will be necessary to observe DM-induced nuclear recoils for much of the theoretically-favored range of parameter space [@Jungman1996]. This mass of low-pressure gas would occupy thousands of cubic meters. It is, therefore, key to the success of the directional DM program to develop detectors with a low cost per unit volume. Since for standard gaseous detectors the largest expense is represented by the cost of the readout electronics, it follows that a low-cost read-out is essential to make DM directional detectors financially viable. Dark Matter TPC Experiments =========================== Early History of Direction-Sensitive WIMP Detectors --------------------------------------------------- As early as 1990, Gerbier [*et al.*]{} [@Gerbier1990] discussed using a hydrogen-filled TPC at 0.02 bar, drifting electrons in a 0.1 T magnetic field to detect proton recoils from Dark Matter collisions. This proposal was made in the context of the “cosmion", a then-current WIMP candidate with very large (10$^{-36}$ cm$^2$) cross section for scattering on protons. These authors explicitly considered the directional signature, but they did not publish any experimental findings. A few years later, the UCSD group led by Masek [@Buckland1994] published results of early trials of the first detector system specifically designed for a direction-sensitive Dark Matter search. This pioneering work used optical readout of light produced in a parallel plate avalanche counter (PPAC) located at the readout plane of a low-pressure TPC. The minimum discernible track length was about 5 mm. Electron diffusion at low pressures and its importance for the performance of gas detectors were also studied [@MattDiff]. This early work presaged some of the most recent developments in the field, described in section \[DMTPC\]. 
DRIFT ----- The DRIFT-I collaboration [@Snowden-Ifft:PRD2000] mounted the first underground experiment designed for direction sensitive WIMP recoil detection [@Alner2004]. Re-designed detectors were built and further characterization measurements were performed by the DRIFT-II [@Lawson2005] collaboration. Both DRIFT detectors were cubical 1 m$^3$ negative-ion-drifting TPCs with two back-to-back 0.5 m drift spaces. To minimize material possibly contributing radioactive backgrounds, the central drift cathode was designed as a plane of 20 micron wires on 2 mm pitch. The endcap MWPCs used 20 $\mu$m anode wires on 2 mm-pitch, read out with transient digitizers. In DRIFT-II the induced signals on grid wires between the MWPC anode and the drift space were also digitized. DRIFT-I had an amplifier- and digitizer-per-wire readout, while DRIFT-II signals were cyclically grouped onto a small number of amplifiers and digitizers. Both detectors used the negative ion drift gas CS$_2$ at nominally 40 Torr, about one twentieth of atmospheric pressure. The 1 m$^3$ volume gave approximately 170 grams of target mass per TPC. The CS$_2$ gas fill allowed diffusion suppression by running with very high drift fields despite the low pressure. DRIFT-II used drift fields up to 624 V/cm (16 V/cm/Torr). The detectors were calibrated with alpha particles, $^{55}$Fe x-rays and $^{252}$Cf neutrons. Alpha particle Bragg peaks and neutron recoil events from sources were quickly seen after turn-on of DRIFT-I underground in 2001. Neutron exposures gave energy spectra in agreement with simulations when the energy per ion pair W was adjusted in accordance with the discussion of ionization yields given above. Simulations of DRIFT-II showed that the detector and software analysis chain had about 94% efficiency for detection of those $^{252}$Cf neutron recoils producing between 1000 and 6000 primary ion pairs, and a $^{60}$Co gamma-ray rejection ratio better than a few times 10$^{-6} $ [@drift_II_n]. 
A study of the direction sensitivity of DRIFT-II for neutron recoils [@driftIIfb] showed that a statistical signal distinguishing the beginning and end of Sulfur recoil tracks (“head-tail discrimination") was available, though its energy range and statistical power were limited by the 2 mm readout pitch. At present two 1 m$^3$ DRIFT-II modules are operating underground. Backgrounds due to radon daughters implanted in the internal surfaces of the detector [@drift_II_n] are under study and methods for their mitigation are being developed. The absence of nonzero-spin nuclides in the CS$_2$ will require a very large increase in target mass or a change of gas fill in order to detect WIMPs with this device. Dark Matter Searches Using Micropattern Gas-Gain Devices -------------------------------------------------------- It was shown above that the event rate and therefore the sensitivity of an optimized tracking detector improves linearly as the track length threshold gets smaller. In recent years there has been widespread development of gas detectors achieving very high spatial resolution by using micropatterned gain elements in place of wires. For a recent overview of micropattern detector activity, see Ref. [@pos-sens]. These devices typically have 2-D arrays of individual gain elements on a pitch of $\sim$ 0.1 mm. Rows of elements [@Black2007] or individual gain elements can be read out by suitable arrangements of pickup electrodes separate from the gain structures, or by amplifier-per-pixel electronics integrated with the gain structure [@medipix]. Gain-producing structures known as GEM (Gas Electron Multiplier [@gem]) and MicroMegas (MICRO-MEsh GAseous Structure [@Giomataris1996]) have found particularly wide application. The gas CF$_4$ also figures prominently in recent micropattern Dark Matter search proposals. 
This gas was used for low background work in the MUNU experiment [@munu] and has the advantage of high $E_d^{max}$, allowing relatively low diffusion for electron drift at high drift field and reduced pressure [@Dujmic2008-327; @Christo1996; @Caldwell], though it does not approach negative ions in this regard. Containing the odd-proton nuclide $^{19}$F is also an advantage since it confers sensitivity to purely spin-coupled WIMPs [@Ellis1991], allowing smaller active mass experiments to be competitive. Another attractive feature of CF$_4$ is that its Townsend avalanches copiously emit visible and near infrared light [@Pansky1995; @Kaboth2008; @Fraga2003], allowing optical readout as in the DMTPC detector discussed in section \[DMTPC\]. The ultraviolet part of the spectrum may also be seen by making use of a wavelength shifter. Finally, CF$_4$ is non-flammable and non-toxic, and, therefore, safe to operate underground. The NEWAGE project is a current Dark Matter search program led by a Kyoto University group. This group has recently published the first limit on Dark Matter interactions derived from the absence of a directional modulation during a 0.15 kg-day exposure [@Miuchi2007-58]. NEWAGE uses CF$_4$-filled TPCs with a microwell gain structure [@Miuchi2003; @Tanimori2004; @Miuchi2007-43]. The detector had an active volume of $23\times28\times30$ cm$^3$ and contained CF$_4$ at 150 Torr. Operation at higher-than-optimal gas pressure was chosen to enhance the HV stability of the gain structure. The chamber was read out by a single detector board referred to as a “$\mu$-PIC", preceded by a GEM for extra gas gain. The $\mu$-PIC has a micro-well gain structure produced using multi-layer printed circuit board technology. It is read out on two orthogonal, 400 micron-pitch arrays of strips. One array is connected to the central anode dots of the micro-well gain structure, and the other array to the surrounding cathodes. 
The strip amplifiers and position decoding electronics are on-board with the gain structures themselves, using an 8 layer PCB structure. The detector was calibrated with a $^{252}$Cf neutron source. Nuclear recoils were detected and compared to a simulation, giving a detection efficiency rising from zero at 50 keV to 90% near 250 keV. For comparison, the maximum energy of a $^{19}$F recoil from an infinitely heavy WIMP with the galactic escape speed is about 180 keV. The measured rejection factor for $^{137}$Cs gamma rays was about 10$^{-4}$. The angular resolution was reported as 25$^{\circ}$ HWHM. Measurement of the forward/backward sense of the tracks (“head-tail" discrimination) was not reported. Another gaseous Dark Matter search collaboration known as MIMAC [@santos2006] is led by a group at IPN Grenoble, and has reported work toward an electronically read-out direction sensitive detector. They proposed the use of $^3$He mixtures with isobutane near 1 bar, and also CF$_4$ gas fills to check the dependence on the atomic number A of any candidate Dark Matter signal. The advantages claimed for $^3$He as a Dark Matter search target include nonzero nuclear spin, low mass and hence sensitivity to low WIMP masses, and a very low Compton cross section which suppresses backgrounds from gamma rays. The characteristic (n,p) capture interaction provides a strong signature for the presence of slow neutrons. The ionization efficiency of $\sim$ 1 keV $^3$He recoils is also expected to be very high, allowing efficient detection of the small energy releases expected for this target and for light WIMPs. A micropattern TPC with $\sim$ 350 $\mu$m anode pitch was proposed to obtain the desired electron rejection factor at a few keV. The MIMAC collaboration uses an ion source to generate monoenergetic $^3$He and F ions for measuring the ionization yield in their gas mixtures [@Guillaudin:2009fp]. 
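The $\sim$180 keV figure quoted above for the maximum $^{19}$F recoil energy follows from elastic-scattering kinematics: a nucleus struck head-on by a much heavier projectile moving at speed $v$ recoils with speed up to $2v$, so $E_{max}=2m_Nv^2$. The sketch below checks this; the escape-speed values are illustrative assumptions (the quoted 180 keV corresponds to a speed in the 600–700 km/s range):

```python
def max_recoil_keV(A, v_km_s):
    """E_max = 2 m_N v^2 for a nucleus of mass number A struck head-on
    by an effectively infinite-mass WIMP moving at speed v."""
    c = 2.998e5               # speed of light, km/s
    m_c2_keV = A * 931494.0   # nucleon rest energy ~931.5 MeV, in keV
    beta = v_km_s / c
    return 2.0 * m_c2_keV * beta ** 2

for v in (544.0, 600.0, 700.0):  # assumed galactic escape speeds, km/s
    print(v, round(max_recoil_keV(19, v)), "keV")
```

The exact number depends on the escape speed adopted (and on whether the solar motion is folded in), which is why the text's "about 180 keV" should be read as an order-of-magnitude benchmark.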
DMTPC {#DMTPC} ----- The Dark Matter Time Projection Chamber (DMTPC) collaboration has developed a new detector concept [@Sciolla:2009fb] that addresses the issue of scalability of directional Dark Matter detectors by using optical readout, a potentially very inexpensive readout solution. The DMTPC detector [@Sciolla:2008ak; @Sciolla:2008mpla] is a low-pressure TPC filled with CF$_4$ at a nominal pressure of 50 torr. The detector is read out by an array of CCD cameras and photomultipliers (PMTs) mounted outside the vessel to reduce the amount of radioactive material in the active volume. The CCD cameras image the visible and near infrared photons that are produced by the avalanche process in the amplification region, providing a projection of the 3-D nuclear recoil on the 2-D amplification plane. The 3-D track length and direction of the recoiling nucleus are reconstructed by combining the measurement of the projection along the amplification plane (from pattern recognition in the CCD) with the projection along the direction of drift, determined from the waveform of the signal from the PMTs. The sense of the recoil track is determined by measuring dE/dx along the length of the track. The correlation between the energy of the recoil, proportional to the number of photons collected in the CCD, and the length of the recoil track provides an excellent rejection of all electromagnetic backgrounds. Several alternative implementations of the amplification region [@Dujmic2008-58] were developed. In a first design, the amplification was obtained by applying a large potential difference ($\Delta$V = 0.6–1.1 kV) between a copper plate and a conductive woven mesh kept at a uniform distance of 0.5 mm. The copper or stainless steel mesh was made of 28 $\mu$m wire with a pitch of 256 $\mu$m. In a second design the copper plate was replaced with two additional woven meshes. 
This design has the advantage of creating a transparent amplification region, which allows a substantial cost reduction since a single CCD camera can image tracks originating in two drift regions located on either side of a single amplification region. The current DMTPC prototype [@dujmicICHEP] consists of two optically independent regions contained in one stainless steel vessel. Each region is a cylinder with 30 cm diameter and 20 cm height contained inside a field cage. Gas gain is obtained using the mesh-plate design described above. The detector is read out by two CCD cameras, each imaging one drift region. Two f/1.2 55 mm Nikon photographic lenses focus light onto two commercial Apogee U6 CCD cameras equipped with Kodak 1001E CCD chips. Because the total area imaged is $16\times16$ cm$^2$, the detector has an active volume of about 10 liters. For WIMP-induced nuclear recoils of 50 keV, the energy and angular resolutions obtained with the CCD readout were estimated to be $\approx$ 15% and 25$^{\circ}$, respectively. This apparatus is currently being operated above ground with the goal of characterizing the detector response and understanding its backgrounds. A second 10-liter module is being constructed for underground operations at the Waste Isolation Pilot Plant (WIPP) in New Mexico. A 5.5 MeV alpha source from $^{241}$Am is used to study the gain of the detector as a function of the voltage and gas pressure, as well as to measure the resolution as a function of the drift distance of the primary electrons to quantify the effect of the transverse diffusion. These studies [@Dujmic2008-327; @Caldwell] show that the transverse diffusion allows for a sub-millimeter spatial resolution in the reconstruction of the recoil track for drift distances up to 20–25 cm. The gamma ray rejection factor, measured using a $^{137}$Cs source, is better than 2 parts per million [@Dujmic2008-327]. 
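The "about 10 liters" quoted above follows directly from the imaged area and the drift height; a trivial arithmetic check (all values taken from the text):

```python
# Two optically independent regions, each with a 16 cm x 16 cm imaged
# area and a 20 cm drift height (values quoted in the text).
area_cm2 = 16.0 * 16.0
height_cm = 20.0
n_regions = 2
volume_liters = n_regions * area_cm2 * height_cm / 1000.0  # cm^3 -> L
print(volume_liters)  # 10.24, i.e. "about 10 liters"
```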
The performance of the DMTPC detector in determining the sense and direction of nuclear recoils has been evaluated by studying the recoil of fluorine nuclei in interaction with low-energy neutrons. The initial measurements were obtained running the chamber at 280 Torr and using 14 MeV neutrons from a deuteron-triton generator and a $^{252}$Cf source. The “head-tail” effect was clearly observed [@Dujmic2008-327; @Dujmic:2008iq] for nuclear recoils with energy between 200 and 800 keV. Better sensitivity to lower energy thresholds was achieved by using higher gains and lowering the CF$_4$ pressure to 75 torr. These measurements demonstrated [@Dujmic2008-58] “head-tail” discrimination for recoils above 100 keV, and reported a good agreement with the predictions of the SRIM [@SRIM] simulation. “Head-tail” discrimination is expected to extend to recoils above 50 keV when the detector is operated at a pressure of 50 torr. To evaluate the event-by-event “head-tail” capability of the detector as a function of the energy of the recoil, the DMTPC collaboration introduced a quality factor $Q(E_R) = \epsilon(E_R) \times (1 - 2 w(E_R))^2$, where $\epsilon$ is the recoil reconstruction efficiency and $w$ is the fraction of wrong “head-tail” assignments. The $Q$ factor represents the effective fraction of reconstructed recoils with “head-tail” information, and the error on the “head-tail” asymmetry scales as $1/\sqrt{Q}$. Early measurements demonstrated a $Q$ factor of 20% at 100 keV and 80% at 200 keV [@Dujmic2008-58]. The DMTPC collaboration is currently designing a 1-m$^3$ detector. The apparatus consists of a stainless steel vessel of 1.3 m diameter and 1.2 m height. Nine CCD cameras and nine PMTs are mounted on each of the top and bottom plates of the vessel, separated from the active volume of the detector by an acrylic window. The detector consists of two optically separated regions. 
Each of these regions is equipped with a triple-mesh amplification device, located between two symmetric drift regions. Each drift region has a diameter of 1.2 m and a height of 25 cm, for a total active volume of 1 m$^3$. A field cage made of stainless steel rings keeps the uniformity of the electric field within 1% in the fiducial volume. A gas system recirculates and purifies the CF$_4$. When operating the detector at a pressure of 50 torr, a 1 m$^3$ module will contain 250 g of CF$_4$. Assuming a detector threshold of 30 keVee (electron-equivalent energy, corresponding to nuclear recoil energy threshold $\sim$ 50 keV), and an overall data-taking efficiency of 50%, a one-year underground run will yield an exposure of 45 kg-days. Assuming negligible backgrounds, such an exposure will allow the DMTPC collaboration to improve the current limits on spin-dependent interactions on protons by about a factor of 50 [@Dujmic2008-58]. Conclusion ============ Directional detectors can provide an unambiguous positive observation of Dark Matter particles even in the presence of insidious backgrounds, such as neutrons or neutrinos. Moreover, the dynamics of the galactic Dark Matter halo will be revealed by measuring the direction of the incoming WIMPs, opening the path to WIMP astronomy. In the past decade, several groups have investigated new ideas to develop directional Dark Matter detectors. Low-pressure TPCs are best suited for this purpose if an accurate (sub-millimeter) 3-D reconstruction of the nuclear recoil can be achieved. A good tracking resolution also allows for an effective rejection of all electromagnetic backgrounds, in addition to statistical discrimination against neutrinos and neutrons based on the directional signature. The choice of different gaseous targets makes these detectors well suited for the study of both spin-dependent (CF$_4$ and $^3$He) and spin-independent (CS$_2$) interactions. 
A vigorous R&D program has explored both electronic and optical readout solutions, demonstrating that both technologies can effectively and efficiently reconstruct the energy and vector direction of the nuclear recoils expected from Dark Matter interactions. The challenge for the field of directional Dark Matter detection is now to develop and deploy very sensitive and yet inexpensive readout solutions, which will make large directional detectors financially viable. Acknowledgments {#acknowledgments .unnumbered} =============== The authors are grateful to D. Dujmic and M. Morii for useful discussions and for proofreading the manuscript. G. S. is supported by the M.I.T. Physics Department and the U.S. Department of Energy (contract number DE-FG02-05ER41360). C. J. M. is supported by Fermilab and Temple University. References {#references .unnumbered} ========== [^1]: A homoatomic molecular entity is a molecular entity consisting of one or more atoms of the same element. [^2]: The scale factors are (in cgs-Gaussian units): $E_{TF} = \frac{e^2}{a} Z_i Z_T \frac{M_i +M_T}{M_T}$, $R_{TF} = \frac{1}{4 \pi a^2 N} \frac{(M_i + M_T)^2}{M_i M_T}$. Here, $N$= number density of target atoms, subscripts i and T refer to the incident particle and the target substance, and $a = a_0 \frac{.8853}{\sqrt{Z_i ^{2/3} + Z_T ^{2/3}}} $, with $a_0$ the Bohr radius. [^3]: The parameter $k \stackrel{.}{=} \frac{0.0793Z_1^{1/6}}{(Z_1^{2/3} + Z_2^{2/3})^{3/4}} \left[\frac{Z_1Z_2(A_1+A_2)^3}{A_1^3A_2}\right] ^{1/2}$ becomes substantially larger only for light recoils in heavy targets.
--- abstract: 'The aim of this paper is to establish a global asymptotic equivalence between the experiments generated by the discrete (high frequency) or continuous observation of a path of a Lévy process and a Gaussian white noise experiment observed up to a time $T$, with $T$ tending to $\infty$. These approximations are given in the sense of the Le Cam distance, under some smoothness conditions on the unknown Lévy density. All the asymptotic equivalences are established by constructing explicit Markov kernels that can be used to reproduce one experiment from the other.' address: - '*Laboratoire LJK, Université Joseph Fourier UMR 5224 51, Rue des Mathématiques, Saint Martin d’Hères BP 53 38041 Grenoble Cedex 09*' - 'Corresponding Author, Ester.Mariucci@imag.fr' author: - Ester Mariucci bibliography: - 'refs.bib' title: Asymptotic equivalence for pure jump Lévy processes with unknown Lévy density and Gaussian white noise --- Nonparametric experiments,Le Cam distance,asymptotic equivalence,Lévy processes. 62B15,(62G20,60G51). Introduction ============ Lévy processes are a fundamental tool in modelling situations, like the dynamics of asset prices and weather measurements, where sudden changes in values may happen. For that reason they are widely employed, among many other fields, in mathematical finance. To name a simple example, the price of a commodity at time $t$ is commonly given as an exponential function of a Lévy process. In general, exponential Lévy models are proposed for their ability to take into account several empirical features observed in the returns of assets such as heavy tails, high-kurtosis and asymmetry (see [@tankov] for an introduction to financial applications). From a mathematical point of view, Lévy processes are a natural extension of the Brownian motion which preserves the tractable statistical properties of its increments, while relaxing the continuity of paths. 
The jump dynamics of a Lévy process is dictated by its Lévy density, say $f$. If $f$ is continuous, its value at a point $x_0$ determines how frequently jumps of size close to $x_0$ occur per unit time. Concretely, if $X$ is a pure jump Lévy process with Lévy density $f$, then the function $f$ is such that $$\int_Af(x)dx=\frac{1}{t}{\ensuremath {\mathbb{E}}}\bigg[\sum_{s\leq t}{\ensuremath {\mathbb{I}}}_A(\Delta X_s)\bigg],$$ for any Borel set $A$ and $t>0$. Here, $\Delta X_s\equiv X_s-X_{s^-}$ denotes the magnitude of the jump of $X$ at time $s$ and ${\ensuremath {\mathbb{I}}}_A$ is the characteristic function. Thus, the Lévy measure $$\nu(A):=\int_A f(x)dx,$$ is the average number of jumps (per unit time) whose magnitudes fall in the set $A$. Understanding the jump behavior therefore requires estimating the Lévy measure. Several recent works have treated this problem, see e.g. [@bel15] for an overview. When the available data consists of the whole trajectory of the process during a time interval $[0,T]$, the problem of estimating $f$ may be reduced to estimating the intensity function of an inhomogeneous Poisson process (see, e.g. [@fig06; @rey03]). However, a continuous-time sampling is never available in practice and thus the relevant problem is that of estimating $f$ based on discrete sample data $X_{t_0},\dots,X_{t_n}$ during a time interval $[0,T_n]$. In that case, the jumps are latent (unobservable) variables and that clearly adds to the difficulty of the problem. From now on we will place ourselves in a high-frequency setting, that is we assume that the sampling interval $\Delta_n=t_i-t_{i-1}$ tends to zero as $n$ goes to infinity. Such a high-frequency based statistical approach has played a central role in the recent literature on nonparametric estimation for Lévy processes (see e.g. [@fig09; @comte10; @comte11; @bec12; @duval12]). 
Moreover, in order to make consistent estimation possible, we will also ask the observation time $T_n$ to tend to infinity in order to allow the identification of the jump part in the limit. Our aim is to prove that, under suitable hypotheses, estimating the Lévy density $f$ is equivalent to estimating the drift of an adequate Gaussian white noise model. In general, asymptotic equivalence results for statistical experiments provide a deeper understanding of statistical problems and allow to single out their main features. The idea is to pass via asymptotic equivalence to another experiment which is easier to analyze. By definition, two sequences of experiments ${\ensuremath {\mathscr{P}}}_{1,n}$ and ${\ensuremath {\mathscr{P}}}_{2,n}$, defined on possibly different sample spaces, but with the same parameter set, are asymptotically equivalent if the Le Cam distance $\Delta({\ensuremath {\mathscr{P}}}_{1,n},{\ensuremath {\mathscr{P}}}_{2,n})$ tends to zero. For ${\ensuremath {\mathscr{P}}}_{i}=({\ensuremath {\mathscr{X}}}_i,{\ensuremath {\mathscr{A}}}_i, \big(P_{i,\theta}:\theta\in\Theta)\big)$, $i=1,2$, $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)$ is the symmetrization of the deficiency $\delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)$ where $$\delta({\ensuremath {\mathscr{P}}}_{1},{\ensuremath {\mathscr{P}}}_{2})=\inf_K\sup_{\theta\in\Theta}\big\|KP_{1,\theta}-P_{2,\theta}\big\|_{TV}.$$ Here the infimum is taken over all randomizations from $({\ensuremath {\mathscr{X}}}_1,{\ensuremath {\mathscr{A}}}_1)$ to $({\ensuremath {\mathscr{X}}}_2,{\ensuremath {\mathscr{A}}}_2)$ and $\| \cdot \|_{TV}$ denotes the total variation distance. Roughly speaking, the Le Cam distance quantifies how much one fails to reconstruct (with the help of a randomization) a model from the other one and vice versa. 
Therefore, $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=0$ can be interpreted as “the models ${\ensuremath {\mathscr{P}}}_1$ and ${\ensuremath {\mathscr{P}}}_2$ contain the same amount of information about the parameter $\theta$.” The general definition of randomization is quite involved but, in the most frequent examples (namely when the sample spaces are Polish and the experiments dominated), it reduces to that of a Markov kernel. One of the most important features of the Le Cam distance is that it can also be interpreted in terms of statistical decision theory (see [@lecam; @LC2000]; a short review is presented in the Appendix). As a consequence, saying that two statistical models are equivalent means that any statistical inference procedure can be transferred from one model to the other in such a way that the asymptotic risk remains the same, at least for bounded loss functions. Also, as soon as two models, ${\ensuremath {\mathscr{P}}}_{1,n}$ and ${\ensuremath {\mathscr{P}}}_{2,n}$, that share the same parameter space $\Theta$ are proved to be asymptotically equivalent, the same result automatically holds for the restrictions of both ${\ensuremath {\mathscr{P}}}_{1,n}$ and ${\ensuremath {\mathscr{P}}}_{2,n}$ to a smaller subclass of $\Theta$. Historically, the first results of asymptotic equivalence in a nonparametric context date from 1996 and are due to [@BL] and [@N96]. The first two authors have shown the asymptotic equivalence of nonparametric regression and a Gaussian white noise model, while the third one showed that of density estimation and white noise. Over the years many generalizations of these results have been proposed such as [@regression02; @GN2002; @ro04; @C2007; @cregression; @R2008; @C2009; @R2013; @schmidt14] for nonparametric regression or [@cmultinomial; @j03; @BC04] for nonparametric density estimation models. Another very active field of study is that of diffusion experiments. 
The first result of equivalence between diffusion models and the Euler scheme was established in 1998, see [@NM]. In later papers generalizations of this result have been considered (see [@C14; @esterdiffusion]). Among others we can also cite equivalence results for generalized linear models [@GN], time series [@GN2006; @NM], diffusion models [@D; @CLN; @R2006; @rmultidimensionale], GARCH model [@B], functional linear regression [@M2011], spectral density estimation [@GN2010] and volatility estimation [@R11]. Negative results are somewhat harder to come by; the most notable among them are [@sam96; @B98; @wang02]. There is however a lack of equivalence results concerning processes with jumps. A first result in this sense is [@esterESAIM] in which global asymptotic equivalences between the experiments generated by the discrete or continuous observation of a path of a Lévy process and a Gaussian white noise experiment are established. More precisely, in that paper, we have shown that estimating the drift function $h$ from a continuously or discretely (high frequency) observed time inhomogeneous jump-diffusion process: $$\label{ch4X} X_t=\int_0^th(s)ds+\int_0^t\sigma(s)dW_s +\sum_{i=1}^{N_t}Y_i,\quad t\in[0,T_n],$$ is asymptotically equivalent to estimating $h$ in the Gaussian model: $$ dy_t=h(t)dt+\sigma(t)dW_t, \quad t\in[0,T_n].$$ Here we try to push the analysis further and we focus on the case in which the considered parameter is the Lévy density and $X=(X_t)$ is a pure jump Lévy process (see [@carr02] for the interest of such a class of processes when modelling asset returns). In more detail, we consider the problem of estimating the Lévy density (with respect to a fixed, possibly infinite, Lévy measure $\nu_0$ concentrated on $I\subseteq {\ensuremath {\mathbb{R}}}$) $f:=\frac{d\nu}{d\nu_0}:I\to {\ensuremath {\mathbb{R}}}$ from a continuously or discretely observed pure jump Lévy process $X$ with possibly infinite Lévy measure. 
Here $I\subseteq {\ensuremath {\mathbb{R}}}$ denotes a possibly infinite interval and $\nu_0$ is supposed to be absolutely continuous with respect to Lebesgue with a strictly positive density $g:=\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}$. In the case where $\nu$ is of finite variation one may write: $$\label{eqn:ch4Levy} X_t=\sum_{0<s\leq t}\Delta X_s$$ or, equivalently, $X$ has a characteristic function given by: $${\ensuremath {\mathbb{E}}}\big[e^{iuX_t}\big]=\exp\bigg(-t\bigg(\int_{I}(1-e^{iuy})\nu(dy)\bigg)\bigg).$$ We suppose that the function $f$ belongs to some a priori set ${\ensuremath {\mathscr{F}}}$, nonparametric in general. The discrete observations are of the form $X_{t_i}$, where $t_i=T_n\frac{i}{n}$, $i=0,\dots,n$ with $T_n=n\Delta_n\to \infty$ and $\Delta_n\to 0$ as $n$ goes to infinity. We will denote by ${\ensuremath {\mathscr{P}}}_n^{\nu_0}$ the statistical model associated with the continuous observation of a trajectory of $X$ until time $T_n$ (which is supposed to go to infinity as $n$ goes to infinity) and by ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ the one associated with the observation of the discrete data $(X_{t_i})_{i=0}^n$. The aim of this paper is to prove that, under adequate hypotheses on ${\ensuremath {\mathscr{F}}}$ (for example, $f$ must be bounded away from zero and infinity; see Section \[subsec:ch4parameter\] for a complete definition), the models ${\ensuremath {\mathscr{P}}}_n^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ are both asymptotically equivalent to a sequence of Gaussian white noise models of the form: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n}}\frac{dW_t}{\sqrt{g(t)}},\quad t\in I.$$ As a corollary, we then get the asymptotic equivalence between ${\ensuremath {\mathscr{P}}}_n^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$. The main results are precisely stated as Theorems \[ch4teo1\] and \[ch4teo2\]. 
A particular case of special interest arises when $X$ is a compound Poisson process, $\nu_0\equiv {\ensuremath{\textnormal{Leb}}}([0,1])$ and ${\ensuremath {\mathscr{F}}}\subseteq {\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I$ where, for a fixed $\gamma\in (0,1]$ and strictly positive constants $K,\kappa, M$, ${\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I$ is a class of continuously differentiable functions on $I$ defined as follows: $$\label{ch4:fholder} {\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I=\Big\{f: \kappa\leq f(x)\leq M, \ |f'(x)-f'(y)|\leq K|x-y|^{\gamma},\ \forall x,y\in I\Big\}.$$ In this case, the statistical models ${\ensuremath {\mathscr{P}}}_n^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ are both equivalent to the Gaussian white noise model: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n}}dW_t,\quad t\in [0,1].$$ See Example \[ex:ch4CPP\] for more details. By a theorem of Brown and Low in [@BL], we obtain, a posteriori, an asymptotic equivalence with the regression model $$Y_i=\sqrt{f\Big(\frac{i}{T_n}\Big)}+\frac{1}{2\sqrt{T_n}}\xi_i, \quad \xi_i\sim{\ensuremath {\mathscr{Nn}}}(0,1), \quad i=1,\dots, [T_n].$$ Note that a similar form of a Gaussian shift was found to be asymptotically equivalent to a nonparametric density estimation experiment, see [@N96]. Let us mention that we also treat some explicit examples where $\nu_0$ is neither finite nor compactly supported (see Examples \[ch4ex2\] and \[ex3\]). Without entering into any detail, we remark here that the methods are very different from those in [@esterESAIM]. In particular, since $f$ pertains to the discontinuous part of the Lévy process, rather than to its continuous part, the Girsanov-type changes of measure are irrelevant here. We thus need new instruments, such as Esscher changes of measure.
Our proof is based on the construction, for any given Lévy measure $\nu$, of two adequate approximations $\hat \nu_m$ and $\bar \nu_m$ of $\nu$: the idea of discretizing the Lévy density already appeared in an earlier work with P. Étoré and S. Louhichi, [@etore13]. The present work is also inspired by the papers [@cmultinomial] (for a multinomial approximation), [@BC04] (for passing from independent Poisson variables to independent normal random variables) and [@esterESAIM] (for a Bernoulli approximation). This method allows us to construct explicit Markov kernels that lead from one model to the other; these may be applied in practice to transfer minimax estimators. The paper is organized as follows: Sections \[subsec:ch4parameter\] and \[subsec:ch4experiments\] are devoted to making the parameter space and the considered statistical experiments precise. The main results are given in Section \[subsec:ch4mainresults\], followed by Section \[sec:ch4experiments\] in which some examples can be found. The proofs are postponed to Section \[sec:ch4proofs\]. The paper includes an Appendix recalling the definition and some useful properties of the Le Cam distance as well as of Lévy processes. Assumptions and main results ============================ The parameter space {#subsec:ch4parameter} ------------------- Consider a (possibly infinite) Lévy measure $\nu_0$ concentrated on a possibly infinite interval $I\subseteq{\ensuremath {\mathbb{R}}}$, admitting a density $g>0$ with respect to Lebesgue. The parameter space of the experiments we are concerned with is a class of functions ${\ensuremath {\mathscr{F}}}={\ensuremath {\mathscr{F}}}^{\nu_0,I}$ defined on $I$, forming a class of Lévy densities with respect to $\nu_0$. For each $f\in{\ensuremath {\mathscr{F}}}$, let $\nu$ (resp. $\hat \nu_m$) be the Lévy measure having $f$ (resp. $\hat f_m$) as a density with respect to $\nu_0$, where $\hat f_m(x)$ is defined as follows.
Suppose first $x>0$. Given a positive integer depending on $n$, $m=m_n$, let $J_j:=(v_{j-1},v_j]$ where $v_1=\varepsilon_m\geq 0$ and $v_j$ are chosen in such a way that $$\label{eq:ch4Jj} \mu_m:=\nu_0(J_j)=\frac{\nu_0\big((I\setminus[0,\varepsilon_m])\cap {\ensuremath {\mathbb{R}}}_+\big)}{m-1},\quad \forall j=2,\dots,m.$$ In the sequel, for the sake of brevity, we will only write $m$ without making explicit the dependence on $n$. Define $x_j^*:=\frac{\int_{J_j}x\nu_0(dx)}{\mu_m}$ and introduce a sequence of functions $0\leq V_j\leq \frac{1}{\mu_m}$, $j=2,\dots,m$ supported on $[x_{j-1}^*, x_{j+1}^*]$ if $j=3,\dots,m-1$, on $[\varepsilon_m, x_3^*]$ if $j=2$ and on $(I\setminus [0,x_{m-1}^*])\cap {\ensuremath {\mathbb{R}}}_+$ if $j=m$. The $V_j$’s are defined recursively in the following way. - $V_2$ is equal to $\frac{1}{\mu_m}$ on the interval $(\varepsilon_m, x_2^*]$ and on the interval $(x_2^*,x_3^*]$ it is chosen so that it is continuous (in particular, $V_2(x_2^*)=\frac{1}{\mu_m}$), $\int_{x_2^*}^{x_3^*}V_2(y)\nu_0(dy)=\frac{\nu_0((x_2^*, v_2])}{\mu_m}$ and $V_2(x_3^*)=0$. - For $j=3,\dots,m-1$ define $V_j$ as the function $\frac{1}{\mu_m}-V_{j-1}$ on the interval $[x_{j-1}^*,x_j^*]$. On $[x_j^*,x_{j+1}^*]$ choose $V_j$ continuous and such that $\int_{x_j^*}^{x_{j+1}^*}V_j(y)\nu_0(dy)=\frac{\nu_0((x_j^*,v_j])}{\mu_m}$ and $V_j(x_{j+1}^*)=0$. - Finally, let $V_m$ be the function supported on $(I\setminus [0,x_{m-1}^*]) \cap {\ensuremath {\mathbb{R}}}_+$ such that $$\begin{aligned} V_m(x)&=\frac{1}{\mu_m}-V_{m-1}(x), \quad\text{for } x \in [x_{m-1}^*,x_m^*],\\ V_m(x)&=\frac{1}{\mu_m}, \quad\text{for } x \in (I\setminus [0,x_m^*])\cap {\ensuremath {\mathbb{R}}}_+.\end{aligned}$$ (It is immediate to check that such a choice is always possible). 
Observe that, by construction, $$\sum_{j=2}^m V_j(x)\mu_m=1, \quad \forall x\in (I\setminus[0,\varepsilon_m])\cap {\ensuremath {\mathbb{R}}}_+ \quad \textnormal{and} \quad \int_{(I\setminus[0,\varepsilon_m])\cap {\ensuremath {\mathbb{R}}}_+}V_j(y)\nu_0(dy)=1.$$ Analogously, define $\mu_m^-=\frac{\nu_0\big((I\setminus[-\varepsilon_m,0])\cap {\ensuremath {\mathbb{R}}}_-\big)}{m-1}$ and $J_{-m},\dots,J_{-2}$ such that $\nu_0(J_{-j})=\mu_m^-$ for all $j$. Then, for $x<0$, $x_{-j}^*$ is defined as $x_j^*$ by using $J_{-j}$ and $\mu_m^-$ instead of $J_j$ and $\mu_m$ and the $V_{-j}$’s are defined with the same procedure as the $V_j$’s, starting from $V_{-2}$ and proceeding by induction. Define $$\label{eq:ch4hatf} \hat f_m(x)={\ensuremath {\mathbb{I}}}_{[-\varepsilon_m,\varepsilon_m]}(x)+\sum_{j=2}^m \bigg(V_j(x)\int_{J_j} f(y)\nu_0(dy)+V_{-j}(x)\int_{J_{-j}} f(y)\nu_0(dy)\bigg).$$ The definitions of the $V_j$’s above are modeled on the following example: \[ex:Vj\] Let $\nu_0$ be the Lebesgue measure on $[0,1]$ and $\varepsilon_m=0$. Then $v_j=\frac{j-1}{m-1}$ and $x_j^*=\frac{2j-3}{2m-2}$, $j=2,\dots,m$. The standard choice for $V_j$ (based on the construction by [@cmultinomial]) is given by the piecewise linear functions interpolating the values at the points $x_j^*$ specified above (see \[eq:ch4vj\] below for the explicit formula). The function $\hat f_m$ has been defined in such a way that the rate of convergence of the $L_2$ distance between the restrictions of $f$ and $\hat f_m$ to $I\setminus[-\varepsilon_m,\varepsilon_m]$ is compatible with the rate of convergence of the other quantities appearing in the statements of Theorems \[ch4teo1\] and \[ch4teo2\]. For that reason, as in [@cmultinomial], we have not chosen a piecewise constant approximation of $f$ but an approximation that is, at least in the simplest cases, a piecewise linear approximation of $f$.
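In the setting of Example \[ex:Vj\] everything is explicit, and the two displayed identities, as well as the interpolation property just described, can be checked numerically. The sketch below (our own illustration; the affine $f$ is a hypothetical choice) builds the standard triangular/trapezoidal $V_j$'s for $\nu_0={\ensuremath{\textnormal{Leb}}}([0,1])$, $\varepsilon_m=0$, and verifies that $\hat f_m$ reduces to the piecewise linear interpolation of $f$ at the points $x_j^*$, hence reproduces an affine $f$ exactly on $[x_2^*,x_m^*]$.

```python
import numpy as np

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz  # NumPy 1.x/2.x

m = 6
mu = 1.0 / (m - 1)                                  # mu_m = nu_0(J_j)
xs = (2 * np.arange(2, m + 1) - 3) / (2 * m - 2)    # x_j^*, j = 2, ..., m

def V(j, x):
    """Standard triangular/trapezoidal choice of V_j, j = 2, ..., m."""
    if j == 2:          # constant 1/mu_m on (0, x_2^*], then linear down to 0
        nodes, vals = [0.0, xs[0], xs[1]], [1 / mu, 1 / mu, 0.0]
    elif j == m:        # linear up from 0, then constant 1/mu_m on (x_m^*, 1]
        nodes, vals = [xs[-2], xs[-1], 1.0], [0.0, 1 / mu, 1 / mu]
    else:               # interior hat function peaked at x_j^*
        k = j - 2
        nodes, vals = [xs[k - 1], xs[k], xs[k + 1]], [0.0, 1 / mu, 0.0]
    return np.interp(x, nodes, vals)  # extends constantly outside the nodes

# evaluation grid containing every kink, so the trapezoid rule below is exact
grid = np.union1d(np.linspace(0.0, 1.0, 2001), xs)

total = sum(V(j, grid) for j in range(2, m + 1)) * mu           # should be 1
masses = [trapz(V(j, grid), grid) for j in range(2, m + 1)]     # each should be 1

# For affine f, int_{J_j} f dnu_0 = mu_m * f(x_j^*), so hat f_m is exactly the
# piecewise linear interpolation of f at the points x_j^*.
f = lambda x: 1.0 + 0.5 * x
fhat = sum(V(j, grid) * mu * f(xs[j - 2]) for j in range(2, m + 1))
```

On the boundary pieces $(0,x_2^*]$ and $(x_m^*,1]$, $\hat f_m$ is constant, so exactness only holds on the inner interval; this is the boundary effect that the edge functions $V_2$ and $V_m$ are designed to control.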
Such a choice allows us to gain an order of magnitude on the convergence rate of $\|f-\hat f_m\|_{L_2(\nu_0|{I\setminus{[-\varepsilon_m,\varepsilon_m]}})}$, at least when ${\ensuremath {\mathscr{F}}}$ is a class of sufficiently smooth functions. We now explain the assumptions we will need to make on the parameter $f \in {\ensuremath {\mathscr{F}}}= {\ensuremath {\mathscr{F}}}^{\nu_0, I}$. The superscripts $\nu_0$ and $I$ will be suppressed whenever no confusion can arise. We require that: 1. (H1) There exist constants $\kappa, M >0$ such that $\kappa\leq f(y)\leq M$, for all $y\in I$ and all $f\in {\ensuremath {\mathscr{F}}}$. For every integer $m=m_n$, we can consider $\widehat{\sqrt{f}}_m$, the approximation of $\sqrt{f}$ constructed as $\hat f_m$ above, i.e. $\widehat{\sqrt{f}}_m(x)=\displaystyle{{\ensuremath {\mathbb{I}}}_{[-\varepsilon_m,\varepsilon_m]}(x)+\sum_{\substack{j=-m,\dots,m\\ j\neq -1,0,1}}V_j(x)\int_{J_j} \sqrt{f(y)}\nu_0(dy)}$, and introduce the quantities: $$\begin{aligned} A_m^2(f)&:= \int_{I\setminus \big[-\varepsilon_m,\varepsilon_m\big]}\Big(\widehat{\sqrt {f}}_m(y)-\sqrt{f(y)}\Big)^2\nu_0(dy),\\ B_m^2(f)&:= \sum_{\substack{j=-m,\dots,m\\ j\neq -1,0,1}}\bigg(\int_{J_j}\frac{\sqrt{f(y)}}{\sqrt{\nu_0(J_j)}}\nu_0(dy)-\sqrt{\nu(J_j)}\bigg)^2,\\ C_m^2(f)&:= \int_{-\varepsilon_m}^{\varepsilon_m}\big(\sqrt{f(t)}-1\big)^2\nu_0(dt). \end{aligned}$$ The conditions defining the parameter space ${\ensuremath {\mathscr{F}}}$ are expressed by asking that the quantities introduced above converge quickly enough to zero. To state the assumptions of Theorem \[ch4teo1\] precisely, we will assume the existence of sequences of discretizations $m = m_n\to\infty$, of positive numbers $\varepsilon_m=\varepsilon_{m_n}\to 0$ and of functions $V_j$, $j = \pm 2, \dots, \pm m$, such that: 1.
(C1) $\lim\limits_{n \to \infty}n\Delta_n\sup\limits_{f \in{\ensuremath {\mathscr{F}}}}\displaystyle{\int_{I\setminus(-\varepsilon_m,\varepsilon_m)}}\Big(f(x)-\hat f_m(x)\Big)^2 \nu_0(dx) = 0$. 2. (C2) $\lim\limits_{n \to \infty}n\Delta_n\sup\limits_{f \in{\ensuremath {\mathscr{F}}}} \big(A_m^2(f)+B_m^2(f)+C_m^2(f)\big)=0$. Remark in particular that Condition (C2) implies the following: 1. (H2) $\displaystyle \sup_{f\in{\ensuremath {\mathscr{F}}}}\int_I (\sqrt{f(y)}-1)^2 \nu_0(dy) \leq L,$ where $L = \sup_{f \in {\ensuremath {\mathscr{F}}}} \int_{-\varepsilon_m}^{\varepsilon_m} (\sqrt{f(x)}-1)^2\nu_0(dx) + (\sqrt{M}+1)^2\nu_0\big(I\setminus (-\varepsilon_m, \varepsilon_m)\big)$, for any choice of $m$ such that the quantity in the limit appearing in Condition (C2) is finite. Theorem \[ch4teo2\] has slightly stronger hypotheses, defining possibly smaller parameter spaces: we will assume the existence of sequences $m_n$, $\varepsilon_m$ and $V_j$, $j = \pm 2, \dots, \pm m$ (possibly different from the ones above) such that Condition (C1) is verified and the following stronger version of Condition (C2) holds: 1. (C2’) $\lim\limits_{n \to \infty}n\Delta_n\sup\limits_{f \in{\ensuremath {\mathscr{F}}}} \big(A_m^2(f)+B_m^2(f)+nC_m^2(f)\big)=0$. Finally, some of our results have a more explicit statement under the hypothesis of finite variation, which we state as: - (FV) $\int_I (|x|\wedge 1)\nu_0(dx)<\infty$. Condition (C1) and the conditions involving the quantities $A_m(f)$ and $B_m(f)$ all concern similar but slightly different approximations of $f$. In concrete examples, they may all be expected to have the same rate of convergence, but to keep the greatest generality we preferred to state them separately. On the other hand, the conditions on the quantity $C_m(f)$ are purely local around zero, requiring the parameters $f$ to converge quickly enough to 1.
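To make Conditions (C1) and (C2) concrete, the quantities $A_m(f)$ and $B_m(f)$ can be evaluated numerically. The sketch below (our own illustration; the smooth periodic $f$ is a hypothetical element of ${\ensuremath {\mathscr{F}}}$) computes them by plain quadrature in the simplest case $\nu_0={\ensuremath{\textnormal{Leb}}}([0,1])$, $\varepsilon_m=0$ (so that $C_m(f)=0$), with the standard triangular $V_j$'s, and checks that both quantities shrink as $m$ grows.

```python
import numpy as np

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz  # NumPy 1.x/2.x

def A_B(m, f):
    """A_m(f) and B_m(f) for nu_0 = Leb([0,1]), eps_m = 0, triangular V_j's."""
    mu = 1.0 / (m - 1)
    v = np.arange(m) / (m - 1)                          # v_1, ..., v_m
    xs = (2 * np.arange(2, m + 1) - 3) / (2 * m - 2)    # x_2^*, ..., x_m^*
    bins = [np.linspace(v[j - 2], v[j - 1], 801) for j in range(2, m + 1)]
    int_sf = np.array([trapz(np.sqrt(f(b)), b) for b in bins])  # int_{J_j} sqrt(f)
    int_f = np.array([trapz(f(b), b) for b in bins])            # nu(J_j)
    # hat{sqrt f}_m: piecewise linear interpolation of the bin averages of
    # sqrt(f) at the x_j^*'s, extended constantly outside [x_2^*, x_m^*]
    x = np.linspace(0.0, 1.0, 5001)
    hat_sf = np.interp(x, xs, int_sf / mu)
    A = np.sqrt(trapz((hat_sf - np.sqrt(f(x))) ** 2, x))
    B = np.sqrt(np.sum((int_sf / np.sqrt(mu) - np.sqrt(int_f)) ** 2))
    return A, B

f = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t)   # hypothetical element of F
A5, B5 = A_B(5, f)
A20, B20 = A_B(20, f)
```

Each summand of $B_m^2(f)$ is nonpositive before squaring (by the Cauchy–Schwarz inequality, the bin average of $\sqrt f$ never exceeds the square root of the bin average of $f$), so $B_m(f)$ measures the within-bin dispersion of $f$ and decays faster than $A_m(f)$.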
\[ex:ch4esempi\] To get a grasp on Conditions (C1) and (C2), we analyze here three different examples according to the behavior of $\nu_0$ near $0\in I$. In all of these cases the parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ will be a subclass of ${\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I$ defined as in \[ch4:fholder\]. Recall that the conditions (C1), (C2) and (C2’) depend on the choice of sequences $m_n$, $\varepsilon_m$ and functions $V_j$. For the first two of the three examples, where $I = [0,1]$, we will make the standard choice for $V_j$ of triangular and trapezoidal functions, similarly to those in Example \[ex:Vj\]. Namely, for $j = 3, \dots, m-1$ we have $$\label{eq:ch4vj} V_j(x) = {\ensuremath {\mathbb{I}}}_{(x_{j-1}^*, x_j^*]}(x) \frac{x-x_{j-1}^*}{x_j^*-x_{j-1}^*} \frac{1}{\mu_m} + {\ensuremath {\mathbb{I}}}_{(x_{j}^*, x_{j+1}^*]}(x) \frac{x_{j+1}^*-x}{x_{j+1}^*-x_{j}^*} \frac{1}{\mu_m};$$ the two extremal functions $V_2$ and $V_m$ are chosen so that $V_2 \equiv \frac{1}{\mu_m}$ on $(\varepsilon_m, x_2^*]$ and $V_m \equiv \frac{1}{\mu_m}$ on $(x_m^*, 1]$. In the second example, where $\nu_0$ is infinite, one is forced to take $\varepsilon_m > 0$ and to keep in mind that the $x_j^*$ are not uniformly distributed on $[\varepsilon_m,1]$. Proofs of all the statements here can be found in Section \[subsec:esempi\]. **1. The finite case:** $\nu_0\equiv {\ensuremath{\textnormal{Leb}}}([0,1])$. In this case we are free to choose ${\ensuremath {\mathscr{F}}}^{{\ensuremath{\textnormal{Leb}}}, [0,1]} = {\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^{[0,1]}$. Indeed, as $\nu_0$ is finite, there is no need to single out the first interval $J_1=[0,\varepsilon_m]$, so that $C_m(f)$ does not enter into the proofs and the definitions of $A_m(f)$ and $B_m(f)$ involve integrals on the whole of $[0,1]$. Also, the choice of the $V_j$’s as in \[eq:ch4vj\] guarantees that $\int_0^1 V_j(x) dx = 1$.
Then, the quantities $\|f-\hat f_m\|_{L_2([0,1])}$, $A_m(f)$ and $B_m(f)$ all have the same rate of convergence, which is given by: $$\sqrt{\int_0^1\Big(f(x)-\hat f_m(x)\Big)^2 \nu_0(dx)}+A_m(f)+B_m(f)=O\Big(m^{-\gamma-1}+m^{-\frac{3}{2}}\Big),$$ uniformly on $f$. See Section \[subsec:esempi\] for a proof. **2. The finite variation case:** $\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}(x)=x^{-1}{\ensuremath {\mathbb{I}}}_{[0,1]}(x)$. In this case, the parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, [0,1]}$ is a proper subset of ${\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^{[0,1]}$. Indeed, as we are obliged to choose $\varepsilon_m > 0$, we also need to impose that $C_m(f) = o\big(\frac{1}{n\sqrt{\Delta_n}}\big)$, with uniform constants with respect to $f$, that is, that all $f \in {\ensuremath {\mathscr{F}}}$ converge to 1 quickly enough as $x \to 0$. Choosing $\varepsilon_m = m^{-1-\alpha}$, $\alpha> 0$ we have that $\mu_m=\frac{\ln (\varepsilon_m^{-1})}{m-1}$, $v_j =\varepsilon_m^{\frac{m-j}{m-1}}$ and $x_j^* =\frac{(v_{j}-v_{j-1})}{\mu_m}$. In particular, $\max_j|v_{j-1}-v_j|=|v_m-v_{m-1}|=O\Big(\frac{\ln m}{m}\Big)$. Also in this case one can prove that the standard choice of $V_j$ described above leads to $\int_{\varepsilon_m}^1 V_j(x) \frac{dx}{x} = 1$. Again, the quantities $\|f-\hat f_m\|_{L_2(\nu_0|{I\setminus{[0,\varepsilon_m]}})}$, $A_m(f)$ and $B_m(f)$ have the same rate of convergence given by: $$\label{eq:ch4ex2} \sqrt{\int_{\varepsilon_m}^1\Big(f(x)-\hat f_m(x)\Big)^2 \nu_0(dx)} +A_m(f)+B_m(f)=O\bigg(\bigg(\frac{\ln m}{m}\bigg)^{\gamma+1} \sqrt{\ln (\varepsilon_m^{-1})}\bigg),$$ uniformly on $f$. The condition on $C_m(f)$ depends on the behavior of $f$ near $0$. For example, it is ensured if one considers a parametric family of the form $f(x)=e^{-\lambda x}$ with a bounded $\lambda > 0$. See Section \[subsec:esempi\] for a proof. **3. 
The infinite variation, non-compactly supported case:** $\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}(x)=x^{-2}{\ensuremath {\mathbb{I}}}_{{\ensuremath {\mathbb{R}}}_+}(x)$. This example involves significantly more computations than the preceding ones, since the classical triangular choice for the functions $V_j$ would not have integral equal to 1 (with respect to $\nu_0$), and the support is not compact. The parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, [0, \infty)}$ can still be chosen as a proper subclass of ${\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^{[0,\infty)}$, again by imposing that $C_m(f)$ converges to zero quickly enough (more details about this condition are discussed in Example \[ex3\]). We divide the interval $[0, \infty)$ in $m$ intervals $J_j = [v_{j-1}, v_j)$ with: $$v_0 = 0; \quad v_1 = \varepsilon_m; \quad v_j = \frac{\varepsilon_m(m-1)}{m-j};\quad v_m = \infty; \quad \mu_m = \frac{1}{\varepsilon_m(m-1)}.$$ To deal with the non-compactness problem, we choose some “horizon” $H(m)$ that goes to infinity slowly enough as $m$ goes to infinity and we bound the $L_2$ distance between $f$ and $\hat f_m$ for $x > H(m)$ by $2\sup\limits_{x\geq H(m)}\frac{f(x)^2}{H(m)}$. We have: $$\|f-\hat f_m\|_{L_2(\nu_0|{I\setminus{[0,\varepsilon_m]}})}^2+A_m^2(f)+B_m^2(f)=O\bigg(\frac{H(m)^{3+4\gamma}}{(\varepsilon_m m)^{2+2\gamma}}+\sup_{x\geq H(m)}\frac{f(x)^2}{H(m)}\bigg).$$ In the general case where the best estimate for $\displaystyle{\sup_{x\geq H(m)}f(x)^2}$ is simply given by $M^2$, an optimal choice for $H(m)$ is $\sqrt{\varepsilon_m m}$, that gives a rate of convergence: $$\|f-\hat f_m\|_{L_2(\nu_0|{I\setminus{[0,\varepsilon_m]}})}^2+A_m^2(f)+B_m^2(f) =O\bigg( \frac{1}{\sqrt{\varepsilon_m m}}\bigg),$$ independently of $\gamma$. See Section \[subsec:esempi\] for a proof. 
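The two non-uniform grids above are instances of the equal-mass requirement \[eq:ch4Jj\]: every cell $J_j$ receives the same $\nu_0$-mass $\mu_m$. The quick check below (our own; the values of $m$ and $\varepsilon_m$ are hypothetical) verifies this identity in closed form for the grids of the second and third examples.

```python
import numpy as np

m, eps = 50, 1e-3   # hypothetical m and eps_m

# Second example: d nu_0/dx = 1/x on [eps, 1], so nu_0((a, b]) = log(b / a)
v2 = eps ** ((m - np.arange(1, m + 1)) / (m - 1))   # v_j = eps^{(m-j)/(m-1)}
mass2 = np.log(v2[1:] / v2[:-1])                    # nu_0(J_j), j = 2, ..., m
mu2 = np.log(1.0 / eps) / (m - 1)                   # mu_m

# Third example: d nu_0/dx = 1/x^2 on [eps, inf), so nu_0((a, b]) = 1/a - 1/b
v3 = eps * (m - 1) / (m - np.arange(1, m))          # v_1, ..., v_{m-1}; v_m = inf
mass3 = 1.0 / v3[:-1] - 1.0 / v3[1:]                # nu_0(J_j), j = 2, ..., m-1
tail = 1.0 / v3[-1]                                 # nu_0(J_m) = nu_0((v_{m-1}, inf))
mu3 = 1.0 / (eps * (m - 1))                         # mu_m
```

In the second example the cells accumulate geometrically near $\varepsilon_m$, while in the third the unbounded last cell $(v_{m-1},\infty)$ still carries exactly the mass $\mu_m$, which is what makes the choice $v_m=\infty$ consistent.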
Definition of the experiments {#subsec:ch4experiments} ----------------------------- Let $(x_t)_{t\geq 0}$ be the canonical process on the Skorokhod space $(D,{\ensuremath {\mathscr{D}}})$ and denote by $P^{(b,0,\nu)}$ the law induced on $(D,{\ensuremath {\mathscr{D}}})$ by a Lévy process with characteristic triplet $(b,0,\nu)$. We will write $P_t^{(b,0,\nu)}$ for the restriction of $P^{(b,0,\nu)}$ to the $\sigma$-algebra ${\ensuremath {\mathscr{D}}}_t$ generated by $\{x_s:0\leq s\leq t\}$ (see Appendix \[sec:ch4levy\] for the precise definitions). Let $Q_t^{(b,0,\nu)}$ be the marginal law at time $t$ of a Lévy process with characteristic triplet ${(b,0,\nu)}$. In the case where $\int_{|y|\leq 1}|y|\nu(dy)<\infty$ we introduce the notation $\gamma^{\nu}:=\int_{|y|\leq 1}y\nu(dy)$; then, Condition (H2) guarantees the finiteness of $\gamma^{\nu-\nu_0}$ (see Remark 33.3 in [@sato] for more details). Recall that we introduced the discretization $t_i=T_n\frac{i}{n}$ of $[0,T_n]$, and denote by $\textbf Q_n^{(\gamma^{\nu-\nu_0},0,\nu)}$ the joint law of the $n+1$ marginals of $(x_t)_{t\geq 0}$ at times $t_i$, $i=0,\dots,n$.
We will consider the following statistical models, depending on a fixed, possibly infinite, Lévy measure $\nu_0$ concentrated on $I$ (clearly, the models with the subscript $FV$ are meaningful only under the assumption (FV)): $$\begin{aligned} {\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0}&=\bigg(D,{\ensuremath {\mathscr{D}}}_{T_n},\Big\{P_{T_n}^{(\gamma^{\nu},0,\nu)}:f:=\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\Big\}\bigg),\\ {\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0}&=\bigg({\ensuremath {\mathbb{R}}}^{n+1},{\ensuremath {\mathscr{B}}}({\ensuremath {\mathbb{R}}}^{n+1}),\Big\{ \textbf Q_{n}^{(\gamma^{\nu},0,\nu)}:f:=\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\Big\}\bigg),\\ {\ensuremath {\mathscr{P}}}_{n}^{\nu_0}&=\bigg(D,{\ensuremath {\mathscr{D}}}_{T_n},\Big\{P_{T_n}^{(\gamma^{\nu-\nu_0},0,\nu)}:f:=\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\Big\}\bigg),\\ {\ensuremath {\mathscr{Q}}}_{n}^{\nu_0}&=\bigg({\ensuremath {\mathbb{R}}}^{n+1},{\ensuremath {\mathscr{B}}}({\ensuremath {\mathbb{R}}}^{n+1}),\Big\{\textbf Q_{n}^{(\gamma^{\nu-\nu_0},0,\nu)}:f:=\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\Big\}\bigg). \end{aligned}$$ Finally, let us introduce the Gaussian white noise model that will appear in the statement of our main results. For that, let us denote by $(C(I),{\ensuremath {\mathscr{C}}})$ the space of continuous mappings from $I$ into ${\ensuremath {\mathbb{R}}}$ endowed with its standard filtration, by $g$ the density of $\nu_0$ with respect to the Lebesgue measure. We will require $g>0$ and let $\mathbb W_n^f$ be the law induced on $(C(I),{\ensuremath {\mathscr{C}}})$ by the stochastic process satisfying: $$\begin{aligned} \label{eqn:ch4Wf} dy_t=\sqrt{f(t)}dt+\frac{dW_t}{2\sqrt{T_n}\sqrt{g(t)}}, \quad t\in I,\end{aligned}$$ where $(W_t)_{t\in{\ensuremath {\mathbb{R}}}}$ denotes a Brownian motion on ${\ensuremath {\mathbb{R}}}$ with $W_0=0$. 
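The target experiment \[eqn:ch4Wf\] is easy to simulate by an Euler-type scheme, which gives a useful sanity check on the normalization of the noise. The sketch below (our own illustration; $f$, $g$ and the grid size are hypothetical choices) discretizes $dy_t$ on $I=[0,1]$; feeding in zero noise recovers the drift $\int_0^1\sqrt{f(t)}\,dt$.

```python
import numpy as np

def simulate_gwn(f, g, T_n, N, dW=None, rng=None):
    """Euler scheme for dy_t = sqrt(f(t)) dt + dW_t / (2 sqrt(T_n) sqrt(g(t)))
    on I = [0, 1]; pass dW explicitly (e.g. zeros) to control the noise."""
    dt = 1.0 / N
    t = dt * np.arange(N)                    # left endpoints of the time grid
    if dW is None:
        dW = (rng or np.random.default_rng()).normal(0.0, np.sqrt(dt), N)
    dy = np.sqrt(f(t)) * dt + dW / (2.0 * np.sqrt(T_n) * np.sqrt(g(t)))
    return np.concatenate(([0.0], np.cumsum(dy)))

f = lambda t: np.ones_like(t)    # hypothetical f in F
g = lambda t: np.ones_like(t)    # nu_0 = Leb([0, 1])
y = simulate_gwn(f, g, T_n=100.0, N=1000, dW=np.zeros(1000))
```

Note how the diffusion coefficient $1/(2\sqrt{T_n g(t)})$ shrinks where $g$ is large: the white noise experiment is more informative about $f$ exactly where $\nu_0$ puts more mass, mirroring the fact that more jumps with sizes in a region of $I$ are observed there.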
Then we set: $${\ensuremath {\mathscr{W}}}_n^{\nu_0}=\Big(C(I),{\ensuremath {\mathscr{C}}},\{\mathbb W_n^{f}:f\in{\ensuremath {\mathscr{F}}}^{\nu_0,I}\}\Big).$$ Observe that when $\nu_0$ is a finite Lévy measure, then ${\ensuremath {\mathscr{W}}}_n^{\nu_0}$ is equivalent to the statistical model associated with the continuous observation of a process $(\tilde y_t)_{t\in I}$ defined by: $$\begin{aligned} d\tilde y_t=\sqrt{f(t)g(t)}dt+\frac{d W_t}{2\sqrt{T_n}}, \quad t\in I.\end{aligned}$$ Main results {#subsec:ch4mainresults} ------------ Using the notation introduced in Section \[subsec:ch4parameter\], we now state our main results. For brevity of notation, we will denote by $H(f,\hat f_m)$ (resp. $L_2(f,\hat f_m)$) the Hellinger distance (resp. the $L_2$ distance) between the Lévy measures $\nu$ and $\hat\nu_m$ restricted to $I\setminus{[-\varepsilon_m,\varepsilon_m]}$, i.e.: $$\begin{aligned} H^2(f,\hat f_m)&:=\int_{I\setminus{[-\varepsilon_m,\varepsilon_m]}}\Big(\sqrt{f(x)}-\sqrt{\hat f_m(x)}\Big)^2 \nu_0(dx),\\ L_2(f,\hat f_m)^2&:=\int_{I\setminus{[-\varepsilon_m,\varepsilon_m]}}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy).\end{aligned}$$ Observe that Condition (H1) implies (see Lemma \[lemma:ch4hellinger\]) $$\frac{1}{4M}L_2(f,\hat f_m)^2\leq H^2(f,\hat f_m)\leq \frac{1}{4\kappa}L_2(f,\hat f_m)^2.$$ \[ch4teo1\] Let $\nu_0$ be a known Lévy measure concentrated on a (possibly infinite) interval $I\subseteq {\ensuremath {\mathbb{R}}}$ and having strictly positive density with respect to the Lebesgue measure. Let us choose a parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ such that there exist a sequence $m = m_n$ of integers, functions $V_j$, $j = \pm 2, \dots, \pm m$ and a sequence $\varepsilon_m \to 0$ as $m \to \infty$ such that Conditions [(H1), (C1), (C2)]{.nodecor} are satisfied for ${\ensuremath {\mathscr{F}}}= {\ensuremath {\mathscr{F}}}^{\nu_0, I}$. 
Then, for $n$ big enough we have: $$\begin{aligned} \Delta({\ensuremath {\mathscr{P}}}_n^{\nu_0}, {\ensuremath {\mathscr{W}}}_n^{\nu_0}) &= O\bigg(\sqrt{n\Delta_n}\sup_{f\in {\ensuremath {\mathscr{F}}}}\Big(A_m(f)+B_m(f)+C_m(f)\Big)\bigg) \nonumber \\ & +O\bigg(\sqrt{n\Delta_n}\sup_{f\in{\ensuremath {\mathscr{F}}}}L_2(f, \hat f_m)+\sqrt{\frac{m}{n\Delta_n}\Big(\frac{1}{\mu_m}+\frac{1}{\mu_m^-}\Big)}\bigg). \label{eq:teo1}\end{aligned}$$ \[ch4teo2\] Let $\nu_0$ be a known Lévy measure concentrated on a (possibly infinite) interval $I\subseteq {\ensuremath {\mathbb{R}}}$ and having strictly positive density with respect to the Lebesgue measure. Let us choose a parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ such that there exist a sequence $m = m_n$ of integers, functions $V_j$, $j = \pm 2, \dots, \pm m$ and a sequence $\varepsilon_m \to 0$ as $m \to \infty$ such that Conditions [(H1), (C1), (C2’)]{.nodecor} are satisfied for ${\ensuremath {\mathscr{F}}}= {\ensuremath {\mathscr{F}}}^{\nu_0, I}$. Then, for $n$ big enough we have: $$\begin{aligned} \Delta({\ensuremath {\mathscr{Q}}}_n^{\nu_0}, {\ensuremath {\mathscr{W}}}_n^{\nu_0})& = O\bigg( \nu_0\Big(I\setminus[-\varepsilon_m,\varepsilon_m]\Big)\sqrt{n\Delta_n^2}+\frac{m\ln m}{\sqrt{n}}+\sqrt{n\sqrt{\Delta_n}\sup_{f\in{\ensuremath {\mathscr{F}}}}C_m(f)}\bigg) \nonumber \\ &+O\bigg(\sqrt{n\Delta_n}\sup_{f\in{\ensuremath {\mathscr{F}}}}\Big(A_m(f)+B_m(f)+H(f,\hat f_m)\Big)\bigg).\label{eq:teo2}\end{aligned}$$ \[cor:ch4generale\] Let $\nu_0$ be as above and let us choose a parameter space ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ so that there exist sequences $m_n'$, $\varepsilon_m'$, $V_j'$ and $m_n''$, $\varepsilon_m''$, $V_j''$ such that: - Conditions (H1), (C1) and (C2) hold for $m_n'$, $\varepsilon_m'$, $V_j'$, and $\frac{m'}{n\Delta_n}\Big(\frac{1}{\mu_{m'}}+\frac{1}{\mu_{m'}^-}\Big)$ tends to zero. 
- Conditions (H1), (C1) and (C2’) hold for $m_n''$, $\varepsilon_m''$, $V_j''$, and $\nu_0\Big(I\setminus[-\varepsilon_{m''},\varepsilon_{m''}]\Big)\sqrt{n\Delta_n^2}+\frac{m''\ln m''}{\sqrt{n}}$ tends to zero. Then the statistical models ${\ensuremath {\mathscr{P}}}_{n}^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_{n}^{\nu_0}$ are asymptotically equivalent: $$\lim_{n\to\infty}\Delta({\ensuremath {\mathscr{P}}}_{n}^{\nu_0},{\ensuremath {\mathscr{Q}}}_{n}^{\nu_0})=0.$$ If, in addition, the Lévy measures have finite variation, i.e. if we assume (FV), then the same results hold replacing ${\ensuremath {\mathscr{P}}}_{n}^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_{n}^{\nu_0}$ by ${\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0}$ and ${\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0}$, respectively (see Lemma \[ch4LC\]). Examples {#sec:ch4experiments} ======== We will now analyze three different examples, underlining the different behaviors of the Lévy measure $\nu_0$ (respectively, finite, infinite with finite variation and infinite with infinite variation). The three chosen Lévy measures are ${\ensuremath {\mathbb{I}}}_{[0,1]}(x) dx$, ${\ensuremath {\mathbb{I}}}_{[0,1]}(x) \frac{dx}{x}$ and ${\ensuremath {\mathbb{I}}}_{{\ensuremath {\mathbb{R}}}_+}(x)\frac{dx}{x^2}$. In all three cases we assume the parameter $f$ to be uniformly bounded and with uniformly $\gamma$-Hölder derivatives: we will describe adequate subclasses ${\ensuremath {\mathscr{F}}}^{\nu_0, I} \subseteq {\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^I$ defined as in \[ch4:fholder\]. It seems very likely that the results highlighted in these examples hold for more general Lévy measures; however, we limit ourselves to these examples in order to be able to explicitly compute the quantities involved ($v_j$, $x_j^*$, etc.) and hence estimate the distance between $f$ and $\hat f_m$ as in Examples \[ex:ch4esempi\].
In the first of the three examples, where $\nu_0$ is the Lebesgue measure on $I=[0,1]$, we are considering the statistical models associated with the discrete and continuous observation of a compound Poisson process with Lévy density $f$. Observe that ${\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}}$ reduces to the statistical model associated with the continuous observation of a trajectory from: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n}}dW_t,\quad t\in [0,1].$$ In this case we have: \[ex:ch4CPP\](Finite Lévy measure). Let $\nu_0$ be the Lebesgue measure on $I=[0,1]$ and let ${\ensuremath {\mathscr{F}}}= {\ensuremath {\mathscr{F}}}^{{\ensuremath{\textnormal{Leb}}}, [0,1]}$ be any subclass of ${\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^{[0,1]}$ for some strictly positive constants $K$, $\kappa$, $M$ and $\gamma\in(0,1]$. Then: $$\lim_{n\to\infty}\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})=0 \ \textnormal{ and } \ \lim_{n\to\infty}\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})=0.$$ More precisely, $$\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})=\begin{cases}O\Big((n\Delta_n)^{-\frac{\gamma}{4+2\gamma}}\Big)\quad \textnormal{if } \ \gamma\in\big(0,\frac{1}{2}\big],\\ O\Big((n \Delta_n)^{-\frac{1}{10}}\Big)\quad \textnormal{if } \ \gamma\in\big(\frac{1}{2},1\big]. 
\end{cases}$$ In the case where $\Delta_n = n^{-\beta}$, $\frac{1}{2} < \beta < 1$, an upper bound for the rate of convergence of $\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}}, {\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})$ is $$\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}}, {\ensuremath {\mathscr{W}}}_n^{{\ensuremath{\textnormal{Leb}}}})=\begin{cases} O\Big(n^{-\frac{\gamma+\beta}{4+2\gamma}}\ln n\Big)\quad \textnormal{if } \ \gamma\in\big(0,\frac{1}{2}\big) \text{ and }\frac{2+2\gamma}{3+2\gamma} \leq \beta < 1,\\ O\Big(n^{\frac{1}{2}-\beta}\ln n\Big)\quad \textnormal{if } \ \gamma\in\big(0,\frac{1}{2}\big) \text{ and } \frac{1}{2} < \beta < \frac{2+2\gamma}{3+2\gamma},\\ O\Big(n^{-\frac{2\beta+1}{10}}\ln n\Big)\quad \textnormal{if } \ \gamma\in\big[\frac{1}{2},1\big] \text{ and } \frac{3}{4} \leq \beta < 1,\\ O\Big(n^{\frac{1}{2}-\beta}\ln n\Big)\quad \textnormal{if } \ \gamma\in\big[\frac{1}{2},1\big] \text{ and } \frac{1}{2} < \beta < \frac{3}{4}. \end{cases}$$ See Section \[subsec:ch4ex1\] for a proof. \[ch4ex2\](Infinite Lévy measure with finite variation). Let $X$ be a truncated Gamma process with (infinite) Lévy measure of the form: $$\nu(A)=\int_A \frac{e^{-\lambda x}}{x}dx,\quad A\in{\ensuremath {\mathscr{B}}}([0,1]).$$ Here ${\ensuremath {\mathscr{F}}}^{\nu_0, I}$ is a 1-dimensional parametric family in $\lambda$, assuming that there exists a known constant $\lambda_0$ such that $0<\lambda\leq \lambda_0<\infty$, $f(t) = e^{-\lambda t}$ and $d\nu_0(x)=\frac{1}{x}dx$. In particular, the $f$ are Lipschitz, i.e. ${\ensuremath {\mathscr{F}}}^{\nu_0, [0,1]} \subset {\ensuremath {\mathscr{F}}}_{(\gamma = 1, K, \kappa, M)}^{[0,1]}$. 
The discrete or continuous observations (up to time $T_n$) of $X$ are asymptotically equivalent to ${\ensuremath {\mathscr{W}}}_n^{\nu_0}$, the statistical model associated with the observation of a trajectory of the process $(y_t)$: $$dy_t=\sqrt{f(t)}dt+\frac{\sqrt tdW_t}{2\sqrt{T_n}},\quad t\in[0,1].$$ More precisely, in the case where $\Delta_n = n^{-\beta}$, $\frac{1}{2} < \beta < 1$, an upper bound for the rate of convergence of $\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0}, {\ensuremath {\mathscr{W}}}_n^{\nu_0})$ is $$\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) = \begin{cases} O\big(n^{\frac{1}{2}-\beta} \ln n\big) & \text{if } \frac{1}{2} < \beta \leq \frac{9}{10}\\ O\big(n^{-\frac{1+2\beta}{7}} \ln n\big) & \text{if } \frac{9}{10} < \beta < 1. \end{cases}$$ Concerning the continuous setting we have: $$\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\Big(n^{\frac{\beta-1}{6}} \big(\ln n\big)^{\frac{5}{2}}\Big) = O\Big(T_n^{-\frac{1}{6}} \big(\ln T_n\big)^\frac{5}{2}\Big).$$ See Section \[subsec:ch4ex2\] for a proof. \[ex3\](Infinite Lévy measure, infinite variation). Let $X$ be a pure jump Lévy process with infinite Lévy measure of the form: $$\nu(A)=\int_A \frac{2-e^{-\lambda x^3}}{x^2}dx,\quad A\in{\ensuremath {\mathscr{B}}}({\ensuremath {\mathbb{R}}}^+).$$ Again, we are considering a parametric family in $\lambda > 0$, assuming that the parameter remains bounded by a known constant $\lambda_0$. Here, $f(t) =2- e^{-\lambda t^3}$, hence $1\leq f(t)\leq 2$, for all $t\geq 0$, and $f$ is Lipschitz, i.e. ${\ensuremath {\mathscr{F}}}^{\nu_0, {\ensuremath {\mathbb{R}}}_+} \subset {\ensuremath {\mathscr{F}}}_{(\gamma = 1, K, \kappa, M)}^{{\ensuremath {\mathbb{R}}}_+}$.
The discrete or continuous observations (up to time $T_n$) of $X$ are asymptotically equivalent to the statistical model associated with the observation of a trajectory of the process $(y_t)$: $$dy_t=\sqrt{f(t)}dt+\frac{tdW_t}{2\sqrt{T_n}},\quad t\geq 0.$$ More precisely, in the case where $\Delta_n = n^{-\beta}$, $0 < \beta < 1$, an upper bound for the rate of convergence of $\Delta({\ensuremath {\mathscr{Q}}}_{n}^{\nu_0}, {\ensuremath {\mathscr{W}}}_n^{\nu_0})$ is $$\Delta({\ensuremath {\mathscr{Q}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) = \begin{cases} O\big(n^{\frac{1}{2} - \frac{2}{3}\beta}\big)& \text{if } \frac{3}{4} < \beta < \frac{12}{13}\\ O\big(n^{-\frac{1}{6}+\frac{\beta}{18}} (\ln n)^{\frac{7}{6}}\big) &\text{if } \frac{12}{13}\leq \beta<1. \end{cases}$$ In the continuous setting, we have $$\Delta({\ensuremath {\mathscr{P}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\big(n^{\frac{3\beta-3}{34}}(\ln n)^{\frac{7}{6}}\big) = O\big(T_n^{-\frac{3}{34}} (\ln T_n)^{\frac{7}{6}}\big).$$ See Section \[subsec:ch4ex3\] for a proof. Proofs of the main results {#sec:ch4proofs} ========================== In order to simplify the notation, the proofs will be presented in the case $I\subseteq {\ensuremath {\mathbb{R}}}^+$. This setting nevertheless exhibits all the main difficulties, since these can only appear near 0. To prove Theorems \[ch4teo1\] and \[ch4teo2\] we need to introduce several intermediate statistical models. In that regard, let us denote by $Q_j^f$ the law of a Poisson random variable with mean $T_n\nu(J_j)$ (see \[eq:ch4Jj\] for the definition of $J_{j}$).
We will denote by $\mathscr{L}_m$ the statistical model associated with the family of probabilities $\big\{\bigotimes_{j=2}^m Q_j^f:f\in{\ensuremath {\mathscr{F}}}\big\}$: $$\label{eq:ch4l} \mathscr{L}_m=\bigg(\bar{{\ensuremath {\mathbb{N}}}}^{m-1},\mathcal P(\bar{{\ensuremath {\mathbb{N}}}}^{m-1}), \bigg\{\bigotimes_{j=2}^m Q_j^f:f\in{\ensuremath {\mathscr{F}}}\bigg\}\bigg).$$ By $N_{j}^f$ we mean the law of a Gaussian random variable ${\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)$ and by $\mathscr{N}_m$ the statistical model associated with the family of probabilities $\big\{\bigotimes_{j=2}^m N_j^f:f\in{\ensuremath {\mathscr{F}}}\big\}$: $$\label{eq:ch4n} \mathscr{N}_m=\bigg({\ensuremath {\mathbb{R}}}^{m-1},\mathscr B({\ensuremath {\mathbb{R}}}^{m-1}), \bigg\{\bigotimes_{j=2}^m N_j^f:f\in{\ensuremath {\mathscr{F}}}\bigg\}\bigg).$$ For each $f\in{\ensuremath {\mathscr{F}}}$, let $\bar \nu_m$ be the measure having $\bar f_m$ as a density with respect to $\nu_0$, where $\bar f_m$ is defined as follows: $$\label{eq:ch4barf} \bar f_m(x):= \begin{cases} \quad 1 & \textnormal{if } x\in J_1,\\ \frac{\nu(J_j)}{{\nu_0}(J_{j})} & \textnormal{if } x\in J_{j}, \quad j = 2,\dots,m. \end{cases}$$ Furthermore, define $$\label{eq:ch4modellobar} \bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}=\bigg(D,{\ensuremath {\mathscr{D}}}_{T_n},\Big\{P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar\nu_m)}:\frac{d\bar\nu_m}{d\nu_0}\in{\ensuremath {\mathscr{F}}}\Big\}\bigg).$$ Proof of Theorem \[ch4teo1\] ---------------------------- We begin with a series of lemmas that will be needed in the proof. Before doing so, let us outline the scheme of the proof.
We recall that the goal is to prove that estimating $f=\frac{d\nu}{d\nu_0}$ from the continuous observation of a Lévy process $(X_t)_{t\in[0,T_n]}$ without Gaussian part and having Lévy measure $\nu$ is asymptotically equivalent to estimating $f$ from the Gaussian white noise model: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n g(t)}}dW_t,\quad g=\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}},\quad t\in I.$$ Also, recall the definition of $\hat \nu_m$ given in and read ${\ensuremath {\mathscr{P}}}_1 \overset{\Delta} \Longleftrightarrow {\ensuremath {\mathscr{P}}}_2$ as ${\ensuremath {\mathscr{P}}}_1$ is asymptotically equivalent to ${\ensuremath {\mathscr{P}}}_2$. Then, we can outline the proof in the following way. - Step 1: $P_{T_n}^{(\gamma^{\nu-\nu_0},0,\nu)} \overset{\Delta} \Longleftrightarrow P_{T_n}^{(\gamma^{\hat\nu_m-\nu_0},0,\hat\nu_m)}$; - Step 2: $P_{T_n}^{(\gamma^{\hat\nu_m-\nu_0},0,\hat\nu_m)} \overset{\Delta} \Longleftrightarrow \bigotimes_{j=2}^m {\ensuremath {\mathscr{P}}}(T_n\nu(J_j))$ (Poisson approximation). Here $\bigotimes_{j=2}^m {\ensuremath {\mathscr{P}}}(T_n\nu(J_j))$ represents a statistical model associated with the observation of $m-1$ independent Poisson r.v. of parameters $T_n\nu(J_j)$; - Step 3: $\bigotimes_{j=2}^m {\ensuremath {\mathscr{P}}}(T_n \nu(J_j)) \overset{\Delta} \Longleftrightarrow \bigotimes_{j=2}^m {\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)$ (Gaussian approximation); - Step 4: $\bigotimes_{j=2}^m {\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)\overset{\Delta} \Longleftrightarrow (y_t)_{t\in I}$. Lemmas \[lemma:ch4poisson\]–\[lemma:ch4kernel\], below, are the key ingredients of Step 2. \[lemma:ch4poisson\] Let $\bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}$ and $\mathscr{L}_m$ be the statistical models defined in and , respectively. 
Under Assumption (H2) we have: $$\Delta(\bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}, \mathscr{L}_m)=0, \textnormal{ for all } m.$$ Denote by $\bar {\ensuremath {\mathbb{N}}}={\ensuremath {\mathbb{N}}}\cup \{\infty\}$ and consider the statistic $S:(D,{\ensuremath {\mathscr{D}}}_{T_n})\to \big(\bar{\ensuremath {\mathbb{N}}}^{m-1},\mathcal{P}(\bar{\ensuremath {\mathbb{N}}}^{m-1})\big)$ defined by $$\label{eq:ch4S} S(x)=\Big(N_{T_n}^{x;\,2},\dots,N_{T_n}^{x;\,m}\Big)\quad \textnormal{with} \quad N_{T_n}^{x;\,j}=\sum_{r\leq T_n}{\ensuremath {\mathbb{I}}}_{J_{j}}(\Delta x_r).$$ An application of Theorem \[ch4teosato\] to $P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar \nu_m)}$ and $P_{T_n}^{(0,0,\nu_0)}$ yields $$\frac{d P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar \nu_m)}}{dP_{T_n}^{(0,0,\nu_0)}}(x)=\exp\bigg(\sum_{j=2}^m \bigg(\ln\Big(\frac{\nu(J_j)}{\nu_0(J_j)}\Big)\bigg) N_{T_n}^{x;j}-T_n\int_I(\bar f_m(y)-1)\nu_0(dy)\bigg).$$ Hence, by means of the Fisher factorization theorem, we conclude that $S$ is a sufficient statistic for $\bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}$. Furthermore, under $P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar \nu_m)}$, the random variables $N_{T_n}^{x;j}$ have Poisson distributions $Q_{j}^f$ with means $T_n\nu(J_j)$. Then, by means of Property \[ch4fatto3\], we get $\Delta(\bar{\ensuremath {\mathscr{P}}}_{n}^{\nu_0}, \mathscr{L}_m)=0, \textnormal{ for all } m.$ Let us denote by $\hat Q_j^f$ the law of a Poisson random variable with mean $T_n\int_{J_j}\hat f_m(y)\nu_0(dy)$ and let $\hat{\mathscr{L}}_m$ be the statistical model associated with the family of probabilities $\{\bigotimes_{j=2}^m \hat Q_j^f:f\in {\ensuremath {\mathscr{F}}}\}$.
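The distributional fact at the heart of Lemma \[lemma:ch4poisson\] — under $P_{T_n}^{(\gamma^{\bar\nu_m-\nu_0},0,\bar\nu_m)}$ the bin counts $N_{T_n}^{x;j}$ are Poisson with means $T_n\nu(J_j)$ — can be illustrated by a small simulation. The bins, the toy measure $\nu(dx)=x^{-2}dx$ and all constants below are made up for illustration only; this is a sketch, not part of the proof.

```python
import math
import random

random.seed(0)

# Toy illustration: for a compound Poisson path, the number of jumps falling
# in a bin J_j up to time T_n is Poisson with mean T_n * nu(J_j).
# The bins and the measure nu(dx) = dx / x^2 are hypothetical choices.
T_n = 50.0
bins = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0)]      # intervals J_j
nu_mass = [1 / a - 1 / b for a, b in bins]       # nu(J_j)
total = sum(nu_mass)

def poisson(lam):
    # Knuth's sampler; adequate for moderate lam
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def one_path_counts():
    counts = [0] * len(bins)
    for _ in range(poisson(T_n * total)):        # number of jumps on [0, T_n]
        u = random.random() * total              # bin of the jump, prop. to nu(J_j)
        for j, m in enumerate(nu_mass):
            if u < m:
                counts[j] += 1
                break
            u -= m
    return counts

paths = [one_path_counts() for _ in range(400)]
emp_mean = [sum(c[j] for c in paths) / len(paths) for j in range(len(bins))]
```

With these toy values the empirical means should be close to $T_n\nu(J_j)$, i.e. to $50$, $25$ and $12.5$.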
\[lemma:ch4poissonhatf\] $$\Delta(\mathscr L_m,\hat{\mathscr{L}}_m)\leq \sup_{f\in {\ensuremath {\mathscr{F}}}}\sqrt{\frac{T_n}{\kappa}\int_{I\setminus[0,\varepsilon_m]}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy)}.$$ By means of Facts \[ch4h\]–\[fact:ch4hellingerpoisson\], we get: $$\begin{aligned} \Delta(\mathscr L_m,\hat{\mathscr{L}}_m)&\leq \sup_{f\in{\ensuremath {\mathscr{F}}}}H\bigg(\bigotimes_{j=2}^m Q_j^f,\bigotimes_{j=2}^m \hat Q_j^f\bigg)\\ &\leq \sup_{f\in{\ensuremath {\mathscr{F}}}}\sqrt{\sum_{j=2}^m 2 H^2(Q_j^f,\hat Q_j^f)}\\ & =\sup_{f\in{\ensuremath {\mathscr{F}}}}\sqrt 2\sqrt{\sum_{j=2}^m\bigg(1-\exp\bigg(-\frac{T_n}{2}\bigg[\sqrt{\int_{J_j}\hat f_m(y)\nu_0(dy)}-\sqrt{\int_{J_j} f(y)\nu_0(dy)}\bigg]^2\bigg)\bigg)}.\end{aligned}$$ Using the fact that $1-e^{-x}\leq x$ for all $x\geq 0$, the equality $\sqrt a-\sqrt b= \frac{a-b}{\sqrt a+\sqrt b}$, the lower bound $f\geq \kappa$ (which also implies $\hat f_m\geq \kappa$) and, finally, the Cauchy-Schwarz inequality, we obtain: $$\begin{aligned} &1-\exp\bigg(-\frac{T_n}{2}\bigg[\sqrt{\int_{J_j}\hat f_m(y)\nu_0(dy)}-\sqrt{\int_{J_j} f(y)\nu_0(dy)}\bigg]^2\bigg)\\ &\leq \frac{T_n}{2}\bigg[\sqrt{\int_{J_j}\hat f_m(y)\nu_0(dy)}-\sqrt{\int_{J_j} f(y)\nu_0(dy)}\bigg]^2\\ & \leq \frac{T_n}{2} \frac{\bigg(\int_{J_j}(f(y)-\hat f_m(y))\nu_0(dy)\bigg)^2}{\kappa \nu_0(J_j)}\\ &\leq \frac{T_n}{2\kappa} \int_{J_j}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy). \end{aligned}$$ Hence, $$H\bigg(\bigotimes_{j=2}^m Q_j^f,\bigotimes_{j=2}^m \hat Q_j^f\bigg)\leq \sqrt{\frac{T_n}{\kappa}\int_{I\setminus[0,\varepsilon_m]}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy)}.$$ \[lemma:ch4kernel\] Let $\hat\nu_m$ and $\bar \nu_m$ be the Lévy measures defined as in and , respectively.
For every $f\in {\ensuremath {\mathscr{F}}}$, there exists a Markov kernel $K$ such that $$KP_{T_n}^{(\gamma^{\bar\nu_m-\nu_0},0,\bar\nu_m)}=P_{T_n}^{(\gamma^{\hat \nu_m-\nu_0},0,\hat \nu_m)}.$$ By construction, $\bar\nu_m$ and $\hat\nu_m$ coincide on $[0,\varepsilon_m]$. Let us denote by $\bar \nu_m^{\textnormal{res}}$ and $\hat\nu_m^{\textnormal{res}}$ the restrictions to $I\setminus[0,\varepsilon_m]$ of $\bar\nu_m$ and $\hat\nu_m$, respectively; then it is enough to prove: $KP_{T_n}^{(\gamma^{\bar\nu_m^{\textnormal{res}}-\nu_0},0,\bar\nu_m^{\textnormal{res}})}=P_{T_n}^{(\gamma^{\hat \nu_m^{\textnormal{res}}-\nu_0},0,\hat \nu_m^{\textnormal{res}})}.$ First of all, let us observe that the kernel $M$: $$M(x,A)=\sum_{j=2}^m{\ensuremath {\mathbb{I}}}_{J_j}(x)\int_A V_j(y)\nu_0(dy),\quad x\in I\setminus[0,\varepsilon_m],\quad A\in{\ensuremath {\mathscr{B}}}(I\setminus[0,\varepsilon_m])$$ is defined in such a way that $M \bar\nu_m^{\textnormal{res}} = \hat \nu_m^{\textnormal{res}}$. Indeed, for all $A\in{\ensuremath {\mathscr{B}}}(I\setminus[0,\varepsilon_m])$, $$\begin{aligned} M\bar\nu_m^{\textnormal{res}}(A)&=\sum_{j=2}^m\int_{J_j}M(x,A)\bar\nu_m^{\textnormal{res}}(dx)=\sum_{j=2}^m \int_{J_j}\bigg(\int_A V_j(y)\nu_0(dy)\bigg)\bar\nu_m^{\textnormal{res}}(dx)\nonumber\\ &=\sum_{j=2}^m \bigg(\int_A V_j(y)\nu_0(dy)\bigg)\nu(J_j)=\int_A \hat f_m(y)\nu_0(dy)=\hat \nu_m^{\textnormal{res}}(A). \label{eqn:M} \end{aligned}$$ Observe that $(\gamma^{\bar\nu_m^{\textnormal{res}}-\nu_0},0,\bar\nu_m^{\textnormal{res}})$ and $(\gamma^{\hat \nu_m^{\textnormal{res}}-\nu_0},0,\hat \nu_m^{\textnormal{res}})$ are Lévy triplets associated with compound Poisson processes, since $\bar\nu_m^{\textnormal{res}}$ and $\hat \nu_m^{\textnormal{res}}$ are finite Lévy measures. The Markov kernel $K$ carrying one of these laws into the other can therefore be constructed explicitly, as is possible for compound Poisson processes.
Indeed if $\bar X$ is the compound Poisson process having Lévy measure $\bar\nu_m^{\textnormal{res}}$, then $\bar X_{t} = \sum_{i=1}^{N_t} \bar Y_{i}$, where $N_t$ is a Poisson process of intensity $\iota_m:=\bar\nu_m^{\textnormal{res}}(I\setminus [0,\varepsilon_m])$ and the $\bar Y_{i}$ are i.i.d. random variables with probability law $\frac{1}{\iota_m}\bar\nu_m^{\textnormal{res}}$. Moreover, given a trajectory of $\bar X$, both the trajectory $(n_t)_{t\in[0,T_n]}$ of the Poisson process $(N_t)_{t\in[0,T_n]}$ and the realizations $\bar y_i$ of $\bar Y_i$, $i=1,\dots,n_{T_n}$, are uniquely determined. This allows us to construct $n_{T_n}$ i.i.d. random variables $\hat Y_i$ as follows: for every realization $\bar y_i$ of $\bar Y_i$, we define the realization $\hat y_i$ of $\hat Y_i$ by drawing it according to the probability law $M(\bar y_i,\cdot)$. Hence, thanks to , $(\hat Y_i)_i$ are i.i.d. random variables with probability law $\frac{1}{\iota_m} \hat \nu_m^{\text{res}}$. The desired Markov kernel $K$ (defined on the Skorokhod space) is then given by: $$K : (\bar X_{t})_{t\in[0,T_n]} \longmapsto \bigg(\hat X_{t} := \sum_{i=1}^{N_t} \hat Y_{i}\bigg)_{t\in[0,T_n]}.$$ Finally, observe that, since $$\begin{aligned} \iota_m=\int_{I\setminus[0,\varepsilon_m]}\bar f_m(y)\nu_0(dy)&=\int_{I\setminus[0,\varepsilon_m]} f(y)\nu_0(dy)=\int_{I\setminus[0,\varepsilon_m]}\hat f_m(y)\nu_0(dy), \end{aligned}$$ $(\hat X_t)_{t\in[0,T_n]}$ is a compound Poisson process with Lévy measure $\hat\nu_m^{\textnormal{res}}.$ Let us now state two lemmas needed to understand Step 4.
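The jump-redrawing mechanism in the proof of Lemma \[lemma:ch4kernel\] can be sketched numerically. Here the bins, the masses and the choice of $M(x,\cdot)$ as the uniform law on the bin containing $x$ (i.e. $V_j={\mathbb{I}}_{J_j}/\nu_0(J_j)$ with $\nu_0$ Lebesgue) are all illustrative assumptions, not the objects of the theorem:

```python
import random

random.seed(1)

# Sketch: the jump sizes of bar X sit on one atom per bin (law bar_nu_m / iota_m);
# redrawing each size through M(y, .) spreads it inside its bin while preserving
# the bin masses, mirroring the identity M bar_nu_m^res = hat_nu_m^res.
bins = [(0.0, 1.0), (1.0, 2.0)]      # made-up bins J_j
probs = [0.3, 0.7]                    # nu(J_j) / iota_m
atoms = [0.5, 1.5]                    # one atom of bar_nu_m per bin

def redraw(y):
    # hat_y ~ M(y, .): here, uniform on the bin containing y
    for lo, hi in bins:
        if lo <= y < hi:
            return lo + random.random() * (hi - lo)

old_sizes = random.choices(atoms, weights=probs, k=20000)
new_sizes = [redraw(y) for y in old_sizes]
frac_first_bin = sum(1 for y in new_sizes if y < 1.0) / len(new_sizes)
```

Because the kernel acts within bins, the redrawn sizes still put mass $\approx 0.3$ on the first bin, while no longer being concentrated on the atoms.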
\[lemma:ch4wn\] Denote by ${\ensuremath {\mathscr{W}}}_m^\#$ the statistical model associated with the continuous observation of a trajectory from the Gaussian white noise: $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n}\sqrt{g(t)}}dW_t,\quad t\in I\setminus [0,\varepsilon_m].$$ Then, according to the notation introduced in Section \[subsec:ch4parameter\] and at the beginning of Section \[sec:ch4proofs\], we have $$\Delta(\mathscr{N}_m,{\ensuremath {\mathscr{W}}}_m^\#)\leq 2\sqrt{T_n}\sup_{f\in {\ensuremath {\mathscr{F}}}} \big(A_m(f)+B_m(f)\big).$$ As a preliminary remark observe that ${\ensuremath {\mathscr{W}}}_m^\#$ is equivalent to the model that observes a trajectory from: $$d\bar y_t=\sqrt{f(t)}g(t)dt+\frac{\sqrt{g(t)}}{2\sqrt{T_n}}dW_t,\quad t\in I\setminus [0,\varepsilon_m].$$ Let us denote by $\bar Y_j$ the increments of the process $(\bar y_t)$ over the intervals $J_j$, $j=2,\dots,m$, i.e. $$\bar Y_j:=\bar y_{v_j}-\bar y_{v_{j-1}}\sim{\ensuremath {\mathscr{Nn}}}\bigg(\int_{J_j}\sqrt{f(y)}\nu_0(dy),\frac{\nu_0(J_j)}{4T_n}\bigg)$$ and denote by $\bar{\mathscr{N}}_m$ the statistical model associated with the distributions of these increments. As an intermediate result, we will prove that $$\label{eq:ch4normali} \Delta(\mathscr{N}_m,\bar{\mathscr{N}}_m)\leq 2\sqrt{T_n} \sup_{f\in {\ensuremath {\mathscr{F}}}} B_m(f), \textnormal{ for all } m.$$ To that end, remark that the experiment $\bar{\mathscr{N}}_m$ is equivalent to observing $m-1$ independent Gaussian random variables with means $\frac{2\sqrt{T_n}}{\sqrt{\nu_0(J_j)}}\int_{J_j}\sqrt{f(y)}\nu_0(dy)$, $j=2,\dots,m$, and variances identically $1$; call this last experiment $\mathscr{N}^{\#}_m$.
Hence, using also Property \[ch4delta0\], Facts \[ch4h\] and \[fact:ch4gaussiane\] we get: $$\begin{aligned} \Delta(\mathscr{N}_m, \bar{\mathscr{N}}_m)\leq\Delta(\mathscr{N}_m, \mathscr{N}^{\#}_m)&\leq \sqrt{\sum_{j=2}^m\bigg(\frac{2\sqrt{T_n}}{\sqrt{\nu_0(J_j)}}\int_{J_j}\sqrt{f(y)}\nu_0(dy)-2\sqrt{T_n\nu(J_j)}\bigg)^2}.\end{aligned}$$ Since it is clear that $\delta({\ensuremath {\mathscr{W}}}_m^\#,\bar{\mathscr{N}}_m)=0$, in order to bound $\Delta(\mathscr{N}_m,{\ensuremath {\mathscr{W}}}_m^\#)$ it is enough to bound $\delta(\bar{\mathscr{N}}_m,{\ensuremath {\mathscr{W}}}_m^\#)$. Using similar ideas as in [@cmultinomial] Section 8.2, we define a new stochastic process as: $$Y_t^*=\sum_{j=2}^m\bar Y_j\int_{\varepsilon_m}^t V_j(y)\nu_0(dy)+\frac{1}{2\sqrt{T_n}}\sum_{j=2}^m\sqrt{\nu_0(J_j)}B_j(t),\quad t\in I\setminus [0,\varepsilon_m],$$ where the $(B_j(t))$ are independent centered Gaussian processes independent of $(W_t)$ and with variances $$\textnormal{Var}(B_j(t))=\int_{\varepsilon_m}^tV_j(y)\nu_0(dy)-\bigg(\int_{\varepsilon_m}^tV_j(y)\nu_0(dy)\bigg)^2.$$ These processes can be constructed from a standard Brownian bridge $\{B(s), s\in[0,1]\}$, independent of $(W_t)$, via $$B_i(t)=B\bigg(\int_{\varepsilon_m}^t V_i(y)\nu_0(dy)\bigg).$$ By construction, $(Y_t^*)$ is a Gaussian process with mean and variance given by, respectively: $$\begin{aligned} {\ensuremath {\mathbb{E}}}[Y_t^*]&=\sum_{j=2}^m{\ensuremath {\mathbb{E}}}[\bar Y_j]\int_{\varepsilon_m}^t V_j(y)\nu_0(dy)=\sum_{j=2}^m\bigg(\int_{J_j}\sqrt{f(y)}\nu_0(dy)\bigg)\int_{\varepsilon_m}^t V_j(y)\nu_0(dy),\\ \textnormal{Var}[Y_t^*]&=\sum_{j=2}^m\textnormal{Var}[\bar Y_j]\bigg(\int_{\varepsilon_m}^t V_j(y)\nu_0(dy)\bigg)^2+\frac{1}{4T_n}\sum_{j=2}^m \nu_0(J_j)\textnormal{Var}(B_j(t))\\ &= \frac{1}{4T_n}\int_{\varepsilon_m}^t \sum_{j=2}^m \nu_0(J_j) V_j(y)\nu_0(dy)= \frac{1}{4T_n}\int_{\varepsilon_m}^t \nu_0(dy)=\frac{\nu_0([\varepsilon_m,t])}{4T_n}.\end{aligned}$$ One can compute in the same way the covariance 
of $(Y_t^*)$ finding that $$\textnormal{Cov}(Y_s^*,Y_t^*)=\frac{\nu_0([\varepsilon_m,s])}{4 T_n}, \ \forall s\leq t.$$ We can then deduce that $$Y^*_t=\int_{\varepsilon_m}^t \widehat{\sqrt {f}}_m(y)\nu_0(dy)+\int_{\varepsilon_m}^t\frac{\sqrt{g(s)}}{2\sqrt{T_n}}dW^*_s,\quad t\in I\setminus [0,\varepsilon_m],$$ where $(W_t^*)$ is a standard Brownian motion and $$\widehat{\sqrt {f}}_m(x):=\sum_{j=2}^m\bigg(\int_{J_j}\sqrt{f(y)}\nu_0(dy)\bigg)V_j(x).$$ Applying Fact \[fact:ch4processigaussiani\], we get that the total variation distance between the process $(Y_t^*)_{t\in I\setminus [0,\varepsilon_m]}$ constructed from the random variables $\bar Y_j$, $j=2,\dots,m$ and the Gaussian process $(\bar y_t)_{t\in I\setminus [0,\varepsilon_m]}$ is bounded by $$\sqrt{4 T_n\int_{I\setminus [0,\varepsilon_m]}\big(\widehat{\sqrt {f}}_m-\sqrt{f(y)}\big)^2\nu_0(dy)},$$ which gives the term in $A_m(f)$. \[lemma:ch4limitewn\] In accordance with the notation of Lemma \[lemma:ch4wn\], we have: $$\label{eq:ch4wn} \Delta({\ensuremath {\mathscr{W}}}_m^\#,{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\bigg(\sup_{f\in{\ensuremath {\mathscr{F}}}}\sqrt{T_n\int_0^{\varepsilon_m}\big(\sqrt{f(t)}-1\big)^2\nu_0(dt)}\bigg).$$ Clearly $\delta({\ensuremath {\mathscr{W}}}_n^{\nu_0},{\ensuremath {\mathscr{W}}}_m^\#)=0$. To show that $\delta({\ensuremath {\mathscr{W}}}_m^\#,{\ensuremath {\mathscr{W}}}_n^{\nu_0})\to 0$, let us consider a Markov kernel $K^\#$ from $C(I\setminus [0,\varepsilon_m])$ to $C(I)$ defined as follows: Introduce a Gaussian process, $(B_t^m)_{t\in[0,\varepsilon_m]}$ with mean equal to $t$ and covariance $$\textnormal{Cov}(B_s^m,B_t^m)=\int_0^{\varepsilon_m}\frac{1}{4 T_n g(s)}{\ensuremath {\mathbb{I}}}_{[0,s]\cap [0,t]}(z)dz.$$ In particular, $$\textnormal{Var}(B_t^m)=\int_0^t\frac{1}{4 T_n g(s)}ds.$$ Consider it as a process on the whole of $I$ by defining $B_t^m=B_{\varepsilon_m}^m$ $\forall t>\varepsilon_m$. 
Let $\omega_t$ be a trajectory in $C(I\setminus [0,\varepsilon_m])$, which we again extend to a trajectory on the whole of $I$ by keeping it constant on $[0,\varepsilon_m]$. Then, we define $K^\#$ by sending the trajectory $\omega_t$ to the trajectory $\omega_t + B_t^m$. If we define $\mathbb{\tilde W}_n$ as the law induced on $C(I)$ by $$d\tilde{y}_t = h(t) dt + \frac{dW_t}{2\sqrt{T_n g(t)}}, \quad t \in I,\quad h(t) = \begin{cases} 1 & t \in [0, \varepsilon_m]\\ \sqrt{f(t)} & t \in I\setminus [0,\varepsilon_m], \end{cases}$$ then $K^\# \mathbb{W}_n^f|_{I\setminus [0,\varepsilon_m]} = \mathbb{\tilde W}_n$, where $\mathbb{W}_n^f$ is defined as in . By means of Fact \[fact:ch4processigaussiani\] we deduce . The proof of the theorem follows by combining the previous lemmas: - Step 1: Let us denote by $\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0}$ the statistical model associated with the family of probabilities $(P_{T_n}^{(\gamma^{\hat\nu_m-\nu_0},0,\hat\nu_m)}:\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}})$. Thanks to Property \[ch4delta0\], Fact \[ch4h\] and Theorem \[teo:ch4bound\] we have that $$\Delta({\ensuremath {\mathscr{P}}}_n^{\nu_0},\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0})\leq \sqrt{\frac{T_n}{2}}\sup_{f\in {\ensuremath {\mathscr{F}}}}H(f,\hat f_m).$$ - Step 2: On the one hand, thanks to Lemma \[lemma:ch4poisson\], one has that the statistical model associated with the family of probabilities $(P_{T_n}^{(\gamma^{\bar \nu_m-\nu_0},0,\bar\nu_m)}:\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}})$ is equivalent to $\mathscr{L}_m$. By means of Lemma \[lemma:ch4poissonhatf\] we can bound $\Delta(\mathscr{L}_m,\hat{\mathscr{L}}_m)$. On the other hand, it is easy to see that $\delta(\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0}, \hat{\mathscr{L}}_m)=0$.
Indeed, it is enough to consider the statistic $$S: x \mapsto \bigg(\sum_{r\leq T_n}{\ensuremath {\mathbb{I}}}_{J_2}(\Delta x_r),\dots,\sum_{r\leq T_n}{\ensuremath {\mathbb{I}}}_{J_m}(\Delta x_r)\bigg)$$ since the law of the random variable $\sum_{r\leq T_n}{\ensuremath {\mathbb{I}}}_{J_j}(\Delta x_r)$ under $P_{T_n}^{(\gamma^{\hat\nu_m-\nu_0},0,\hat\nu_m)}$ is Poisson of parameter $T_n\int_{J_j}\hat f_m(y)\nu_0(dy)$ for all $j=2,\dots,m$. Finally, Lemmas \[lemma:ch4poisson\] and \[lemma:ch4kernel\] allow us to conclude that $\delta(\mathscr{L}_m,\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0})=0$. Collecting all the pieces together, we get $$\Delta(\hat{\ensuremath {\mathscr{P}}}_{n,m}^{\nu_0},\mathscr{L}_m)\leq \sup_{f\in {\ensuremath {\mathscr{F}}}}\sqrt{\frac{T_n}{\kappa}\int_{I\setminus[0,\varepsilon_m]}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy)}.$$ - Step 3: Applying Theorem \[ch4teomisto\] and Fact \[ch4hp\] we can pass from the Poisson approximation given by $\mathscr{L}_m$ to a Gaussian one, obtaining $$\Delta(\mathscr{L}_m,\mathscr{N}_m)\leq C\sup_{f\in {\ensuremath {\mathscr{F}}}}\sqrt{\sum_{j=2}^m\frac{2}{T_n\nu(J_j)}}\leq C\sqrt{\sum_{j=2}^m\frac{2}{\kappa T_n\nu_0(J_j)}}\leq C\sqrt{\frac{2(m-1)}{\kappa T_n\mu_m}},$$ where we used the lower bound $\nu(J_j)\geq \kappa\nu_0(J_j)$. - Step 4: Finally, Lemmas \[lemma:ch4wn\] and \[lemma:ch4limitewn\] allow us to conclude that: $$\begin{aligned} \Delta({\ensuremath {\mathscr{P}}}_n^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})&=O\bigg(\sqrt{T_n}\sup_{f\in {\ensuremath {\mathscr{F}}}}\big(A_m(f)+B_m(f)+C_m\big)\bigg)\\ & \quad + O\bigg(\sqrt{T_n}\sup_{f\in {\ensuremath {\mathscr{F}}}}\sqrt{\int_{I\setminus{[0,\varepsilon_m]}}\big(f(y)-\hat f_m(y)\big)^2\nu_0(dy)}+\sqrt{\frac{m}{T_n\mu_m}}\bigg).\end{aligned}$$ Proof of Theorem \[ch4teo2\] ---------------------------- Again, before stating some technical lemmas, let us highlight the main ideas of the proof.
We recall that the goal is to prove that estimating $f=\frac{d\nu}{d\nu_0}$ from the discrete observations $(X_{t_i})_{i=0}^n$ of a Lévy process without Gaussian component and having Lévy measure $\nu$ is asymptotically equivalent to estimating $f$ from the Gaussian white noise model $$dy_t=\sqrt{f(t)}dt+\frac{1}{2\sqrt{T_n g(t)}}dW_t,\quad g=\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}},\quad t\in I.$$ Reading ${\ensuremath {\mathscr{P}}}_1 \overset{\Delta} \Longleftrightarrow {\ensuremath {\mathscr{P}}}_2$ as ${\ensuremath {\mathscr{P}}}_1$ is asymptotically equivalent to ${\ensuremath {\mathscr{P}}}_2$, we have: - Step 1. Clearly $(X_{t_i})_{i=0}^n \overset{\Delta} \Longleftrightarrow (X_{t_i}-X_{t_{i-1}})_{i=1}^n$. Moreover, $(X_{t_i}-X_{t_{i-1}})_i\overset{\Delta} \Longleftrightarrow (\epsilon_iY_i)_i$ where $(\epsilon_i)$ are i.i.d. Bernoulli r.v. with parameter $\alpha=\iota_m \Delta_n e^{-\iota_m\Delta_n}$, $\iota_m:=\int_{I\setminus [0,\varepsilon_m]} f(y)\nu_0(dy)$, and $(Y_i)_i$ are i.i.d. r.v. independent of $(\epsilon_i)_{i=1}^n$ and of density $\frac{ f}{\iota_m}$ with respect to ${\nu_0}_{|_{I\setminus [0,\varepsilon_m]}}$; - Step 2. $(\epsilon_iY_i)_i \overset{\Delta} \Longleftrightarrow \mathcal M(n;(\gamma_j)_{j=1}^m)$, where $\mathcal M(n;(\gamma_j)_{j=1}^m)$ is a multinomial distribution with $\gamma_1=1-\alpha$ and $\gamma_i:=\frac{\alpha}{\iota_m}\nu(J_i)$, $i=2,\dots,m$; - Step 3. Gaussian approximation: $\mathcal M(n;(\gamma_1,\dots,\gamma_m)) \overset{\Delta} \Longleftrightarrow \bigotimes_{j=2}^m {\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)$; - Step 4. $\bigotimes_{j=2}^m {\ensuremath {\mathscr{Nn}}}(2\sqrt{T_n\nu(J_j)},1)\overset{\Delta} \Longleftrightarrow (y_t)_{t\in I}$. \[lemma:ch4discreto\] Let $\nu_i$, $i=1,2$, be Lévy measures such that $\nu_1\ll\nu_2$ and $b_1-b_2=\int_{|y|\leq 1}y(\nu_1-\nu_2)(dy)<\infty$.
Then, for all $0<t<\infty$, we have: $$\Big\|Q_t^{(b_1,0,\nu_1)}-Q_t^{(b_2,0,\nu_2)}\Big\|_{TV}\leq \sqrt{\frac{t}{2}} H(\nu_1,\nu_2).$$ For all given $t$, let $K_t$ be the Markov kernel defined as $K_t(\omega,A):={\ensuremath {\mathbb{I}}}_A(\omega_t)$, $\forall \ A\in{\ensuremath {\mathscr{B}}}({\ensuremath {\mathbb{R}}})$, $\forall \ \omega\in D$. Then we have: $$\begin{aligned} \big\|Q_t^{(b_1,0,\nu_1)}-Q_t^{(b_2,0,\nu_2)}\big\|_{TV}&=\big\|K_tP_t^{(b_1,0,\nu_1)}-K_tP_t^{(b_2,0,\nu_2)}\big\|_{TV}\\ &\leq \big\|P_t^{(b_1,0,\nu_1)}-P_t^{(b_2,0,\nu_2)}\big\|_{TV}\\ &\leq \sqrt{\frac{t}{2}} H(\nu_1,\nu_2), \end{aligned}$$ where we have used that Markov kernels reduce the total variation distance and Theorem \[teo:ch4bound\]. \[lemma:ch4bernoulli\] Let $(P_i)_{i=1}^n$, $(Y_i)_{i=1}^n$ and $(\epsilon_i)_{i=1}^n$ be samples of, respectively, Poisson random variables ${\ensuremath {\mathscr{P}}}(\lambda_i)$, i.i.d. random variables with a common distribution, and Bernoulli random variables of parameters $\lambda_i e^{-\lambda_i}$, which are all independent. Let us denote by $Q_{(Y_i,P_i)}$ (resp. $Q_{(Y_i,\epsilon_i)}$) the law of $\sum_{j=1}^{P_i} Y_j$ (resp. $\epsilon_i Y_i$). Then: $$\label{eq:ch4lambda} \Big\|\bigotimes_{i=1}^n Q_{(Y_i,P_i)}-\bigotimes_{i=1}^n Q_{(Y_i,\epsilon_i)}\Big\|_{TV}\leq 2\sqrt{\sum_{i=1}^n\lambda_i^2}.$$ The proof of this lemma can be found in [@esterESAIM], Section 2.1. \[lemma:ch4troncatura\] Let $f_m^{\textnormal{tr}}$ be the truncated function defined as follows: $$f_m^{\textnormal{tr}}(x)=\begin{cases} 1 &\mbox{ if } x\in[0,\varepsilon_m]\\ f(x) &\mbox{ otherwise} \end{cases}$$ and let $\nu_m^{\textnormal{tr}}$ (resp. $\nu_m^{\textnormal{res}}$) be the Lévy measure having $f_m^{\textnormal{tr}}$ (resp. ${f|_{I\setminus [0,\varepsilon_m]}}$) as a density with respect to $\nu_0$.
Denote by ${\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{tr},\nu_0}$ the statistical model associated with the family of probabilities $\Big(\bigotimes_{i=1}^nQ_{t_i-t_{i-1}}^{(\gamma^{\nu_m^{\textnormal{tr}}-\nu_0},0,\nu_m^{\textnormal{tr}})}:\frac{d\nu_m^{\textnormal{tr}}}{d\nu_0}\in{\ensuremath {\mathscr{F}}}\Big)$ and by ${\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0}$ the model associated with the family of probabilities $\Big(\bigotimes_{i=1}^nQ_{t_i-t_{i-1}}^{(\gamma^{\nu_m^{\textnormal{res}}-\nu_0},0,\nu_m^{\textnormal{res}})}:\frac{d\nu_m^{\textnormal{res}}}{d\nu_0}\in{\ensuremath {\mathscr{F}}}\Big)$. Then: $$\Delta({\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{tr},\nu_0},{\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0})=0.$$ Let us start by proving that $\delta({\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{tr},\nu_0},{\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0})=0.$ For that, let us consider two independent Lévy processes, $X^{\textnormal{tr}}$ and $X^0$, with Lévy triplets given by $\big(\gamma^{\nu_m^{\textnormal{tr}}-\nu_0},0,\nu_m^{\textnormal{tr}}\big)$ and $\big(0,0,\nu_0|_{[0,\varepsilon_m]}\big)$, respectively. Then it is clear (using the *Lévy-Khintchine formula*) that the random variable $X_t^{\textnormal{tr}}- X_t^0$ is a randomization of $X_t^{\textnormal{tr}}$ (since the law of $X_t^0$ does not depend on $\nu$) having law $Q_t^{(\gamma^{\nu_m^{\textnormal{res}}-\nu_0},0,\nu_m^{\textnormal{res}})}$, for all $t\geq 0$.
Similarly, one can prove that $\delta({\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0},{\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{tr},\nu_0})=0.$ As a preliminary remark, observe that the model ${\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ is equivalent to the one that observes the increments of $\big((x_t),P_{T_n}^{(\gamma^{\nu-\nu_0},0,\nu)}\big)$, that is, the model $\tilde{\ensuremath {\mathscr{Q}}}_n^{\nu_0}$ associated with the family of probabilities $\Big(\bigotimes_{i=1}^nQ_{t_i-t_{i-1}}^{(\gamma^{\nu-\nu_0},0,\nu)}:\frac{d\nu}{d\nu_0}\in{\ensuremath {\mathscr{F}}}\Big)$. - Step 1: Facts \[ch4h\]–\[ch4hp\] and Lemma \[lemma:ch4discreto\] allow us to write $$\begin{aligned} &\Big\|\bigotimes_{i=1}^nQ_{\Delta_n}^{(\gamma^{\nu-\nu_0},0,\nu)}-\bigotimes_{i=1}^nQ_{\Delta_n}^{(\gamma^{\nu_m^{\textnormal{tr}}-\nu_0},0, \nu_m^{\textnormal{tr}})}\Big\|_{TV}\leq \sqrt{n\sqrt{\frac{\Delta_n}{2}}H(\nu,\nu_m^{\textnormal{tr}})}\\&=\sqrt{n\sqrt{\frac{\Delta_n}{2}}\sqrt{\int_0^{\varepsilon_m}\big(\sqrt{f(y)}-1\big)^2\nu_0(dy)}}.\end{aligned}$$ Using this bound together with Lemma \[lemma:ch4troncatura\] and the notation therein, we get $\Delta({\ensuremath {\mathscr{Q}}}_n^{\nu_0}, {\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0})\leq \sqrt{n\sqrt{\frac{\Delta_n}{2}}\sup_{f\in {\ensuremath {\mathscr{F}}}}H(f, f_m^{\textnormal{tr}})}$. Observe that $\nu_m^{\textnormal{res}}$ is a finite Lévy measure, hence $\Big((x_t),P_{T_n}^{(\gamma^{\nu_m^{\textnormal{res}}},0,\nu_m^{\textnormal{res}})}\Big)$ is a compound Poisson process with intensity equal to $\iota_m:=\int_{I\setminus [0,\varepsilon_m]} f(y)\nu_0(dy)$ and jump size density $\frac{ f(x)g(x)}{\iota_m}$, for all $x\in I\setminus [0,\varepsilon_m]$ (recall that we are assuming that $\nu_0$ has a density $g$ with respect to Lebesgue).
In particular, this means that $Q_{\Delta_n}^{(\gamma^{\nu_m^{\textnormal{res}}},0,\nu_m^{\textnormal{res}})}$ can be seen as the law of the random variable $\sum_{j=1}^{P_i}Y_j$ where $P_i$ is a Poisson variable of mean $\iota_m \Delta_n$, independent from $(Y_i)_{i\geq 0}$, a sequence of i.i.d. random variables with density $\frac{ fg}{\iota_m}{\ensuremath {\mathbb{I}}}_{I\setminus[0,\varepsilon_m]}$ with respect to Lebesgue. Remark also that $\iota_m$ is confined between $\kappa \nu_0\big(I\setminus [0,\varepsilon_m]\big)$ and $M\nu_0\big(I\setminus [0,\varepsilon_m] \big)$. Let $(\epsilon_i)_{i\geq 0}$ be a sequence of i.i.d. Bernoulli variables, independent of $(Y_i)_{i\geq 0}$, with mean $\iota_m \Delta_n e^{-\iota_m\Delta_n}$. For $i=1,\dots,n$, denote by $Q_i^{\epsilon,f}$ the law of the variable $\epsilon_iY_i$ and by ${\ensuremath {\mathscr{Q}}}_n^{\epsilon}$ the statistical model associated with the observations of the vector $(\epsilon_1Y_1,\dots,\epsilon_nY_n)$, i.e. $${\ensuremath {\mathscr{Q}}}_n^{\epsilon}=\bigg(I^n,{\ensuremath {\mathscr{B}}}(I^n),\bigg\{\bigotimes_{i=1}^n Q_i^{\epsilon,f}:f\in{\ensuremath {\mathscr{F}}}\bigg\}\bigg).$$ Furthermore, denote by $\tilde Q_i^f$ the law of $\sum_{j=1}^{P_i}Y_j$. Then an application of Lemma \[lemma:ch4bernoulli\] yields: $$\begin{aligned} \Big\|\bigotimes_{i=1}^n\tilde Q_i^f&-\bigotimes_{i=1}^nQ_i^{\epsilon,f}\Big\|_{TV} \leq 2\iota_m\sqrt{n\Delta_n^2}\leq 2M\nu_0\big(I\setminus [0,\varepsilon_m]\big)\sqrt{n\Delta_n^2}.\end{aligned}$$ Hence, we get: $$\label{eq:ch4bernoulli} \Delta({\ensuremath {\mathscr{Q}}}_{n}^{\textnormal{res},\nu_0},{\ensuremath {\mathscr{Q}}}_n^{\epsilon})=O\bigg(\nu_0\big(I\setminus [0,\varepsilon_m]\big)\sqrt{n\Delta_n^2}\bigg).$$ Here the O depends only on $M$. 
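In the degenerate case $Y\equiv 1$ and $n=1$, the total variation distance controlled by Lemma \[lemma:ch4bernoulli\] can be computed in closed form, which gives a quick numerical check of the bound (a sanity check only; the lemma itself is quoted from [@esterESAIM]):

```python
import math

# Y ≡ 1: the compound sum is Poisson(lam) and eps * Y is Bernoulli(lam * e^{-lam}).
# Only k = 0 differs in mass (the k = 1 masses coincide), plus the Poisson tail
# on k >= 2, so TV = 1 - (1 + lam) * exp(-lam), which is <= lam^2 / 2 <= 2 * lam.
def tv_poisson_vs_bernoulli(lam):
    p0, p1 = math.exp(-lam), lam * math.exp(-lam)   # Poisson masses at 0 and 1
    q0, q1 = 1 - p1, p1                              # Bernoulli(lam e^{-lam}) masses
    tail = 1 - p0 - p1                               # Poisson mass on k >= 2
    return 0.5 * (abs(p0 - q0) + abs(p1 - q1) + tail)

vals = [(lam, tv_poisson_vs_bernoulli(lam)) for lam in (0.01, 0.1, 0.5)]
```

The closed form shows the distance is of order $\lambda^2$, comfortably inside the bound $2\lambda$ of the lemma.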
- Step 2: Let us introduce the following random variables: $$Z_1=\sum_{j=1}^n{\ensuremath {\mathbb{I}}}_{\{0\}}(\epsilon_jY_j); \quad Z_i=\sum_{j=1}^n{\ensuremath {\mathbb{I}}}_{J_i}(\epsilon_jY_j),\ i=2,\dots,m.$$ Observe that the law of the vector $(Z_1,\dots,Z_m)$ is multinomial $\mathcal M(n;\gamma_1,\dots,\gamma_m)$ where $$\gamma_1=1-\iota_m \Delta_n e^{-\iota_m \Delta_n},\quad \gamma_i=\Delta_n e^{-\iota_m \Delta_n}\nu(J_i),\quad i=2,\dots,m.$$ Let us denote by $\mathcal M_n$ the statistical model associated with the observation of $(Z_1,\dots,Z_m)$. Clearly $\delta({\ensuremath {\mathscr{Q}}}_n^{\epsilon},\mathcal M_n)=0$. Indeed, $\mathcal M_n$ is the image experiment by the random variable $S:I^n\to\{0,1,\dots,n\}^{m}$ defined as $$S(x_1,\dots,x_n)=\Big(\#\{j: x_j=0\}; \#\big\{j: x_j\in J_2\big\};\dots;\#\big\{j: x_j\in J_m\big\}\Big),$$ where $\# A$ denotes the cardinality of the set $A$. We shall now prove that $\delta(\mathcal M_n,{\ensuremath {\mathscr{Q}}}_n^{\epsilon}) \leq \sup_{f\in{\ensuremath {\mathscr{F}}}}\sqrt{n\Delta_n H^2(f,\hat f_m)}$. We start by defining a discrete random variable $X^*$ concentrated at the points $0$, $x_i^*$, $i=2,\dots,m$: $${\ensuremath {\mathbb{P}}}(X^*=y)=\begin{cases} \gamma_i &\mbox{ if } y=x_i^*,\quad i=1,\dots,m,\\ 0 &\mbox{ otherwise}, \end{cases}$$ with the convention $x_1^*=0$. It is easy to see that $\mathcal M_n$ is equivalent to the statistical model associated with $n$ independent copies of $X^*$.
Let us introduce the Markov kernel $$K(x_i^*, A) = \begin{cases} {\ensuremath {\mathbb{I}}}_A(0) & \text{if } i = 1,\\ \int_A V_i(x) \nu_0(dx) & \text{otherwise.} \end{cases}$$ Denote by $P^*$ the law of the random variable $X^*$ and by $Q_i^{\epsilon,\hat f}$ the law of a random variable $\epsilon_i \hat Y_i$ where $\epsilon_i$ is Bernoulli independent of $\hat Y_i$, with mean $\iota_m\Delta_n e^{-\iota_m\Delta_n}$, and $\hat Y_i$ has a density $\frac{\hat f_m g}{\iota_m}{\ensuremath {\mathbb{I}}}_{I\setminus[0,\varepsilon_m]}$ with respect to Lebesgue. The same computations as in Lemma \[lemma:ch4kernel\] prove that $KP^*=Q_i^{\epsilon,\hat f}$. Hence, thanks to Remark \[ch4independentkernels\], we get the equivalence between $\mathcal M_n$ and the statistical model associated with the observations of $n$ independent copies of $\epsilon_i \hat Y_i$. In order to bound $\delta(\mathcal M_n,{\ensuremath {\mathscr{Q}}}_n^{\epsilon})$ it is enough to bound the total variation distance between the probabilities $\bigotimes_{i=1}^n Q_i^{\epsilon,f}$ and $\bigotimes_{i=1}^n Q_i^{\epsilon,\hat f}$. Alternatively, thanks to Facts \[ch4h\] and \[ch4hp\], it suffices to bound the Hellinger distance between each pair $Q_i^{\epsilon,f}$ and $Q_i^{\epsilon,\hat f}$: $$\begin{aligned} \bigg\|\bigotimes_{i=1}^nQ_i^{\epsilon,f} -\bigotimes_{i=1}^nQ_i^{\epsilon,\hat f}\bigg\|_{TV} &\leq \sqrt{\sum_{i=1}^n H^2\big(Q_i^{\epsilon,f}, Q_i^{\epsilon,\hat f}\big)}\\ &= \sqrt{\sum_{i=1}^n \frac{1-\gamma_1}{\iota_m} H^2(f, \hat f_m)} \leq \sqrt{n\Delta_n H^2(f, \hat f_m)}.\end{aligned}$$ It follows that $$\delta(\mathcal M_n,{\ensuremath {\mathscr{Q}}}_n^{\epsilon})\leq \sqrt{n\Delta_n} \sup_{f \in {\ensuremath {\mathscr{F}}}}H(f,\hat f_m).$$ - Step 3: Let us denote by $\mathcal N_m^*$ the statistical model associated with the observation of $m$ independent Gaussian variables ${\ensuremath {\mathscr{Nn}}}(n\gamma_i,n\gamma_i)$, $i=1,\dots,m$.
Very similar computations to those in [@cmultinomial] yield $$\Delta(\mathcal M_n,\mathcal N_m^*)=O\Big(\frac{m \ln m}{\sqrt{n}}\Big).$$ In order to prove the asymptotic equivalence between $\mathcal M_n$ and $\mathcal N_m$ defined as in we need to introduce some auxiliary statistical models. Let us denote by $\mathcal A_m$ the experiment obtained from $\mathcal{N}_m^*$ by disregarding the first component and by $\mathcal V_m$ the statistical model associated with the multivariate normal distribution with the same means and covariances as a multinomial distribution $\mathcal M(n,\gamma_1,\dots,\gamma_m)$. Furthermore, let us denote by $\mathcal N_m^{\#}$ the experiment associated with the observation of $m-1$ independent Gaussian variables ${\ensuremath {\mathscr{Nn}}}(\sqrt{n\gamma_i},\frac{1}{4})$, $i=2,\dots,m$. Clearly $\Delta(\mathcal V_m,\mathcal A_m)=0$ for all $m$: In one direction one only has to consider the projection disregarding the first component; in the other direction, it is enough to remark that $\mathcal V_m$ is the image experiment of $\mathcal A_m$ by the random variable $S:(x_2,\dots,x_m)\to (n(1-\frac{\sum_{i=2}^m x_i}{n}),x_2,\dots,x_m)$. Moreover, using two results contained in [@cmultinomial], see Sections 7.1 and 7.2, one has that $$\Delta(\mathcal A_m,\mathcal N_m^*)=O\bigg(\sqrt{\frac{m}{n}}\bigg),\quad \Delta(\mathcal A_m,\mathcal N_m^{\#})=O\bigg(\frac{m}{\sqrt n}\bigg).$$ Finally, using Facts \[ch4h\] and \[fact:ch4gaussiane\] we can write $$\begin{aligned} \Delta(\mathcal N_m^{\#},\mathcal N_m)&\leq \sqrt{2\sum_{i=2}^m \Big(\sqrt{T_n\nu(J_i)}-\sqrt{T_n\nu(J_i)\exp(-\iota_m\Delta_n)}\Big)^2}\\ &\leq\sqrt{2T_n\Delta_n^2\iota_m^3}\leq \sqrt{2n\Delta_n^3M^3\big(\nu_0\big(I\setminus [0,\varepsilon_m]\big)\big)^3}. \end{aligned}$$ To sum up, $\Delta(\mathcal M_n,\mathcal N_m)=O\Big(\frac{m \ln m}{\sqrt{n}}+\sqrt{n\Delta_n^3\big(\nu_0\big(I\setminus [0,\varepsilon_m]\big)\big)^3}\Big)$, with the $O$ depending only on $\kappa$ and $M$. 
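The elementary estimate behind the last display, $(\sqrt a-\sqrt{a e^{-x}})^2=a(1-e^{-x/2})^2\le a x^2/4$ with $x=\iota_m\Delta_n$, can be checked numerically. All values of $T_n$, $\Delta_n$ and $\nu(J_i)$ below are made up for illustration:

```python
import math

# Check: 2 * sum_i (sqrt(T_n nu(J_i)) - sqrt(T_n nu(J_i) e^{-x}))^2
#        <= 2 * T_n * Delta_n^2 * iota^3,   with x = iota * Delta_n,
# since sum_i nu(J_i) <= iota and (1 - e^{-x/2})^2 <= x^2 / 4.
T_n, Delta_n = 1000.0, 0.001
nu_J = [0.4, 0.25, 0.1, 0.05]          # hypothetical nu(J_i)
iota = sum(nu_J)
x = iota * Delta_n

lhs = 2 * sum((math.sqrt(T_n * v) - math.sqrt(T_n * v * math.exp(-x))) ** 2
              for v in nu_J)
rhs = 2 * T_n * Delta_n ** 2 * iota ** 3
```

In fact the elementary inequality gives the stronger bound $T_n\Delta_n^2\iota_m^3/2$, so the constant in the display is not tight.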
- Step 4: An application of Lemmas \[lemma:ch4wn\] and \[lemma:ch4limitewn\] yields $$\Delta(\mathcal N_m,{\ensuremath {\mathscr{W}}}_n^{\nu_0}) \leq 2\sqrt{T_n} \sup_{f\in{\ensuremath {\mathscr{F}}}} \big(A_m(f)+B_m(f)+C_m(f)\big).$$ Proofs of the examples ====================== The purpose of this section is to give detailed proofs of Examples \[ex:ch4esempi\] and \[ex:ch4CPP\]–\[ex3\]. As in Section \[sec:ch4proofs\] we suppose $I\subseteq {\ensuremath {\mathbb{R}}}_+$. We start by giving some bounds for the quantities $A_m(f)$, $B_m(f)$ and $L_2(f, \hat f_m)$, the $L_2$-distance between the restrictions of $f$ and $\hat f_m$ to $I\setminus[0,\varepsilon_m].$ Bounds for $A_m(f)$, $B_m(f)$, $L_2(f, \hat{f}_m)$ when $\hat f_m$ is piecewise linear. --------------------------------------------------------------------------------------- In this section we suppose $f$ to be in ${\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^I$ defined as in . We are going to assume that the $V_j$ are given by triangular/trapezoidal functions as in . In particular, in this case $\hat f_m$ is piecewise linear. \[lemma:ch4hellinger\] Let $0<\kappa < M$ be two constants and let $f_i$, $i=1,2$, be functions defined on an interval $J$ such that $\kappa \leq f_i\leq M$, $i=1,2$. Then, for any measure $\nu_0$, we have: $$\begin{aligned} \frac{1}{4 M} \int_J \big(f_1(x)-f_2(x)\big)^2 \nu_0(dx)&\leq\int_J \big(\sqrt{f_1(x)} - \sqrt{f_2(x)}\big)^2\nu_0(dx)\\ &\leq \frac{1}{4 \kappa} \int_J \big(f_1(x)-f_2(x)\big)^2\nu_0(dx). \end{aligned}$$ This simply comes from the following inequalities: $$\begin{aligned} \frac{1}{2\sqrt M} (f_1(x)-f_2(x)) &\leq \frac{f_1(x)-f_2(x)}{\sqrt{f_1(x)}+\sqrt{f_2(x)}} = \sqrt{f_1(x)} - \sqrt{f_2(x)}\\ &\leq \frac{1}{2 \sqrt{\kappa}} (f_1(x)-f_2(x)). \end{aligned}$$ Recall that $x_i^*$ is chosen so that $\int_{J_i} (x-x_i^*) \nu_0(dx) = 0$.
Consider the following Taylor expansions for $x \in J_i$: $$f(x) = f(x_i^*) + f'(x_i^*) (x-x_i^*) + R_i(x); \quad \hat{f}_m(x) = \hat{f}_m(x_i^*) + \hat{f}_m'(x_i^*) (x-x_i^*),$$ where $\hat{f}_m(x_i^*) = \frac{\nu(J_i)}{\nu_0(J_i)}$ and $\hat{f}_m'(x_i^*)$ is the left or right derivative at $x_i^*$, depending on whether $x < x_i^*$ or $x > x_i^*$ (as $\hat f_m$ is piecewise linear, no remainder term appears in its Taylor expansion). \[lemma:ch4bounds\] The following estimates hold: $$\begin{aligned} |R_i(x)| &\leq K |\xi_i - x_i^*|^\gamma |x-x_i^*|; \\ \big|f(x_i^*) - \hat{f}_m(x_i^*)\big| &\leq \|R_i\|_{L_\infty(\nu_0)} \text{ for } i = 2, \dots, m-1; \label{eqn:bounds}\\ \big|f(x)-\hat{f}_m(x)\big| &\leq \begin{cases} 2 \|R_i\|_{L_\infty(\nu_0)} + K |x_i^*-\eta_i|^\gamma |x-x_i^*| & \text{ if } x \in J_i, \ i = 3, \dots, m-1;\\ C |x-\tau_i| & \text { if } x \in J_i, \ i \in \{2, m\}. \end{cases} \end{aligned}$$ for some constant $C$ and points $\xi_i \in J_i$, $\eta_i\in J_{i-1} \cup J_i\cup J_{i+1}$, $\tau_2 \in J_2 \cup J_3$ and $\tau_m \in J_{m-1} \cup J_m$. By definition of $R_i$, we have $$|R_i(x)| = \Big| \big(f'(\xi_i) - f'(x_i^*)\big)(x-x_i^*) \Big| \leq K |\xi_i - x_i^*|^\gamma |x-x_i^*|,$$ for some point $\xi_i \in J_i$. For the second inequality, $$\begin{aligned} |f(x_i^*)-\hat{f}_m(x_i^*)| &= \frac{1}{\nu_0(J_i)} \Big| \int_{J_i} (f(x_i^*)-f(x)) \nu_0(dx)\Big|\\ &= \frac{1}{\nu_0(J_i)} \bigg|\int_{J_i} R_i(x) \nu_0(dx)\bigg| \leq \|R_i\|_{L_\infty(\nu_0)}, \end{aligned}$$ where in the second equality we have used the defining property of $x_i^*$. For the third inequality, let us start by proving that for all $2 < i < m-1$, $\hat{f}_m'(x_i^*) = f'(\chi_i)$ for some $\chi_i \in J_i\cup J_{i+1}$ (here, we are considering right derivatives; for left ones, this would be $J_{i-1} \cup J_i$).
To see this, take $x\in J_i\cap [x_i^*,x_{i+1}^*]$ and introduce the function $h(x):=f(x)-l(x)$ where $$l(x)=\frac{x-x_i^*}{x_{i+1}^*-x_i^*}\big(\hat f_m(x_{i+1}^*)-\hat f_m(x_i^*)\big)+\hat f_m(x_i^*).$$ Then, using the fact that $\int_{J_i}(x-x_i^*)\nu_0(dx)=0$ together with $\int_{J_{i+1}}(x-x_{i+1}^*)\nu_0(dx)=(x_{i+1}^*-x_i^*)\mu_m$, we get $$\int_{J_i}h(x)\nu_0(dx)=0=\int_{J_{i+1}}h(x)\nu_0(dx).$$ In particular, by means of the mean value theorem, one can conclude that there exist two points $p_i\in J_i$ and $p_{i+1}\in J_{i+1}$ such that $$h(p_i)=\frac{\int_{J_i}h(x)\nu_0(dx)}{\nu_0(J_i)}=\frac{\int_{J_{i+1}}h(x)\nu_0(dx)}{\nu_0(J_{i+1})}=h(p_{i+1}).$$ As a consequence, we can deduce that there exists $\chi_i\in[p_i,p_{i+1}]\subseteq J_i\cup J_{i+1}$ such that $h'(\chi_i)=0$, hence $f'(\chi_i)=l'(\chi_i)=\hat f_m'(x_i^*)$. When $2 < i < m-1$, the two Taylor expansions together with the fact that $\hat{f}_m'(x_i^*) = f'(\chi_i)$ for some $\chi_i \in J_i\cup J_{i+1}$, give $$\begin{aligned} |f(x) - \hat{f}_m (x)| &\leq |f(x_i^*) - \hat{f}_m(x_i^*)| + |R_i(x)| + K |x_i^* - \chi_i|^\gamma |x-x_i^*|\\ & \leq 2 \|R_i\|_{L_\infty(\nu_0)} + K |x_i^* - \chi_i|^\gamma |x-x_i^*| \end{aligned}$$ whenever $x \in J_i$ and $x > x_i^*$ (the case $x < x_i^*$ is handled similarly using the left derivative of $\hat f_m$ and $\chi_i \in J_{i-1} \cup J_i$). For the remaining cases, consider for example $i = 2$. Then $\hat{f}_m(x)$ is bounded by the minimum and the maximum of $f$ on $J_2 \cup J_3$, hence $\hat{f}_m(x) = f(\tau)$ for some $\tau \in J_2 \cup J_3$. Since $f'$ is bounded by $C = 2M +K$, one has $|f(x) - \hat{f}_m(x)| \leq C|x-\tau|$.
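To get a feeling for the size of the approximation error controlled by these estimates, the piecewise-linear construction can be tested numerically in the simplest setting $\nu_0 = {\ensuremath{\textnormal{Leb}}}([0,1])$ with uniform cells and midpoints $x_j^*$ (the construction of the finite case treated below). The helper names and the test function are ours; the sketch is purely illustrative and not part of the argument:

```python
import numpy as np

def hat_f(f, m, x):
    # midpoint-cell construction: x_j^* = (2j-1)/(2m), values = cell averages of f,
    # linear interpolation between midpoints, constant on the two edge half-cells
    xs = (2 * np.arange(1, m + 1) - 1) / (2 * m)          # the x_j^*
    fine = np.linspace(0, 1, 50 * m + 1)
    vals = f(fine)
    # cell averages m * ∫_{J_j} f, via the trapezoidal rule on each cell
    avgs = np.array([np.trapz(vals[50*j:50*(j+1)+1], fine[50*j:50*(j+1)+1]) * m
                     for j in range(m)])
    return np.interp(x, xs, avgs)   # np.interp extrapolates as a constant at the edges

f = lambda x: 2 + np.sin(3 * x)     # smooth test density, bounded away from 0
grid = np.linspace(0, 1, 20001)
errs = [np.sqrt(np.mean((f(grid) - hat_f(f, m, grid)) ** 2))
        for m in (50, 100, 200, 400)]

# the L2 error should decay roughly like m^{-3/2} (the edge half-cells limit the
# rate); we only assert a conservative super-linear decay between m=50 and m=400
assert errs[0] > errs[1] > errs[2] > errs[3]
assert errs[0] / errs[3] > 8
```

For a $C^2$ test function the interior error is of order $m^{-2}$, while the constant edge half-cells contribute an $L_2$ term of order $m^{-3/2}$, consistent with the rate obtained in the finite case below.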
\[lemma:ch4abc\] With the same notations as in Lemma \[lemma:ch4bounds\], the estimates for $A_m^2(f)$, $B_m^2(f)$ and $L_2(f, \hat{f}_m)^2$ are as follows: $$\begin{aligned} L_2(f, \hat{f}_m)^2&\leq \frac{1}{4\kappa} \bigg( \sum_{i=3}^m \int_{J_i} \Big(2 \|R_i\|_{L_\infty(\nu_0)} + K |x_i^*-\eta_i|^\gamma|x-x_i^*|\Big)^2 \nu_0(dx) \\ &\phantom{=}\ + C^2 \Big(\int_{J_2}|x-\tau_2|^2\nu_0(dx) + \int_{J_m}|x-\tau_m|^2\nu_0(dx)\Big)\bigg).\\ A_m^2(f) &= L_2\big(\sqrt{f}, \widehat{\sqrt{f}}_m\big)^2 = O\Big(L_2(f, \hat{f}_m)^2\Big)\\ B_m^2(f) &= O\bigg( \sum_{i=2}^{m} \frac{1}{\sqrt{\kappa}} \nu_0(J_i) (2 \sqrt{M} + 1)^2 \|R_i\|_{L_\infty(\nu_0)}^2\bigg). \end{aligned}$$ The $L_2$-bound is now a straightforward application of Lemmas \[lemma:ch4hellinger\] and \[lemma:ch4bounds\]. The one on $A_m(f)$ follows, since if $f \in {\ensuremath {\mathscr{F}}}_{(\gamma, K, \kappa, M)}^I$ then $\sqrt{f} \in {\ensuremath {\mathscr{F}}}_{(\gamma, \frac{K}{\sqrt{\kappa}}, \sqrt{\kappa}, \sqrt{M})}^I$. In order to bound $B_m^2(f)$ write it as: $$B_m^2(f)=\sum_{j=1}^m \nu_0(J_j)\bigg(\frac{\int_{J_j}\sqrt{f(y)}\nu_0(dy)}{\nu_0(J_j)}-\sqrt{\frac{\nu(J_j)}{\nu_0(J_j)}}\bigg)^2=:\sum_{j=1}^m \nu_0(J_j)E_j^2.$$ By the triangle inequality, let us bound $E_j$ by $F_j+G_j$ where: $$F_j=\bigg|\sqrt{\frac{\nu(J_j)}{\nu_0(J_j)}}-\sqrt{f(x_j^*)}\bigg| \quad \textnormal{ and }\quad G_j=\bigg|\sqrt{f(x_j^*)}-\frac{\int_{J_j}\sqrt{f(y)}\nu_0(dy)}{\nu_0(J_j)}\bigg|.$$ Using the same trick as in the proof of Lemma \[lemma:ch4hellinger\], we can bound: $$\begin{aligned} F_j \leq 2 \sqrt{M} \bigg|\frac{\int_{J_j} \big(f(x)-f(x_j^*)\big)\nu_0(dx)}{\nu_0(J_j)}\bigg| \leq 2 \sqrt{M} \|R_j\|_{L_\infty(\nu_0)}.
\end{aligned}$$ On the other hand, $$\begin{aligned} G_j&=\frac{1}{\nu_0(J_j)}\bigg|\int_{J_j}\big(\sqrt{f(x_j^*)}-\sqrt{f(y)}\big)\nu_0(dy)\bigg|\\ &=\frac{1}{\nu_0(J_j)}\bigg|\int_{J_j}\bigg(\frac{f'(x_j^*)}{2\sqrt{f(x_j^*)}}(y-x_j^*)+\tilde R_j(y)\bigg)\nu_0(dy)\bigg| \leq \|\tilde R_j\|_{L_\infty(\nu_0)}, \end{aligned}$$ which has the same magnitude as $\frac{1}{\kappa}\|R_j\|_{L_\infty(\nu_0)}$. Observe that when $\nu_0$ is finite, there is no need for a special definition of $\hat{f}_m$ near $0$, and all the estimates in Lemma \[lemma:ch4bounds\] hold true replacing every occurrence of $i = 2$ by $i = 1$. \[rmk:nonlinear\] The same computations as in Lemmas \[lemma:ch4bounds\] and \[lemma:ch4abc\] can be adapted to the general case where the $V_j$’s (and hence $\hat f_m$) are not piecewise linear. In the general case, the Taylor expansion of $\hat f_m$ at $x_i^*$ involves a remainder term as well, say $\hat R_i$, which then also needs to be bounded. Proofs of Examples \[ex:ch4esempi\] {#subsec:esempi} ----------------------------------- In the following, we collect the details of the proofs of Examples \[ex:ch4esempi\]. **1. The finite case:** $\nu_0\equiv {\ensuremath{\textnormal{Leb}}}([0,1])$.
Remark that in the case where $\nu_0$ is finite there are no convergence problems near zero and so we can consider the simpler approximation of $f$: $$\hat f_m(x):= \begin{cases} m\theta_1 & \textnormal{if } x\in \big[0,x_1^*\big],\\ m^2\big[\theta_{j+1}(x-x_j^*)+\theta_j(x_{j+1}^*-x)\big] & \textnormal{if } x\in (x_j^*,x_{j+1}^*] \quad j = 1,\dots,m-1,\\ m\theta_m & \textnormal{if } x\in (x_m^*,1] \end{cases}$$ where $$x_j^*=\frac{2j-1}{2m},\quad J_j=\Big(\frac{j-1}{m},\frac{j}{m}\Big],\quad \theta_j=\int_{J_j}f(x)dx, \quad j=1,\dots,m.$$ In this case we take $\varepsilon_m = 0$ and Conditions $(C2)$ and $(C2')$ coincide: $$\lim_{n\to\infty}n\Delta_n\sup_{f\in {\ensuremath {\mathscr{F}}}}\Big(A_m^2(f)+B_m^2(f)\Big) = 0.$$ Applying Lemma \[lemma:ch4abc\], we get $$\sup_{f\in {\ensuremath {\mathscr{F}}}} \Big(L_2(f,\hat f_m)+ A_m(f)+ B_m(f)\Big)= O\big(m^{-\frac{3}{2}}+m^{-1-\gamma}\big)$$ (actually, each of the three terms on the left-hand side has the same rate of convergence). **2. The finite variation case:** $\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}(x)=x^{-1}{\ensuremath {\mathbb{I}}}_{[0,1]}(x).$ To prove that the standard choice of $V_j$ described at the beginning of Examples \[ex:ch4esempi\] leads to $\displaystyle{\int_{\varepsilon_m}^1 V_j(x)\frac{dx}{x}=1}$, it is enough to prove that this integral is independent of $j$, since in general $\displaystyle{\int_{\varepsilon_m}^1 \sum_{j=2}^m V_j(x)\frac{dx}{x}=m-1}.$ To that aim, observe that, for $j=3,\dots,m-1$, $$\mu_m\int_{\varepsilon_m}^1 V_j(x)\nu_0(dx)=\int_{x_{j-1}^*}^{x_j^*}\frac{x-x_{j-1}^*}{x_j^*-x_{j-1}^*}\frac{dx}{x}+\int_{x_j^*}^{x_{j+1}^*}\frac{x_{j+1}^*-x}{x_{j+1}^*-x_j^*}\frac{dx}{x}.$$ Let us show that the first summand does not depend on $j$.
We have $$\int_{x_{j-1}^*}^{x_j^*}\frac{dx}{x_j^*-x_{j-1}^*}=1\quad \textnormal{and}\quad -\frac{x_{j-1}^*}{x_j^*-x_{j-1}^*}\int_{x_{j-1}^*}^{x_j^*}\frac{dx}{x}=\frac{x_{j-1}^*}{x_j^*-x_{j-1}^*}\ln\Big(\frac{x_{j-1}^*}{x_j^*}\Big).$$ Since $x_j^*=\frac{v_j-v_{j-1}}{\mu_m}$ and $v_j=\varepsilon_m^{\frac{m-j}{m-1}}$, the quantities $\frac{x_j^*}{x_{j-1}^*}$ and, hence, $\frac{x_{j-1}^*}{x_j^*-x_{j-1}^*}$ do not depend on $j$. The second summand and the trapezoidal functions $V_2$ and $V_m$ are handled similarly. Thus, $\hat f_m$ can be chosen of the form $$\hat f_m(x):= \begin{cases} \quad 1 & \textnormal{if } x\in \big[0,\varepsilon_m\big],\\ \frac{\nu(J_2)}{\mu_m} & \textnormal{if } x\in \big(\varepsilon_m, x_2^*\big],\\ \frac{1}{x_{j+1}^*-x_j^*}\bigg[\frac{\nu(J_{j+1})}{\mu_m}(x-x_j^*)+\frac{\nu(J_{j})}{\mu_m}(x_{j+1}^*-x)\bigg] & \textnormal{if } x\in (x_j^*,x_{j+1}^*] \quad j = 2,\dots,m-1,\\ \frac{\nu(J_m)}{\mu_m} & \textnormal{if } x\in (x_m^*,1]. \end{cases}$$ A straightforward application of Lemmas \[lemma:ch4bounds\] and \[lemma:ch4abc\] gives $$\sqrt{\int_{\varepsilon_m}^1\Big(f(x)-\hat f_m(x)\Big)^2 \nu_0(dx)} +A_m(f)+B_m(f)=O\bigg(\bigg(\frac{\ln m}{m}\bigg)^{\gamma+1} \sqrt{\ln (\varepsilon_m^{-1})}\bigg),$$ as announced. **3. The infinite variation, non-compactly supported case:** $\frac{d\nu_0}{d{\ensuremath{\textnormal{Leb}}}}(x)=x^{-2}{\ensuremath {\mathbb{I}}}_{{\ensuremath {\mathbb{R}}}_+}(x)$. Recall that we want to prove that $$L_2(f,\hat f_m)^2+A_m^2(f)+B_m^2(f)=O\bigg(\frac{H(m)^{3+4\gamma}}{(\varepsilon_m m)^{2\gamma}}+\sup_{x\geq H(m)}\frac{f(x)^2}{H(m)}\bigg),$$ for any given sequence $H(m)$ going to infinity as $m\to\infty$. Let us start by addressing the fact that the triangular/trapezoidal choice for $V_j$ is no longer feasible.
Introduce the following notation: $V_j = {\ensuremath {\accentset{\triangle}{V}}}_j + A_j$, $j = 2, \dots, m$, where the ${\ensuremath {\accentset{\triangle}{V}}}_j$’s are triangular/trapezoidal functions similar to those in . The difference is that here, since $x_m^*$ is not defined, ${\ensuremath {\accentset{\triangle}{V}}}_{m-1}$ is a trapezoid, linear between $x_{m-2}^*$ and $x_{m-1}^*$ and constantly equal to $\frac{1}{\mu_m}$ on $[x_{m-1}^*,v_{m-1}]$, and ${\ensuremath {\accentset{\triangle}{V}}}_m$ is supported on $[v_{m-1},\infty)$, where it is constantly equal to $\frac{1}{\mu_m}$. Each $A_j$ is chosen so that: 1. It is supported on $[x_{j-1}^*, x_{j+1}^*]$ (unless $j = 2$, $j = m-1$ or $j = m$; in the first case the support is $[x_2^*, x_3^*]$, in the second one it is $[x_{m-2}^*, x_{m-1}^*]$, and $A_m \equiv 0$); 2. ${A_j}$ coincides with $-A_{j-1}$ on $[x_{j-1}^*, x_j^*]$, $j = 3, \dots, m-1$ (so that $\sum V_j \equiv \frac{1}{\mu_m}$) and its first derivative is bounded (in absolute value) by $\frac{1}{\mu_m(x_j^* - x_{j-1}^*)}$ (so that $V_j$ is non-negative and bounded by $\frac{1}{\mu_m}$); 3. $A_j$ vanishes, along with its first derivative, on $x_{j-1}^*$, $x_j^*$ and $x_{j+1}^*$. We claim that these conditions are sufficient to assure that $\hat f_m$ converges to $f$ quickly enough. First of all, by Remark \[rmk:nonlinear\], we observe that, to have a good bound on $L_2(f, \hat f_m)$, the crucial property of $\hat f_m$ is that its first right (resp. left) derivative has to be equal to $\frac{1}{\mu_m(x_{j+1}^*-x_j^*)}$ (resp. $\frac{1}{\mu_m(x_{j}^*-x_{j-1}^*)}$) and its second derivative has to be small enough (for example, so that the remainder $\hat R_j$ is as small as the remainder $R_j$ of $f$ already appearing in Lemma \[lemma:ch4bounds\]).
The (say) left derivatives in $x_j^*$ of $\hat f_m$ are given by $$\hat f_m'(x_j^*) = \big({\ensuremath {\accentset{\triangle}{V}}}_j'(x_j^*) + A_j'(x_j^*)\big) \big(\nu(J_j)-\nu(J_{j-1})\big); \quad \hat f_m''(x_j^*) = A_j''(x_j^*)\big(\nu(J_j)-\nu(J_{j-1})\big).$$ Then, in order to bound $|\hat f_m''(x_j^*)|$ it is enough to bound $|A_j''(x_j^*)|$ because: $$\big|\hat f_m''(x_j^*)\big| \leq |A_j''(x_j^*)| \Big|\int_{J_j} f(x) \frac{dx}{x^2} - \int_{J_{j-1}} f(x) \frac{dx}{x^2}\Big| \leq |A_j''(x_j^*)| \displaystyle{\sup_{x\in I}}|f'(x)|(\ell_{j}+\ell_{j-1}) \mu_m,$$ where $\ell_{j}$ is the Lebesgue measure of $J_{j}$. We are thus left to show that we can choose the $A_j$’s satisfying points 1-3, with a small enough second derivative, and such that $\int_I V_j(x) \frac{dx}{x^2} = 1$. To make computations easier, we will make the following explicit choice: $$A_j(x) = b_j (x-x_j^*)^2 (x-x_{j-1}^*)^2 \quad \forall x \in [x_{j-1}^*, x_j^*),$$ for some $b_j$ depending only on $j$ and $m$ (the definitions on $[x_j^*, x_{j+1}^*)$ are uniquely determined by the condition $A_j + A_{j+1} \equiv 0$ there). Define $j_{\max}$ as the index such that $H(m) \in J_{j_{\max}}$; it is straightforward to check that $$j_{\max} \sim m- \frac{\varepsilon_m(m-1)}{H(m)}; \quad x_{m-k}^* = \varepsilon_m(m-1) \log \Big(1+\frac{1}{k}\Big), \quad k = 1, \dots, m-2.$$ One may compute the following Taylor expansions: $$\begin{aligned} \int_{x_{m-k-1}^*}^{x_{m-k}^*} {\ensuremath {\accentset{\triangle}{V}}}_{m-k}(x) \nu_0(dx) &= \frac{1}{2} - \frac{1}{6k} + \frac{5}{24k^2} + O\Big(\frac{1}{k^3}\Big);\\ \int_{x_{m-k}^*}^{x_{m-k+1}^*} {\ensuremath {\accentset{\triangle}{V}}}_{m-k}(x) \nu_0(dx) &= \frac{1}{2} + \frac{1}{6k} + \frac{1}{24k^2} + O\Big(\frac{1}{k^3}\Big). 
\end{aligned}$$ In particular, for $m \gg 0$ and $m-k \leq j_{\max}$, so that also $k \gg 0$, all the integrals $\int_{x_{j-1}^*}^{x_{j+1}^*} {\ensuremath {\accentset{\triangle}{V}}}_j(x) \nu_0(dx)$ are bigger than 1 (it is immediate to see that the same is true for ${\ensuremath {\accentset{\triangle}{V}}}_2$, as well). From now on we will fix a $k \geq \frac{\varepsilon_m m}{H(m)}$ and let $j = m-k$. Summing together the conditions $\int_I V_i(x)\nu_0(dx)=1$ $\forall i>j$ and noticing that the function $\sum_{i = j}^m V_i$ is constantly equal to $\frac{1}{\mu_m}$ on $[x_j^*,\infty)$ we have: $$\begin{aligned} \int_{x_{j-1}^*}^{x_j^*} A_j(x) \nu_0(dx) &= m-j+1 - \frac{1}{\mu_m} \nu_0([x_j^*, \infty)) - \int_{x_{j-1}^*}^{x_j^*} {\ensuremath {\accentset{\triangle}{V}}}_j(x) \nu_0(dx)\\ &= k+1- \frac{1}{\log(1+\frac{1}{k})} - \frac{1}{2} + \frac{1}{6k} + O\Big(\frac{1}{k^2}\Big) = \frac{1}{4k} + O\Big(\frac{1}{k^2}\Big) \end{aligned}$$ Our choice of $A_j$ allows us to compute this integral explicitly: $$\int_{x_{j-1}^*}^{x_j^*} b_j (x-x_{j-1}^*)^2(x-x_j^*)^2 \frac{dx}{x^2} = b_j \big(\varepsilon_m (m-1)\big)^3 \Big(\frac{2}{3} \frac{1}{k^4} + O\Big(\frac{1}{k^5}\Big)\Big).$$ In particular one gets that asymptotically $$b_j \sim \frac{1}{(\varepsilon_m(m-1))^3} \frac{3}{2} k^4 \frac{1}{4k} \sim \bigg(\frac{k}{\varepsilon_m m}\bigg)^3.$$ This immediately allows us to bound the first order derivative of $A_j$ as asked in point 2: Indeed, it is bounded above by $2 b_j \ell_{j-1}^3$ where $\ell_{j-1}$ is again the length of $J_{j-1}$, namely $\ell_j = \frac{\varepsilon_m(m-1)}{k(k+1)} \sim \frac{\varepsilon_m m}{k^2}$. It follows that for $m$ big enough: $$\displaystyle{\sup_{x\in I}|A_j'(x)|} \leq \frac{1}{k^3} \ll \frac{1}{\mu_m(x_j^*-x_{j-1}^*)} \sim \bigg(\frac{k}{\varepsilon_m m}\bigg)^2.$$ The second order derivative of $A_j(x)$ can be easily computed to be bounded by $4 b_j \ell_j^2$. 
Also remark that the conditions that $|f|$ is bounded by $M$ and that $f'$ is Hölder, say $|f'(x) - f'(y)| \leq K |x-y|^\gamma$, together give a uniform $L_\infty$ bound of $|f'|$ by $2M + K$. Summing up, we obtain: $$|\hat f_m''(x_j^*)| \lesssim b_j \ell_j^3 \mu_m \sim \frac{1}{k^3\varepsilon_m m}$$ (here and in the following we use the symbol $\lesssim$ to stress that we work up to constants and to higher order terms). The leading term of the remainder $\hat R_j$ of the Taylor expansion of $\hat f_m$ near $x_j^*$ is $$\hat f_m''(x_j^*) |x-x_j^*|^2 \sim |\hat f_m''(x_j^*)| \ell_j^2 \sim \frac{\varepsilon_m m}{k^7}.$$ Using Lemmas \[lemma:ch4bounds\] and \[lemma:ch4abc\] (taking into consideration Remark \[rmk:nonlinear\]) we obtain $$\begin{aligned} \int_{\varepsilon_m}^{\infty} |f(x) - \hat f_m(x)|^2 \nu_0(dx) &\lesssim \sum_{j=2}^{j_{\max}} \int_{J_j} |f(x) - \hat f_m(x)|^2 \nu_0(dx) + \int_{H(m)}^\infty |f(x)-\hat f_m(x)|^2 \nu_0(dx) \nonumber \\ &\lesssim \sum_{k=\frac{\varepsilon_m m}{H(m)}}^{m}\mu_m \bigg( \frac{(\varepsilon_m m)^{2+2\gamma}}{k^{4+4\gamma}} + \frac{(\varepsilon_m m)^2}{k^{14}}\bigg) + \frac{1}{H(m)}\sup_{x\geq H(m)}f(x)^2 \label{eq:xquadro} \\ &\lesssim \bigg(\frac{H(m)^{3+4\gamma}}{(\varepsilon_m m)^{2+2\gamma}} + \frac{H(m)^{13}}{(\varepsilon_m m)^{10}}\bigg) + \frac{1}{H(m)}. \nonumber \end{aligned}$$ It is easy to see that, since $0 < \gamma \leq 1$, as soon as the first term converges, it does so more slowly than the second one. Thus, an optimal choice for $H(m)$ is given by $\sqrt{\varepsilon_m m}$, which gives a rate of convergence: $$L_2(f,\hat f_m)^2 \lesssim \frac{1}{\sqrt{\varepsilon_m m}}.$$ This directly gives a bound on $H(f, \hat f_m)$. Also, the bound on the term $A_m^2(f)$, which is $L_2(\sqrt f,\widehat{\sqrt{f}}_m)^2$, follows as well, since $f \in {\ensuremath {\mathscr{F}}}_{(\gamma,K,\kappa,M)}^I$ implies $\sqrt{f} \in {\ensuremath {\mathscr{F}}}_{(\gamma, \frac{K}{\sqrt\kappa}, \sqrt \kappa, \sqrt M)}^I$.
Finally, the term $B_m^2(f)$ contributes with the same rates as those in : Using Lemma \[lemma:ch4abc\], $$\begin{aligned} B_m^2(f) &\lesssim \sum_{j=2}^{\lceil m-\frac{\varepsilon_m(m-1)}{H(m)} \rceil} \nu_0(J_j) \|R_j\|_{L_\infty}^2 + \nu_0([H(m), \infty))\\ &\lesssim \mu_m \sum_{k=\frac{\varepsilon_m (m-1)}{H(m)}}^m \Big(\frac{\varepsilon_m m}{k^2}\Big)^{2+2\gamma} + \frac{1}{H(m)}\\ &\lesssim \frac{H(m)^{3+4\gamma}}{(\varepsilon_m m)^{2+2\gamma}} + \frac{1}{H(m)}. \end{aligned}$$ Proof of Example \[ex:ch4CPP\] {#subsec:ch4ex1} ------------------------------ In this case, since $\varepsilon_m = 0$, the proofs of Theorems \[ch4teo1\] and \[ch4teo2\] simplify and give better estimates near zero, namely: $$\begin{aligned} \Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}}, {\ensuremath {\mathscr{W}}}_n^{\nu_0}) &\leq C_1 \bigg(\sqrt{T_n}\sup_{f\in {\ensuremath {\mathscr{F}}}}\Big(A_m(f)+ B_m(f)+L_2(f,\hat f_m)\Big)+\sqrt{\frac{m^2}{T_n}}\bigg)\nonumber \\ \Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}}, {\ensuremath {\mathscr{W}}}_n^{\nu_0}) &\leq C_2\bigg(\sqrt{n\Delta_n^2}+\frac{m\ln m}{\sqrt{n}}+\sqrt{T_n}\sup_{f\in{\ensuremath {\mathscr{F}}}}\Big( A_m(f)+ B_m(f)+H\big(f,\hat f_m\big)\Big) \bigg) \label{eq:CPP},\end{aligned}$$ where $C_1$, $C_2$ depend only on $\kappa,M$ and $$\begin{aligned} &A_m(f)=\sqrt{\int_0^1\Big(\widehat{\sqrt f}_m(y)-\sqrt{f(y)}\Big)^2dy},\quad B_m(f)=\sum_{j=1}^m\bigg(\sqrt m\int_{J_j}\sqrt{f(y)}dy-\sqrt{\theta_j}\bigg)^2.\end{aligned}$$ As a consequence we get: $$\begin{aligned} \Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{\nu_0})&\leq O\bigg(\sqrt{T_n}(m^{-\frac{3}{2}}+m^{-1-\gamma})+\sqrt{m^2T_n^{-1}}\bigg).\end{aligned}$$ To get the bounds in the statement of Example \[ex:ch4CPP\] the optimal choices are $m_n = T_n^{\frac{1}{2+\gamma}}$ when $\gamma \leq \frac{1}{2}$ and $m_n = T_n^{\frac{2}{5}}$ otherwise. 
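The optimal choices of $m_n$ can be double-checked by a brute-force minimization of the bound above. A numerical sketch with illustrative values (here $\gamma > \frac{1}{2}$, so that the claimed exponent is $\frac{2}{5}$); the constants are made up and the check is not part of the proof:

```python
import numpy as np

gamma, T = 1.0, 1e10   # illustrative values with gamma > 1/2

# the bound sqrt(T_n)(m^{-3/2} + m^{-1-gamma}) + sqrt(m^2 / T_n), up to constants
def bound(m):
    return np.sqrt(T) * (m ** -1.5 + m ** (-1 - gamma)) + m / np.sqrt(T)

ms = np.logspace(1, 8, 4000)        # grid of candidate values of m
m_star = ms[np.argmin(bound(ms))]

# the empirical minimizer should scale like T_n^{2/5}
assert abs(np.log(m_star) / np.log(T) - 0.4) < 0.05
```

For $\gamma \leq \frac{1}{2}$ the term $m^{-1-\gamma}$ dominates $m^{-3/2}$ instead, and the same balancing argument yields the exponent $\frac{1}{2+\gamma}$.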
Concerning the discrete model, we have: $$\begin{aligned} \Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{{\ensuremath{\textnormal{Leb}}}},{\ensuremath {\mathscr{W}}}_n^{\nu_0})&\leq O\bigg(\sqrt{n\Delta_n^2}+\frac{m\ln m}{\sqrt{n}}+ \sqrt{n\Delta_n}\big(m^{-\frac{3}{2}}+m^{-1-\gamma}\big)\bigg).\end{aligned}$$ There are four possible scenarios: If $\gamma>\frac{1}{2}$ and $\Delta_n=n^{-\beta}$ with $\frac{1}{2}<\beta<\frac{3}{4}$ (resp. $\beta\geq \frac{3}{4}$) then the optimal choice is $m_n=n^{1-\beta}$ (resp. $m_n=n^{\frac{2-\beta}{5}}$). If $\gamma\leq\frac{1}{2}$ and $\Delta_n=n^{-\beta}$ with $\frac{1}{2}<\beta<\frac{2+2\gamma}{3+2\gamma}$ (resp. $\beta\geq \frac{2+2\gamma}{3+2\gamma}$) then the optimal choice is $m_n=n^{\frac{2-\beta}{4+2\gamma}}$ (resp. $m_n=n^{1-\beta}$). Proof of Example \[ch4ex2\] {#subsec:ch4ex2} --------------------------- As in Examples \[ex:ch4esempi\], we let $\varepsilon_m=m^{-1-\alpha}$ and consider the standard triangular/trapezoidal $V_j$’s. In particular, $\hat f_m$ will be piecewise linear. Condition (C2’) is satisfied and we have $C_m(f)=O(\varepsilon_m)$. This bound, combined with the one obtained in , allows us to conclude that an upper bound for the rate of convergence of $\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})$ is given by: $$\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})\leq C \bigg(\sqrt{\sqrt{n^2\Delta_n}\varepsilon_m}+\sqrt{n\Delta_n}\Big(\frac{\ln (\varepsilon_m^{-1})}{m}\Big)^{2}+\frac{m\ln m}{\sqrt n}+\sqrt{n\Delta_n^2}\ln (\varepsilon_m^{-1}) \bigg),$$ where $C$ is a constant only depending on the bound on $\lambda > 0$. The sequences $\varepsilon_m$ and $m$ can be chosen arbitrarily to optimize the rate of convergence.
It is clear from the expression above that, if we take $\varepsilon_m = m^{-1-\alpha}$ with $\alpha > 0$, bigger values of $\alpha$ reduce the first term $\sqrt{\sqrt{n^2\Delta_n}\varepsilon_m}$, while changing the other terms only by constants. It can be seen that taking $\alpha \geq 15$ is enough to make the first term negligible with respect to the others. In that case, and under the assumption $\Delta_n = n^{-\beta}$, the optimal choice for $m$ is $m = n^\delta$ with $\delta = \frac{5-4\beta}{14}$. With these choices, the global rate of convergence is $$\Delta({\ensuremath {\mathscr{Q}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) = \begin{cases} O\big(n^{\frac{1}{2}-\beta} \ln n\big) & \text{if } \frac{1}{2} < \beta \leq \frac{9}{10}\\ O\big(n^{-\frac{1+2\beta}{7}} \ln n\big) & \text{if } \frac{9}{10} < \beta < 1. \end{cases}$$ In the same way one can find $$\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\bigg( \sqrt{n\Delta_n} \Big(\frac{\ln m}{m}\Big)^2 \sqrt{\ln(\varepsilon_m^{-1})} + \sqrt{\frac{m^2}{n\Delta_n \ln(\varepsilon_m)}} + \sqrt{n \Delta_n} \varepsilon_m \bigg).$$ As above, we can freely choose $\varepsilon_m$ and $m$ (in a possibly different way from above). Again, as soon as $\varepsilon_m = m^{-1-\alpha}$ with $\alpha \geq 1$ the third term plays no role, so that we can choose $\varepsilon_m = m^{-2}$.
Letting $\Delta_n = n^{-\beta}$, $0 < \beta < 1$, and $m = n^\delta$, an optimal choice is $\delta = \frac{1-\beta}{3}$, giving $$\Delta({\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\Big(n^{\frac{\beta-1}{6}} \big(\ln n\big)^{\frac{5}{2}}\Big) = O\Big(T_n^{-\frac{1}{6}} \big(\ln T_n\big)^\frac{5}{2}\Big).$$ Proof of Example \[ex3\] {#subsec:ch4ex3} ------------------------ Using the computations in , combined with $\big(f(y)-\hat f_m(y)\big)^2\leq 4 \exp(-2\lambda_0 y^3) \leq 4 \exp(-2\lambda_0 H(m)^3)$ for all $y \geq H(m)$, we obtain: $$\begin{aligned} \int_{\varepsilon_m}^\infty \big|f(x) - \hat f_m(x)\big|^2 \nu_0(dx) &\lesssim \frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \int_{H(m)}^\infty \big|f(x) - \hat f_m(x)\big|^2 \nu_0(dx)\\ &\lesssim \frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \frac{e^{-2\lambda_0 H(m)^3}}{H(m)}. \end{aligned}$$ As in Example \[ex:ch4esempi\], this bounds directly $H^2(f, \hat f_m)$ and $A_m^2(f)$. Again, the first part of the integral appearing in $B_m^2(f)$ is asymptotically smaller than the one appearing above: $$\begin{aligned} B_m^2(f) &= \sum_{j=1}^m \bigg(\frac{1}{\sqrt{\mu_m}} \int_{J_j} \sqrt{f} \nu_0 - \sqrt{\int_{J_j} f(x) \nu_0(dx)}\bigg)^2\\ &\lesssim \frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \sum_{k=1}^{\frac{\varepsilon_m m}{H(m)}} \bigg( \frac{1}{\sqrt{\mu_m}} \int_{J_{m-k}} \sqrt{f} \nu_0 - \sqrt{\int_{J_{m-k}} f(x) \nu_0(dx)}\bigg)^2\\ &\lesssim \frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \frac{e^{-\lambda_0 H(m)^3}}{H(m)}. \end{aligned}$$ As above, for the last inequality we have bounded $f$ in each $J_{m-k}$, $k \leq \frac{\varepsilon_m m}{H(m)}$, with $\exp(-\lambda_0 H(m)^3)$. Thus the global rate of convergence of $L_2(f,\hat f_m)^2 + A_m^2(f) + B_m^2(f)$ is $\frac{H(m)^{7}}{(\varepsilon_m m)^{4}} + \frac{e^{-\lambda_0 H(m)^3}}{H(m)}$. Concerning $C_m(f)$, we have $C_m^2(f) = \int_0^{\varepsilon_m} \frac{(\sqrt{f(x)} - 1)^2}{x^2} dx \lesssim \varepsilon_m^5$. 
To write the global rate of convergence of the Le Cam distance in the discrete setting we make the choice $H(m) = \sqrt[3]{\frac{\eta}{\lambda_0}\ln m}$, for some constant $\eta$, and obtain: $$\begin{aligned} \Delta({\ensuremath {\mathscr{Q}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) &= O \bigg( \frac{\sqrt{n} \Delta_n}{\varepsilon_m} + \frac{m \ln m}{\sqrt{n}} + \sqrt{n \Delta_n} \Big( \frac{(\ln m)^{\frac{7}{6}}}{(\varepsilon_m m)^2} + \frac{m^{-\frac{\eta}{2}}}{\sqrt[3]{\ln m}} \Big) + \sqrt[4]{n^2 \Delta_n \varepsilon_m^5}\bigg). \end{aligned}$$ Letting $\Delta_n = n^{-\beta}$, $\varepsilon_m = n^{-\alpha}$ and $m = n^\delta$, optimal choices give $\alpha = \frac{\beta}{3}$ and $\delta = \frac{1}{3}+\frac{\beta}{18}$. We can also take $\eta = 2$ to get a final rate of convergence: $$\Delta({\ensuremath {\mathscr{Q}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0}) = \begin{cases} O\big(n^{\frac{1}{2} - \frac{2}{3}\beta}\big)& \text{if } \frac{3}{4} < \beta < \frac{12}{13}\\ O\big(n^{-\frac{1}{6}+\frac{\beta}{18}} (\ln n)^{\frac{7}{6}}\big) &\text{if } \frac{12}{13} \leq \beta < 1. 
\end{cases}$$ In the continuous setting, we have $$\Delta({\ensuremath {\mathscr{P}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\bigg(\sqrt{n\Delta_n} \Big( \frac{(\ln m)^\frac{7}{6}}{(\varepsilon_m m)^2} + \frac{m^{-\frac{\eta}{2}}}{\sqrt[3]{\ln m}} + \varepsilon_m^{\frac{5}{2}}\Big) + \sqrt{\frac{\varepsilon_m m^2}{n\Delta_n}} \bigg).$$ Using $T_n = n\Delta_n$, $\varepsilon_m = T_n^{-\alpha}$ and $m = T_n^\delta$, optimal choices are given by $\alpha = \frac{4}{17}$, $\delta = \frac{9}{17}$; choosing any $\eta \geq 3$ we get the rate of convergence $$\Delta({\ensuremath {\mathscr{P}}}_{n}^{\nu_0},{\ensuremath {\mathscr{W}}}_n^{\nu_0})=O\big(T_n^{-\frac{3}{34}} (\ln T_n)^{\frac{7}{6}}\big).$$ Background ========== Le Cam theory of statistical experiments {#sec:ch4lecam} ---------------------------------------- A *statistical model* or *experiment* is a triplet ${\ensuremath {\mathscr{P}}}_j=({\ensuremath {\mathscr{X}}}_j,{\ensuremath {\mathscr{A}}}_j,\{P_{j,\theta}; \theta\in\Theta\})$ where $\{P_{j,\theta}; \theta\in\Theta\}$ is a family of probability distributions all defined on the same $\sigma$-field ${\ensuremath {\mathscr{A}}}_j$ over the *sample space* ${\ensuremath {\mathscr{X}}}_j$ and $\Theta$ is the *parameter space*. The *deficiency* $\delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)$ of ${\ensuremath {\mathscr{P}}}_1$ with respect to ${\ensuremath {\mathscr{P}}}_2$ quantifies “how much information we lose” by using ${\ensuremath {\mathscr{P}}}_1$ instead of ${\ensuremath {\mathscr{P}}}_2$ and it is defined as $\delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=\inf_K\sup_{\theta\in \Theta}||KP_{1,\theta}-P_{2,\theta}||_{TV},$ where TV stands for “total variation” and the infimum is taken over all “transitions” $K$ (see [@lecam], page 18). The general definition of transition is quite involved but, for our purposes, it is enough to know that Markov kernels are special cases of transitions. 
By $KP_{1,\theta}$ we mean the image measure of $P_{1,\theta}$ via the Markov kernel $K$, that is $$KP_{1,\theta}(A)=\int_{{\ensuremath {\mathscr{X}}}_1}K(x,A)P_{1,\theta}(dx),\quad\forall A\in {\ensuremath {\mathscr{A}}}_2.$$ The experiment $K{\ensuremath {\mathscr{P}}}_1=({\ensuremath {\mathscr{X}}}_2,{\ensuremath {\mathscr{A}}}_2,\{KP_{1,\theta}; \theta\in\Theta\})$ is called a *randomization* of ${\ensuremath {\mathscr{P}}}_1$ by the Markov kernel $K$. When the kernel $K$ is deterministic, that is $K(x,A)={\ensuremath {\mathbb{I}}}_{A}(S(x))$ for some random variable $S:({\ensuremath {\mathscr{X}}}_1,{\ensuremath {\mathscr{A}}}_1)\to({\ensuremath {\mathscr{X}}}_2,{\ensuremath {\mathscr{A}}}_2)$, the experiment $K{\ensuremath {\mathscr{P}}}_1$ is called the *image experiment by the random variable* $S$. The Le Cam distance is defined as the symmetrization of $\delta$, $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2):=\max\big(\delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2),\delta({\ensuremath {\mathscr{P}}}_2,{\ensuremath {\mathscr{P}}}_1)\big)$, and it defines a pseudometric. When $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=0$ the two statistical models are said to be *equivalent*. Two sequences of statistical models $({\ensuremath {\mathscr{P}}}_{1}^n)_{n\in{\ensuremath {\mathbb{N}}}}$ and $({\ensuremath {\mathscr{P}}}_{2}^n)_{n\in{\ensuremath {\mathbb{N}}}}$ are called *asymptotically equivalent* if $\Delta({\ensuremath {\mathscr{P}}}_{1}^n,{\ensuremath {\mathscr{P}}}_{2}^n)$ tends to zero as $n$ goes to infinity. A very interesting feature of the Le Cam distance is that it can also be translated into terms of statistical decision theory. Let ${\ensuremath {\mathscr{D}}}$ be any (measurable) decision space and let $L:\Theta\times {\ensuremath {\mathscr{D}}}\to[0,\infty)$ denote a loss function. Let $\|L\|=\sup_{(\theta,z)\in\Theta\times{\ensuremath {\mathscr{D}}}}L(\theta,z)$. Let $\pi_i$ denote a (randomized) decision procedure in the $i$-th experiment. Denote by $R_i(\pi_i,L,\theta)$ the risk from using procedure $\pi_i$ when $L$ is the loss function and $\theta$ is the true value of the parameter.
Then, an equivalent definition of the deficiency is: $$\begin{aligned} \delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=\inf_{\pi_1}\sup_{\pi_2}\sup_{\theta\in\Theta}\sup_{L:\|L\|=1}\big|R_1(\pi_1,L,\theta)-R_2(\pi_2,L,\theta)\big|.\end{aligned}$$ Thus $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)<\varepsilon$ means that for every procedure $\pi_i$ in problem $i$ there is a procedure $\pi_j$ in problem $j$, $\{i,j\}=\{1,2\}$, with risks differing by at most $\varepsilon$, uniformly over all bounded $L$ and $\theta\in\Theta$. In particular, when minimax rates of convergence in a nonparametric estimation problem are obtained in one experiment, the same rates automatically hold in any asymptotically equivalent experiment. There is more: When explicit transformations from one experiment to another are obtained, statistical procedures can be carried over from one experiment to the other one. There are various techniques to bound the Le Cam distance. We report below only the properties that are useful for our purposes. For the proofs see, e.g., [@lecam; @strasser]. \[ch4delta0\] Let ${\ensuremath {\mathscr{P}}}_j=({\ensuremath {\mathscr{X}}},{\ensuremath {\mathscr{A}}},\{P_{j,\theta}; \theta\in\Theta\})$, $j=1,2$, be two statistical models having the same sample space and define $\Delta_0({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2):=\sup_{\theta\in\Theta}\|P_{1,\theta}-P_{2,\theta}\|_{TV}.$ Then, $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)\leq \Delta_0({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)$. In particular, Property \[ch4delta0\] allows us to bound the Le Cam distance between statistical models sharing the same sample space by means of classical bounds for the total variation distance. To that aim, we collect below some useful results. 
\[ch4h\] Let $P_1$ and $P_2$ be two probability measures on ${\ensuremath {\mathscr{X}}}$, dominated by a common measure $\xi$, with densities $g_{i}=\frac{dP_{i}}{d\xi}$, $i=1,2$. Define $$\begin{aligned} L_1(P_1,P_2)&=\int_{{\ensuremath {\mathscr{X}}}} |g_{1}(x)-g_{2}(x)|\xi(dx), \\ H(P_1,P_2)&=\bigg(\int_{{\ensuremath {\mathscr{X}}}} \Big(\sqrt{g_{1}(x)}-\sqrt{g_{2}(x)}\Big)^2\xi(dx)\bigg)^{1/2}. \end{aligned}$$ Then, $$\|P_1-P_2\|_{TV}=\frac{1}{2}L_1(P_1,P_2)\leq H(P_1,P_2).$$ \[ch4hp\] Let $P$ and $Q$ be two product measures defined on the same sample space: $P=\otimes_{i=1}^n P_i$, $Q=\otimes_{i=1}^n Q_i$. Then $$H ^2(P,Q)\leq \sum_{i=1}^nH^2(P_i,Q_i).$$ \[fact:ch4hellingerpoisson\] Let $P_i$, $i=1,2$, be the law of a Poisson random variable with mean $\lambda_i$. Then $$H^2(P_1,P_2)=1-\exp\bigg(-\frac{1}{2}\Big(\sqrt{\lambda_1}-\sqrt{\lambda_2}\Big)^2\bigg).$$ \[fact:ch4gaussiane\] Let $Q_1\sim{\ensuremath {\mathscr{Nn}}}(\mu_1,\sigma_1^2)$ and $Q_2\sim{\ensuremath {\mathscr{Nn}}}(\mu_2,\sigma_2^2)$. Then $$\|Q_1-Q_2\|_{TV}\leq \sqrt{2\bigg(1-\frac{\sigma_1^2}{\sigma_2^2}\bigg)^2+\frac{(\mu_1-\mu_2)^2}{2\sigma_2^2}}.$$ \[fact:ch4processigaussiani\] For $i=1,2$, let $Q_i$ be the law on $(C,{\ensuremath {\mathscr{C}}})$ of a Gaussian process of the form $$X^i_t=\int_{0}^t h_i(s)ds+ \int_0^t \sigma(s)dW_s,\ t\in[0,T]$$ where $h_i\in L_2([0,T])$ and $\sigma:[0,T]\to{\ensuremath {\mathbb{R}}}_{>0}$. Then: $$L_1\big(Q_1,Q_2\big)\leq \sqrt{\int_{0}^T\frac{\big(h_1(s)-h_2(s)\big)^2}{\sigma^2(s)}ds}.$$ \[ch4fatto3\] Let ${\ensuremath {\mathscr{P}}}_i=({\ensuremath {\mathscr{X}}}_i,{\ensuremath {\mathscr{A}}}_i,\{P_{i,\theta}, \theta\in\Theta\})$, $i=1,2$, be two statistical models. Let $S:{\ensuremath {\mathscr{X}}}_1\to{\ensuremath {\mathscr{X}}}_2$ be a sufficient statistic such that the distribution of $S$ under $P_{1,\theta}$ is equal to $P_{2,\theta}$. Then $\Delta({\ensuremath {\mathscr{P}}}_1,{\ensuremath {\mathscr{P}}}_2)=0$.
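As a quick numerical sanity check of Fact \[ch4h\] (an illustrative sketch added here; the Poisson example and the truncation level are our choices, not part of the statement), the $L_1$ and Hellinger distances between two Poisson laws can be computed by truncating the defining series, and the relation $\|P_1-P_2\|_{TV}=\frac{1}{2}L_1(P_1,P_2)\leq H(P_1,P_2)$ verified directly:

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) for X ~ Poisson(lam), computed via logs for numerical stability
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def l1_h_poisson(lam1, lam2, kmax=500):
    # L1 and Hellinger distances of Fact [ch4h], taking xi = counting measure;
    # the series are truncated at kmax (the tail mass is negligible here)
    l1 = sum(abs(poisson_pmf(k, lam1) - poisson_pmf(k, lam2)) for k in range(kmax))
    h2 = sum((math.sqrt(poisson_pmf(k, lam1)) - math.sqrt(poisson_pmf(k, lam2))) ** 2
             for k in range(kmax))
    return l1, math.sqrt(h2)

l1, h = l1_h_poisson(3.0, 4.0)
tv = 0.5 * l1   # ||P1 - P2||_TV = L1 / 2, which Fact [ch4h] bounds by H
```

The same truncated sums also make the subadditivity of Fact \[ch4hp\] easy to check on small product measures.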
\[ch4independentkernels\] Let $P_i$ be a probability measure on $(E_i,\mathcal{E}_i)$ and $K_i$ a Markov kernel from $(E_i,\mathcal{E}_i)$ to $(G_i,\mathcal G_i)$. One can then define a Markov kernel $K$ from $(\prod_{i=1}^n E_i,\otimes_{i=1}^n \mathcal{E}_i)$ to $(\prod_{i=1}^n G_i,\otimes_{i=1}^n \mathcal{G}_i)$ in the following way: $$K(x_1,\dots,x_n; A_1\times\dots\times A_n):=\prod_{i=1}^nK_i(x_i,A_i),\quad \forall x_i\in E_i,\ \forall A_i\in \mathcal{G}_i.$$ Clearly $K\otimes_{i=1}^nP_i=\otimes_{i=1}^nK_iP_i$. Finally, we recall the following result that allows us to bound the Le Cam distance between Poisson and Gaussian variables. \[ch4teomisto\](See [@BC04], Theorem 4) Let $\tilde P_{\lambda}$ be the law of a Poisson random variable $\tilde X_{\lambda}$ with mean $\lambda$. Furthermore, let $P_{\lambda}^*$ be the law of a random variable $Z^*_{\lambda}$ with Gaussian distribution ${\ensuremath {\mathscr{Nn}}}(2\sqrt{\lambda},1)$, and let $\tilde U$ be a uniform variable on $\big[-\frac{1}{2},\frac{1}{2}\big)$ independent of $\tilde X_{\lambda}$. Define $$\tilde Z_{\lambda}=2\textnormal{sgn}\big(\tilde X_{\lambda}+\tilde U\big)\sqrt{\big|\tilde X_{\lambda}+\tilde U\big|}.$$ Then, denoting by $P_{\lambda}$ the law of $\tilde Z_{\lambda}$, $$H ^2\big(P_{\lambda}, P_{\lambda}^*\big)=O(\lambda^{-1}).$$ Thanks to Theorem \[ch4teomisto\], denoting by $\Lambda$ a subset of ${\ensuremath {\mathbb{R}}}_{>0}$, by $\tilde {\ensuremath {\mathscr{P}}}$ (resp. ${\ensuremath {\mathscr{P}}}^*$) the statistical model associated with the family of probabilities $\{\tilde P_\lambda: \lambda \in \Lambda\}$ (resp. $\{P_\lambda^* : \lambda \in \Lambda\}$), we have $$\Delta\big(\tilde {\ensuremath {\mathscr{P}}}, {\ensuremath {\mathscr{P}}}^*\big) \leq \sup_{\lambda \in \Lambda} \frac{C}{\sqrt{\lambda}},$$ for some constant $C$. Indeed, the correspondence associating $\tilde Z_\lambda$ to $\tilde X_\lambda$ defines a Markov kernel; conversely, associating to $\tilde Z_\lambda$ the closest integer to $\big(\tilde Z_\lambda/2\big)^2$ defines a Markov kernel going in the other direction.
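The two kernels in this correspondence are straightforward to simulate. The sketch below is illustrative only; for the reverse kernel we round the signed square $\textnormal{sgn}(z)\,z^2/4$, which agrees with rounding $(z/2)^2$ whenever $z>0$ and handles the (rare) negative values arising from $\tilde X_{\lambda}=0$:

```python
import math
import random

def poisson_to_gaussian(x, rng):
    # forward kernel of Theorem [ch4teomisto]:
    # Z = 2 sgn(X + U) sqrt(|X + U|), with U ~ Uniform[-1/2, 1/2) independent of X
    s = x + rng.uniform(-0.5, 0.5)
    return 2.0 * math.copysign(1.0, s) * math.sqrt(abs(s))

def gaussian_to_poisson(z):
    # reverse kernel: closest nonnegative integer to the signed square sgn(z) z^2 / 4
    return max(0, round(math.copysign(z * z / 4.0, z)))
```

Since $|U|<\frac{1}{2}$, the reverse kernel recovers the original Poisson draw exactly, path by path, which is what makes the two models exactly as informative as one another up to the Gaussian approximation error of the theorem.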
Lévy processes {#sec:ch4levy} -------------- A stochastic process $\{X_t:t\geq 0\}$ on ${\ensuremath {\mathbb{R}}}$ defined on a probability space $(\Omega,{\ensuremath {\mathscr{A}}},{\ensuremath {\mathbb{P}}})$ is called a *Lévy process* if the following conditions are satisfied. 1. $X_0=0$ ${\ensuremath {\mathbb{P}}}$-a.s. 2. For any choice of $n\geq 1$ and $0\leq t_0<t_1<\ldots<t_n$, the random variables $X_{t_0}$, $X_{t_1}-X_{t_0},\dots ,X_{t_n}-X_{t_{n-1}}$ are independent. 3. The distribution of $X_{s+t}-X_s$ does not depend on $s$. 4. There is $\Omega_0\in {\ensuremath {\mathscr{A}}}$ with ${\ensuremath {\mathbb{P}}}(\Omega_0)=1$ such that, for every $\omega\in \Omega_0$, $X_t(\omega)$ is right-continuous in $t\geq 0$ and has left limits in $t>0$. 5. It is stochastically continuous. Thanks to the *Lévy-Khintchine formula*, the characteristic function of any Lévy process $\{X_t\}$ can be expressed, for all $u$ in ${\ensuremath {\mathbb{R}}}$, as: $$\label{caratteristica} {\ensuremath {\mathbb{E}}}\big[e^{iuX_t}\big]=\exp\bigg(t\Big(iub-\frac{u^2\sigma^2}{2}-\int_{{\ensuremath {\mathbb{R}}}}(1-e^{iuy}+iuy{\ensuremath {\mathbb{I}}}_{\vert y\vert \leq 1})\nu(dy)\Big)\bigg),$$ where $b,\sigma\in {\ensuremath {\mathbb{R}}}$ and $\nu$ is a measure on ${\ensuremath {\mathbb{R}}}$ satisfying $$\nu(\{0\})=0 \textnormal{ and } \int_{{\ensuremath {\mathbb{R}}}}(|y|^2\wedge 1)\nu(dy)<\infty.$$ In the sequel we shall refer to $(b,\sigma^2,\nu)$ as the characteristic triplet of the process $\{X_t\}$ and $\nu$ will be called the *Lévy measure*. This triplet uniquely characterizes the law of the process $\{X_t\}$. Let $D=D([0,\infty),{\ensuremath {\mathbb{R}}})$ be the space of mappings $\omega$ from $[0,\infty)$ into ${\ensuremath {\mathbb{R}}}$ that are right-continuous with left limits.
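As a concrete check of the Lévy-Khintchine formula (an illustrative sketch we add here; the single-atom Lévy measure is our choice), take the pure-jump triplet $(0,0,\nu)$ with $\nu=\lambda\delta_a$ and $|a|\leq 1$. The characteristic exponent then reduces to $t\lambda(e^{iua}-1-iua)$, the exponent of the compensated Poisson process $X_t=aN_t-\lambda a t$ with $N_t\sim\textnormal{Poisson}(\lambda t)$, and the two sides can be compared numerically:

```python
import cmath
import math

def cf_levy_single_jump(u, t, lam, a):
    # characteristic function of the Levy process with triplet (0, 0, nu),
    # nu = lam * delta_a and |a| <= 1: the Levy-Khintchine exponent reduces to
    # t * lam * (e^{iua} - 1 - iua), the small jumps being compensated
    return cmath.exp(t * lam * (cmath.exp(1j * u * a) - 1 - 1j * u * a))

def cf_compensated_poisson(u, t, lam, a, kmax=200):
    # direct computation for X_t = a * N_t - t * lam * a, N_t ~ Poisson(lam * t),
    # truncating the Poisson series at kmax
    mean = lam * t
    acc = 0j
    for k in range(kmax):
        pmf = math.exp(-mean + k * math.log(mean) - math.lgamma(k + 1))
        acc += pmf * cmath.exp(1j * u * (a * k - t * lam * a))
    return acc
```

The agreement of the two functions is a direct (finite-$\nu$) instance of the formula above; the general case only adds the Gaussian part and the compensation of small jumps.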
Define the *canonical process* $x:D\to D$ by $$\forall \omega\in D,\quad x_t(\omega)=\omega_t,\;\;\forall t\geq 0.$$ Let ${\ensuremath {\mathscr{D}}}_t$ and ${\ensuremath {\mathscr{D}}}$ be the $\sigma$-algebras generated by $\{x_s:0\leq s\leq t\}$ and $\{x_s:0\leq s<\infty\}$, respectively (here, we use the same notations as in [@sato]). By condition (4) above, any Lévy process on ${\ensuremath {\mathbb{R}}}$ induces a probability measure $P$ on $(D,{\ensuremath {\mathscr{D}}})$. Thus $\{X_t\}$ on the probability space $(D,{\ensuremath {\mathscr{D}}},P)$ is identical in law to the original Lévy process. By saying that $(\{x_t\},P)$ is a Lévy process, we mean that $\{x_t:t\geq 0\}$ is a Lévy process under the probability measure $P$ on $(D,{\ensuremath {\mathscr{D}}})$. For all $t>0$ we will denote by $P_t$ the restriction of $P$ to ${\ensuremath {\mathscr{D}}}_t$. In the case where $\int_{|y|\leq 1}|y|\nu(dy)<\infty$, we set $\gamma^{\nu}:=\int_{|y|\leq 1}y\nu(dy)$. Note that, if $\nu$ is a finite Lévy measure, then the process having characteristic triplet $(\gamma^{\nu},0,\nu)$ is a compound Poisson process. Here and in the sequel we will denote by $\Delta x_r$ the jump of the process $\{x_t\}$ at time $r$: $$\Delta x_r = x_r - \lim_{s \uparrow r} x_s.$$ For the proof of Theorems \[ch4teo1\] and \[ch4teo2\] we also need some results on the equivalence of measures for Lévy processes. By the notation $\ll$ we will mean “is absolutely continuous with respect to”. \[ch4teosato\] Let $P^1$ (resp. $P^2$) be the law induced on $(D,{\ensuremath {\mathscr{D}}})$ by a Lévy process of characteristic triplet $(\eta,0,\nu_1)$ (resp. $(0,0,\nu_2)$), where $$\label{ch4gamma*} \eta=\int_{\vert y \vert \leq 1}y(\nu_1-\nu_2)(dy)$$ is supposed to be finite.
Then $P_t^1\ll P_t^2$ for all $t\geq 0$ if and only if $\nu_1\ll\nu_2$ and the density $\frac{d\nu_1}{d\nu_2}$ satisfies $$\label{ch4Sato} \int\bigg(\sqrt{\frac{d\nu_1}{d\nu_2}(y)}-1\bigg)^2\nu_2(dy)<\infty.$$ Remark that the finiteness in (\[ch4Sato\]) implies that in (\[ch4gamma\*\]). When $P_t^1\ll P_t^2$, the density is $$\frac{dP_t^1}{dP_t^2}(x)=\exp(U_t(x)),$$ with $$\label{ch4U} U_t(x)=\lim_{\varepsilon\to 0} \bigg(\sum_{r\leq t}\ln \frac{d\nu_1}{d\nu_2}(\Delta x_r){\ensuremath {\mathbb{I}}}_{\vert\Delta x_r\vert>\varepsilon}- \int_{\vert y\vert > \varepsilon} t\bigg(\frac{d\nu_1}{d\nu_2}(y)-1\bigg)\nu_2(dy)\bigg),\\ P^{(0,0,\nu_2)}\textnormal{-a.s.}$$ The convergence in (\[ch4U\]) is uniform in $t$ on any bounded interval, $P^{(0,0,\nu_2)}$-a.s. Besides, $\{U_t(x)\}$ defined by (\[ch4U\]) is a Lévy process satisfying ${\ensuremath {\mathbb{E}}}_{P^{(0,0,\nu_2)}}[e^{U_t(x)}]=1$, $\forall t\geq 0$. Finally, let us consider the following result giving an explicit bound for the Hellinger distance (and hence, by Fact \[ch4h\], for the $L_1$ distance) between two Lévy processes with characteristic triplets of the form $(b_i,0,\nu_i)$, $i=1,2$, with $b_1-b_2=\int_{\vert y \vert \leq 1}y(\nu_1-\nu_2)(dy)$. \[teo:ch4bound\] For any $0<T<\infty$, let $P_T^i$ be the probability measure induced on $(D,{\ensuremath {\mathscr{D}}}_T)$ by a Lévy process of characteristic triplet $(b_i,0,\nu_i)$, $i=1,2$ and suppose that $\nu_1\ll\nu_2$. If $H^2(\nu_1,\nu_2):=\int\big(\sqrt{\frac{d\nu_1}{d\nu_2}(y)}-1\big)^2\nu_2(dy)<\infty,$ then $$H^2(P_T^1,P_T^2)\leq \frac{T}{2}H^2(\nu_1,\nu_2).$$ We conclude the Appendix with a technical statement about the Le Cam distance for finite variation models.
\[ch4LC\] $$\Delta({\ensuremath {\mathscr{P}}}_n^{\nu_0},{\ensuremath {\mathscr{P}}}_{n,FV}^{\nu_0})=0.$$ Consider the Markov kernels $\pi_1$, $\pi_2$ defined as follows $$\pi_1(x,A)={\ensuremath {\mathbb{I}}}_{A}(x^d), \quad \pi_2(x,A)={\ensuremath {\mathbb{I}}}_{A}(x-\cdot \gamma^{\nu_0}), \quad \forall x\in D, A \in {\ensuremath {\mathscr{D}}},$$ where we have denoted by $x^d$ the discontinuous part of the trajectory $x$, i.e. $x_t^d=\sum_{r \leq t}\Delta x_r$, and by $x-\cdot \gamma^{\nu_0}$ the trajectory $x_t-t\gamma^{\nu_0}$, $t\in[0,T_n]$. On the one hand we have: $$\begin{aligned} \pi_1 P^{(\gamma^{\nu-\nu_0},0,\nu)}(A)&=\int_D \pi_1(x,A)P^{(\gamma^{\nu-\nu_0},0,\nu)}(dx)=\int_D {\ensuremath {\mathbb{I}}}_A(x^d)P^{(\gamma^{\nu-\nu_0},0,\nu)}(dx)\\ &=P^{(\gamma^{\nu},0,\nu)}(A),\end{aligned}$$ where in the last equality we have used the fact that, under $P^{(\gamma^{\nu-\nu_0},0,\nu)}$, $\{x_t^d\}$ is a Lévy process with characteristic triplet $(\gamma^{\nu},0,\nu)$ (see [@sato], Theorem 19.3). On the other hand: $$\begin{aligned} \pi_2 P^{(\gamma^{\nu},0,\nu)}(A)&=\int_D \pi_2(x,A)P^{(\gamma^{\nu},0,\nu)}(dx)=\int_D {\ensuremath {\mathbb{I}}}_A(x-\cdot \gamma^{\nu_0})P^{(\gamma^{\nu},0,\nu)}(dx)\\ &=P^{(\gamma^{\nu-\nu_0},0,\nu)}(A),\end{aligned}$$ since, by definition, $\gamma^{\nu}-\gamma^{\nu_0}$ is equal to $\gamma^{\nu-\nu_0}$. The conclusion follows by the definition of the Le Cam distance. Acknowledgements {#acknowledgements .unnumbered} ---------------- I am very grateful to Markus Reiss for several interesting discussions and many insights; this paper would never have existed in the present form without his advice and encouragement. My deepest thanks go to the anonymous referee, whose insightful comments have greatly improved the exposition of the paper; some gaps in the proofs have been corrected thanks to his/her remarks.
--- abstract: | We give a general construction of debiased/locally robust/orthogonal (LR) moment functions for GMM, where the derivative with respect to first step nonparametric estimation is zero and equivalently first step estimation has no effect on the influence function. This construction consists of adding an estimator of the influence function adjustment term for first step nonparametric estimation to identifying or original moment conditions. We also give numerical methods for estimating LR moment functions that do not require an explicit formula for the adjustment term. LR moment conditions have reduced bias and so are important when the first step is machine learning. We derive LR moment conditions for dynamic discrete choice based on first step machine learning estimators of conditional choice probabilities. We provide simple and general asymptotic theory for LR estimators based on sample splitting. This theory uses the additive decomposition of LR moment conditions into an identifying condition and a first step influence adjustment. Our conditions require only mean square consistency and a few (generally either one or two) readily interpretable rate conditions. LR moment functions have the advantage of being less sensitive to first step estimation. Some LR moment functions are also doubly robust, meaning they hold if one first step is incorrect. We give novel classes of doubly robust moment functions and characterize double robustness. For doubly robust estimators our asymptotic theory only requires one rate condition. Keywords: Local robustness, orthogonal moments, double robustness, semiparametric estimation, bias, GMM. JEL classification: C13; C14; C21; D24 author: - | Victor Chernozhukov\ *MIT* - | Juan Carlos Escanciano\ *Indiana University* - | Hidehiko Ichimura\ *University of Tokyo* - | Whitney K. Newey\ *MIT* - | James M.
Robins\ *Harvard University* date: April 2018 title: Locally Robust Semiparametric Estimation --- Introduction ============ There are many economic parameters that depend on nonparametric or large dimensional first steps. Examples include dynamic discrete choice, games, average consumer surplus, and treatment effects. This paper shows how to construct moment functions for GMM estimators that are debiased/locally robust/orthogonal (LR), where moment conditions have a zero derivative with respect to the first step. We show that LR moment functions can be constructed by adding the influence function adjustment for first step estimation to the original moment functions. This construction can also be interpreted as a decomposition of LR moment functions into identifying moment functions and a first step influence function term. We use this decomposition to give simple and general conditions for root-n consistency and asymptotic normality, with different properties being assumed for the identifying and influence function terms. The conditions are easily interpretable mean square consistency and second order remainder conditions based on estimated moments that use cross-fitting (sample splitting). We also give numerical estimators of the influence function adjustment. LR moment functions have several advantages. LR moment conditions bias correct in a way that eliminates the large biases from plugging in first step machine learning estimators found in Belloni, Chernozhukov, and Hansen (2014). LR moment functions can be used to construct debiased/double machine learning (DML) estimators, as in Chernozhukov et al. (2017, 2018). We illustrate by deriving LR moment functions for dynamic discrete choice estimation based on conditional choice probabilities. We provide a DML estimator for dynamic discrete choice that uses first step machine learning of conditional choice probabilities. We find that it performs well in a Monte Carlo example. 
Such structural models provide a potentially important application of DML, because of potentially high dimensional state spaces. Adding the first step influence adjustment term provides a general way to construct LR moment conditions for structural models so that machine learning can be used for first step estimation of conditional choice probabilities, state transition distributions, and other unknown functions on which structural estimators depend. LR moment conditions also have the advantage of being relatively insensitive to small variation away from the first step true function. This robustness property is appealing in many settings where it may be difficult to get the first step completely correct. Many interesting and useful LR moment functions have the additional property that they are doubly robust (DR), meaning moment conditions hold when one first step is not correct. We give novel classes of DR moment conditions, including for average linear functionals of conditional expectations and probability densities. The construction of adding the first step influence function adjustment to an identifying moment function is useful to obtain these moment conditions. We also give necessary and sufficient conditions for a large class of moment functions to be DR. We find DR moments have simpler and more general conditions for asymptotic normality, which helps motivate our consideration of DR moment functions as special cases of LR ones. LR moment conditions also help minimize sensitivity to misspecification as in Bonhomme and Weidner (2018). LR moment conditions have smaller bias from first step estimation. We show that they have the small bias property of Newey, Hsieh, and Robins (2004), that the bias of the moments is of smaller order than the bias of the first step. This bias reduction leads to substantial improvements in finite sample properties in many cases relative to just using the original moment conditions. 
For dynamic discrete choice we find large bias reductions, moderate variance increases and even reductions in some cases, and coverage probabilities substantially closer to nominal. For machine learning estimators of the partially linear model, Chernozhukov et al. (2017, 2018) found bias reductions so large that the LR estimator is root-n consistent but the estimator based on the original moment condition is not. Substantial improvements were previously also found for density weighted averages by Newey, Hsieh, and Robins (2004, NHR). The twicing kernel estimators in NHR are numerically equal to LR estimators based on the original (before twicing) kernel, as shown in Newey, Hsieh, Robins (1998), and the twicing kernel estimators were shown to have smaller mean square error in large samples. Also, a Monte Carlo example in NHR finds that the mean square error (MSE) of the LR estimator has a smaller minimum and is flatter as a function of bandwidth than the MSE of Powell, Stock, and Stoker’s (1989) density weighted average derivative estimator. We expect similar finite sample improvements from LR moments in other cases. LR moment conditions have appeared in earlier work. They are semiparametric versions of Neyman (1959) C-alpha test scores for parametric models. Hasminskii and Ibragimov (1978) suggested LR estimation of functionals of a density and argued for their advantages over plug-in estimators. Pfanzagl and Wefelmeyer (1981) considered using LR moment conditions for improving the asymptotic efficiency of functionals of distribution estimators. Bickel and Ritov (1988) gave a LR estimator of the integrated squared density that attains root-n consistency under minimal conditions. The Robinson (1988) semiparametric regression and Ichimura (1993) index regression estimators are LR. Newey (1990) showed that LR moment conditions can be obtained as residuals from projections on the tangent set in a semiparametric model. 
Newey (1994a) showed that derivatives of an objective function where the first step has been “concentrated out” are LR, including the efficient score of a semiparametric model. NHR (1998, 2004) gave estimators of averages that are linear in density derivative functionals with remainder rates that are as fast as those in Bickel and Ritov (1988). Doubly robust moment functions have been constructed by Robins, Rotnitzky, and Zhao (1994, 1995), Robins and Rotnitzky (1995), Scharfstein, Rotnitzky, and Robins (1999), Robins, Rotnitzky, and van der Laan (2000), Robins and Rotnitzky (2001), Graham (2011), and Firpo and Rothe (2017). They are widely used for estimating treatment effects, e.g. Bang and Robins (2005). Van der Laan and Rubin (2006) developed targeted maximum likelihood to obtain a LR estimating equation based on the efficient influence function of a semiparametric model. Robins et al. (2008, 2017) showed that efficient influence functions are LR, characterized some doubly robust moment conditions, and developed higher order influence functions that can reduce bias. Belloni, Chernozhukov, and Wei (2013), Belloni, Chernozhukov, and Hansen (2014), Farrell (2015), Kandasamy et al. (2015), Belloni, Chernozhukov, Fernandez-Val, and Hansen (2016), and Athey, Imbens, and Wager (2017) gave LR estimators with machine learning first steps in several specific contexts. A main contribution of this paper is the construction of LR moment conditions from any moment condition and first step estimator that can result in a root-n consistent estimator of the parameter of interest. This construction is based on the limit of the first step when a data observation has a general distribution that allows for misspecification, similarly to Newey (1994). LR moment functions are constructed by adding to identifying moment functions the influence function of the true expectation of the identifying moment functions evaluated at the first step limit, i.e. 
by adding the influence function term that accounts for first step estimation. The addition of the influence adjustment “partials out” the first order effect of the first step on the moments. This construction of LR moments extends those cited above for first step density and distribution estimators to *any first step,* including instrumental variable estimators. Also, this construction is *estimator based* rather than model based as in van der Laan and Rubin (2006) and Robins et al. (2008, 2017). The construction depends only on the moment functions and the first step rather than on a semiparametric model. Also, we use the fundamental Gateaux derivative definition of the influence function to show LR rather than an embedding in a regular semiparametric model. The focus on the functional that is the true expected moments evaluated at the first step limit is the key to this construction. This focus should prove useful for constructing LR moments in many settings, including those where it has already been used to find the asymptotic variance of semiparametric estimators, such as Newey (1994a), Pakes and Olley (1995), Hahn (1998), Ai and Chen (2003), Hirano, Imbens, and Ridder (2003), Bajari, Hong, Krainer, and Nekipelov (2010), Bajari, Chernozhukov, Hong, and Nekipelov (2009), Hahn and Ridder (2013, 2016), Ackerberg, Chen, Hahn, and Liao (2014), and Hahn, Liao, and Ridder (2016). One can construct LR moment functions in each of these settings by adding the first step influence function derived for each case as an adjustment to the original, identifying moment functions. Another contribution is the development of LR moment conditions for dynamic discrete choice. We derive the influence adjustment for first step estimation of conditional choice probabilities as in Hotz and Miller (1993). We find encouraging Monte Carlo results when various machine learning methods are used to construct the first step.
We also give LR moment functions for conditional moment restrictions based on orthogonal instruments. An additional contribution is to provide general estimators of the influence adjustment term that can be used to construct LR moments without knowing their form. These methods estimate the adjustment term numerically, thus avoiding the need to know its form. It is beyond the scope of this paper to develop machine learning versions of these numerical estimators. Such estimators are developed by Chernozhukov, Newey, and Robins (2018) for average linear functionals of conditional expectations. Further contributions include novel classes of DR estimators, including linear functionals of nonparametric instrumental variables and density estimators, and a characterization of (necessary and sufficient conditions for) double robustness. We also give related, novel partial robustness results where original moment conditions are satisfied even when the first step is not equal to the truth. A main contribution is simple and general asymptotic theory for LR estimators that use cross-fitting in the construction of the average moments. This theory is based on the structure of LR moment conditions as an identifying moment condition depending on one first step plus an influence adjustment that can depend on an additional first step. We give a remainder decomposition that leads to mean square consistency conditions for first steps plus a few readily interpretable rate conditions. For DR estimators there is only one rate condition, on a product of sample remainders from two first step estimators, leading to particularly simple conditions. This simplicity motivates our inclusion of results for DR estimators. This asymptotic theory is also useful for existing moment conditions that are already known to be LR. 
Whenever the moment condition can be decomposed into an identifying moment condition depending on one first step and an influence function term that may depend on two first steps, the simple and general regularity conditions developed here will apply. LR moments reduce the smoothing bias that results from first step nonparametric estimation relative to original moment conditions. There are other sources of bias arising from nonlinearity of moment conditions in the first step and the empirical distribution. Cattaneo and Jansson (2017) and Cattaneo, Jansson, and Ma (2017) give useful bootstrap and jackknife methods that reduce nonlinearity bias. Newey and Robins (2017) show that one can also remove this bias by cross-fitting in some settings. We allow for cross-fitting in this paper. Section 2 describes the general construction of LR moment functions for semiparametric GMM. Section 3 gives LR moment conditions for dynamic discrete choice. Section 4 shows how to estimate the first step influence adjustment. Section 5 gives novel classes of DR moment functions and characterizes double robustness. Section 6 gives an orthogonal instrument construction of LR moments based on conditional moment restrictions. Section 7 provides simple and general asymptotic theory for LR estimators. Locally Robust Moment Functions =============================== The subject of this paper is GMM estimators of parameters where the sample moment functions depend on a first step nonparametric or large dimensional estimator. We refer to these estimators as semiparametric. We could also refer to them as GMM where first step estimators are plugged in the moments. This terminology seems awkward though, so we simply refer to them as semiparametric GMM estimators. We denote such an estimator by $\hat{\beta}$, which is a function of the data $z_{1},...,z_{n}$ where $n$ is the number of observations. Throughout the paper we will assume that the data observations $z_{i}$ are i.i.d.
We denote the object that $\hat{\beta}$ estimates as $\beta_{0}$, the subscript referring to the parameter value under the distribution $F_{0}$ of $z_{i}$. To describe semiparametric GMM let $m(z,\beta,\gamma)$ denote an $r\times1$ vector of functions of the data observation $z,$ parameters of interest $\beta$, and a function $\gamma$ that may be vector valued. The function $\gamma$ can depend on $\beta$ and $z$ through those arguments of $m.$ Here the function $\gamma$ represents some possible first step, such as an estimator, its limit, or a true function. A GMM estimator can be based on a moment condition where $\beta_{0}$ is the unique parameter vector satisfying$$E[m(z_{i},\beta_{0},\gamma_{0})]=0, \label{moments}$$ and $\gamma_{0}$ is the true $\gamma$. We assume that this moment condition identifies $\beta.$ Let $\hat{\gamma}$ denote some first step estimator of $\gamma_{0}$. Plugging in $\hat{\gamma}$ to obtain $m(z_{i},\beta,\hat{\gamma })$ and averaging over $z_{i}$ results in the estimated sample moments $\hat{m}(\beta)=\sum_{i=1}^{n}m(z_{i},\beta,\hat{\gamma})/n.$ For $\hat{W}$ a positive semi-definite weighting matrix a semiparametric GMM estimator is$$\tilde{\beta}=\arg\min_{\beta\in B}\hat{m}(\beta)^{T}\hat{W}\hat{m}(\beta),$$ where $A^{T}$ denotes the transpose of a matrix $A$ and $B$ is the parameter space for $\beta$. Such estimators have been considered by, e.g. Andrews (1994), Newey (1994a), Newey and McFadden (1994), Pakes and Olley (1995), Chen and Liao (2015), and others. Locally robust (LR) moment functions can be constructed by adding the influence function adjustment for the first step estimator $\hat{\gamma}$ to the identifying or original moment functions $m(z,\beta,\gamma).$ To describe this influence adjustment let $\gamma(F)$ denote the limit of $\hat{\gamma}$ when $z_{i}$ has distribution $F,$ where we restrict $F$ only in that $\gamma(F)$ exists and possibly other regularity conditions are satisfied. 
That is, $\gamma(F)$ is the limit of $\hat{\gamma}$ under possible misspecification, similar to Newey (1994). Let $G$ be some other distribution and $F_{\tau}=(1-\tau)F_{0}+\tau G$ for $0\leq\tau\leq1,$ where $F_{0}$ denotes the true distribution of $z_{i}.$ We assume that $G$ is chosen so that $\gamma(F_{\tau})$ is well defined for $\tau>0$ small enough and possibly other regularity conditions are satisfied, similarly to Ichimura and Newey (2017). The influence function adjustment will be the function $\phi(z,\beta,\gamma,\lambda)$ such that for all such $G,$$$\frac{d}{d\tau}E[m(z_{i},\beta,\gamma(F_{\tau}))]=\int\phi(z,\beta,\gamma_{0},\lambda_{0})G(dz),\quad E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})]=0, \label{infdef}$$ where $\lambda$ is an additional nonparametric or large dimensional unknown object on which $\phi(z,\beta,\gamma,\lambda)$ depends and the derivative is from the right (i.e. for positive values of $\tau$) and at $\tau=0.$ This equation is the well known definition of the influence function $\phi(z,\beta,\gamma_{0},\lambda_{0})$ of $\mu(F)=E[m(z_{i},\beta,\gamma(F))]$ as the Gateaux derivative of $\mu(F),$ e.g. Huber (1981). The restriction of $G$ so that $\gamma(F_{\tau})$ exists allows $\phi(z,\beta,\gamma_{0},\lambda_{0})$ to be the influence function when $\gamma(F)$ is only well defined for certain types of distributions, such as when $\gamma(F)$ is a conditional expectation or density. The function $\phi(z,\beta,\gamma,\lambda)$ will generally exist when $E[m(z_{i},\beta,\gamma(F))]$ has a finite semiparametric variance bound. Also $\phi(z,\beta,\gamma,\lambda)$ will generally be unique because we are not restricting $G$ very much. Note also that $\phi(z,\beta,\gamma,\lambda)$ will be the influence adjustment term from Newey (1994a), as discussed in Ichimura and Newey (2017).
LR moment functions can be constructed by adding $\phi(z,\beta,\gamma ,\lambda)$ to $m(z,\beta,\gamma)$ to obtain new moment functions$$\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda). \label{momadj}$$ Let $\hat{\lambda}$ be a nonparametric or large dimensional estimator having limit $\lambda(F)$ when $z_{i}$ has distribution $F,$ with $\lambda (F_{0})=\lambda_{0}.$ Also let $\hat{\psi}(\beta)=\sum_{i=1}^{n}\psi (z_{i},\beta,\hat{\gamma},\hat{\lambda})/n.$ A LR GMM estimator can be obtained as$$\hat{\beta}=\arg\min_{\beta\in B}\hat{\psi}(\beta)^{T}\hat{W}\hat{\psi}(\beta). \label{lrgmm}$$ As usual a choice of $\hat{W}$ that minimizes the asymptotic variance of $\sqrt{n}(\hat{\beta}-\beta_{0})$ will be a consistent estimator of the inverse of the asymptotic variance $\Omega$ of $\sqrt{n}\hat{\psi}(\beta _{0}).$ As we will further discuss, $\psi(z,\beta,\gamma,\lambda)$ being LR will mean that the estimation of $\gamma$ and $\lambda$ does not affect $\Omega$, so that $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{T}].$ An optimal $\hat{W}$ also gives an efficient estimator in the wider sense shown in Ackerberg, Chen, Hahn, and Liao (2014), making $\hat{\beta}$ efficient in a semiparametric model where the only restrictions imposed are equation (\[moments\]). The LR property we consider is that the derivative of the true expectation of the moment function with respect to the first step is zero, for a Gateaux derivative like that for the influence function in equation (\[infdef\]). Define $F_{\tau}=(1-\tau)F_{0}+\tau G$ as before where $G$ is such that both $\gamma(F_{\tau})$ and $\lambda(F_{\tau})$ are well defined. The LR property is that for all $G$ as specified,$$\frac{d}{d\tau}E[\psi(z_{i},\beta,\gamma(F_{\tau}),\lambda(F_{\tau}))]=0. 
\label{lrdef}$$ Note that this condition is the same as that of Newey (1994a) for the presence of $\hat{\gamma}$ and $\hat{\lambda}$ to have no effect on the asymptotic distribution, when each $F_{\tau}$ is a regular parametric submodel. Consequently, the asymptotic variance of $\sqrt{n}\hat{\psi}(\beta_{0})$ will be $\Omega$ as in the last paragraph. To show LR of the moment functions $\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda)$ from equation (\[momadj\]) we use the fact that the second, zero expectation condition in equation (\[infdef\]) must hold for all possible true distributions. For any given $\beta$ define $\mu(F)=E[m(z_{i},\beta,\gamma(F))]$ and $\phi(z,F)=\phi(z,\beta,\gamma(F),\lambda(F)).$ <span style="font-variant:small-caps;">Theorem 1:</span> *If i)* $d\mu(F_{\tau})/d\tau=\int\phi(z,F_{0})G(dz)$*, ii)* $\int\phi(z,F_{\tau})F_{\tau}(dz)=0$ *for all* $\tau\in\lbrack0,\bar{\tau}),$ *and iii)* $\int\phi(z,F_{\tau})F_{0}(dz)$ *and* $\int\phi(z,F_{\tau})G(dz)$ *are continuous at* $\tau=0$ *then*$$\frac{d}{d\tau}E[\phi(z_{i},F_{\tau})]=-\frac{d\mu(F_{\tau})}{d\tau}. \label{thm1con}$$ The proofs of this result and others are given in Appendix B. Assumptions i) and ii) of Theorem 1 require that both parts of equation (\[infdef\]) hold with the second, zero mean condition being satisfied when $F_{\tau}$ is the true distribution. Assumption iii) is a regularity condition. The LR property follows from Theorem 1 by adding $d\mu(F_{\tau})/d\tau$ to both sides of equation (\[thm1con\]) and noting that the sum of derivatives is the derivative of the sum. Equation (\[thm1con\]) shows that the addition of $\phi(z,\beta,\gamma,\lambda)$ “partials out” the effect of the first step $\gamma$ on the moment by “cancelling” the derivative of the identifying moment $E[m(z_{i},\beta,\gamma(F_{\tau}))]$ with respect to $\tau$.
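A minimal scalar sketch (our own toy example, not the paper's: $\gamma(F)=E_{F}[z]$, $m(z,\beta,\gamma)=\gamma^{2}-\beta$, and $\phi(z,\gamma)=2\gamma(z-\gamma)$) shows this cancellation numerically: the plug-in moment has a nonzero derivative along $F_{\tau}$ while the adjusted moment's derivative vanishes, as equation (\[lrdef\]) requires:

```python
# Hypothetical scalar illustration (not the paper's model): gamma(F) = E_F[z],
# m(z, beta, gamma) = gamma**2 - beta, influence adjustment phi(z, gamma) =
# 2*gamma*(z - gamma), so psi = m + phi.  Along F_tau = (1-tau)*F0 + tau*G
# the first step is gamma_tau = (1-tau)*gamma0 + tau*gammaG.

gamma0, gammaG, beta = 1.5, 2.3, 0.0

def gamma_tau(tau):
    return (1 - tau) * gamma0 + tau * gammaG

def mbar(tau):
    # E_{F0}[m(z, beta, gamma(F_tau))]: depends on tau only through gamma_tau
    return gamma_tau(tau)**2 - beta

def psibar(tau):
    # E_{F0}[psi]: add E_{F0}[phi(z, gamma_tau)] = 2*gamma_tau*(gamma0 - gamma_tau)
    g = gamma_tau(tau)
    return g**2 - beta + 2 * g * (gamma0 - g)

eps = 1e-6
d_m = (mbar(eps) - mbar(0.0)) / eps       # nonzero: plug-in moment is sensitive
d_psi = (psibar(eps) - psibar(0.0)) / eps # approximately zero: the LR property

print(d_m, d_psi)
```

Here the adjustment contributes the derivative $-d\mu(F_{\tau})/d\tau$ exactly as in the theorem, so the two first-order effects offset.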
This LR result for $\psi(z,\beta,\gamma,\lambda)$ differs from the literature in its Gateaux derivative formulation and in the fact that it is not a semiparametric influence function but is the hybrid sum of an identifying moment function $m(z,\beta,\gamma)$ and an influence function adjustment $\phi(z,\beta ,\gamma,\lambda).$ Another zero derivative property of LR moment functions is useful. If the sets $\Gamma$ and $\Lambda$ of possible limits $\gamma(F)$ and $\lambda(F)$, respectively, are linear, $\gamma(F)$ and $\lambda(F)$ can vary separately from one another, and certain functional differentiability conditions hold then LR moment functions will have the property that for any $\gamma\in\Gamma $, $\lambda\in\Lambda$, and $\bar{\psi}(\gamma,\lambda)=E[\psi(z_{i},\beta _{0},\gamma,\lambda)]$, $$\frac{\partial}{\partial\tau}\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma ,\lambda_{0})=0,\frac{\partial}{\partial\tau}\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)=0. \label{lrdef2}$$ That is, the expected value of the LR moment function will have a zero Gateaux derivative with respect to each of the first steps $\gamma$ and $\lambda.$ This property will be useful for several results to follow. Under still stronger smoothness conditions this zero derivative condition will result in the existence of a constant $C$ such that for a function norm $\left\Vert \cdot\right\Vert $,$$\left\vert \bar{\psi}(\gamma,\lambda_{0})\right\vert \leq C\left\Vert \gamma-\gamma_{0}\right\Vert ^{2},\text{ }\left\vert \bar{\psi}(\gamma _{0},\lambda)\right\vert \leq C\left\Vert \lambda-\lambda_{0}\right\Vert ^{2}, \label{nlremainder}$$ when $\left\Vert \gamma-\gamma_{0}\right\Vert $ and $\left\Vert \lambda -\lambda_{0}\right\Vert $ are small enough. In Appendix B we give smoothness conditions that are sufficient for LR to imply equations (\[lrdef2\]) and (\[nlremainder\]). 
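The quadratic remainder in equation (\[nlremainder\]) can be seen exactly in a scalar toy example (ours, with no $\lambda$ component): with $\gamma(F)=E_{F}[z]$, $m(z,\beta,\gamma)=\gamma^{2}-\beta$, and $\phi(z,\gamma)=2\gamma(z-\gamma)$, the expected LR moment at a fixed $\gamma$ is $-(\gamma-\gamma_{0})^{2}$, so the bound holds with $C=1$:

```python
# Hypothetical scalar illustration (not from the paper).  With beta0 = gamma0**2,
# the expected LR moment at a fixed first step value g is
#   psibar(g) = g**2 - beta0 + 2*g*(gamma0 - g) = -(g - gamma0)**2,
# i.e. the remainder is exactly quadratic in g - gamma0, never linear.

gamma0 = 1.5
beta0 = gamma0**2

def psibar(g):
    # E_{F0}[m + phi] evaluated at the (possibly wrong) first step value g
    return g**2 - beta0 + 2 * g * (gamma0 - g)

for dev in [0.1, 0.05, 0.01]:
    print(dev, psibar(gamma0 + dev))   # equals -dev**2: quadratic shrinkage
```

Halving the first-step error quarters the moment bias, which is the practical content of the local insensitivity property.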
When formulating regularity conditions for particular moment functions and first step estimators it may be more convenient to work directly with equation (\[lrdef2\]) and/or (\[nlremainder\]). The approach of constructing LR moment functions by adding the influence adjustment differs from the model based approach of using an efficient influence function or score for a semiparametric model as moment functions. The approach here is *estimator based* rather than model based. The influence adjustment $\phi(z,\beta,\gamma,\lambda)$ is determined by the limit $\gamma(F)$ of the first step estimator $\hat{\gamma}$ and the moment functions $m(z,\beta,\gamma)$ rather than by some underlying semiparametric model. This estimator based approach has proven useful for deriving the influence function of a wide variety of semiparametric estimators, as mentioned in the Introduction. Here this estimator based approach provides a general way to construct LR moment functions. For any moment function $m(z,\beta,\gamma)$ and first step estimator $\hat{\gamma}$ a corresponding LR estimator can be constructed as in equations (\[momadj\]) and (\[lrgmm\]). The addition of $\phi(z,\beta,\gamma,\lambda)$ does not affect identification of $\beta$ because $\phi(z,\beta,\gamma_{0},\lambda_{0})$ has expectation zero for any $\beta$ and true $F_{0}.$ Consequently, the LR GMM estimator will have the same asymptotic variance as the original GMM estimator $\tilde{\beta}$ when $\sqrt{n}(\tilde{\beta}-\beta_{0})$ is asymptotically normal, under appropriate regularity conditions. The addition of $\phi(z,\beta,\gamma,\lambda)$ will change other properties of the estimator. As discussed in Chernozhukov et al. (2017, 2018), it can even remove enough bias so that the LR estimator is root-n consistent and the original estimator is not. If $F_{\tau}$ is modified so that $\tau$ is a function of a smoothing parameter, e.g.
a bandwidth, and $\tau$ gives the magnitude of the smoothing bias of $\gamma(F_{\tau}),$ then equation (\[lrdef\]) is a small bias condition, equivalent to$$E[\psi(z_{i},\beta_{0},\gamma(F_{\tau}),\lambda(F_{\tau}))]=o(\tau).$$ Here $E[\psi(z_{i},\beta_{0},\gamma(F_{\tau}),\lambda(F_{\tau}))]$ is a bias in the moment condition resulting from smoothing that shrinks faster than $\tau.$ In this sense LR GMM estimators have the small bias property considered in NHR. This interpretation is also one sense in which LR GMM is “debiased.” In some cases the original moment functions $m(z,\beta,\gamma)$ are already LR and the influence adjustment will be zero. An important class of moment functions that are LR are those where $m(z,\beta,\gamma)$ is the derivative with respect to $\beta$ of an objective function where nonparametric parts have been concentrated out. That is, suppose that there is a function $q(z,\beta,\zeta)$ such that $m(z,\beta,\gamma)=\partial q(z,\beta,\zeta (\beta))/\partial\beta$ where $\zeta(\beta)=\arg\max_{\zeta}E[q(z_{i},\beta,\zeta)]$, where $\gamma$ includes $\zeta(\beta)$ and possibly additional functions. Proposition 2 of Newey (1994a) and Lemma 2.5 of Chernozhukov et al. (2018) then imply that $m(z,\beta,\gamma)$ will be LR. This class of moment functions includes various partially linear regression models where $\zeta$ represents a conditional expectation. It also includes the efficient score for a semiparametric model, Newey (1994a, pp. 1358-1359). Cross fitting, also known as sample splitting, has often been used to improve the properties of semiparametric and machine learning estimators; e.g. see Bickel (1982), Schick (1986), and Powell, Stock, and Stoker (1989). Cross fitting removes a source of bias and can be used to construct estimators with remainder terms that converge to zero as fast as is known to be possible, as in NHR and Newey and Robins (2017). 
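As a minimal sketch of the sample-splitting mechanics (only the cross fitting, not the LR adjustment; the data generating process and estimator names are our own illustration), the following estimates $\beta_{0}=E[\gamma_{0}(x_{i})]$ with $\gamma_{0}(x)=E[y|x]$ fit out-of-fold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: beta0 = E[gamma0(x)] with gamma0(x) = E[y|x] = 2*x
# and x ~ N(1,1), so beta0 = 2.  For each fold I_l, the first step (a no-intercept
# least squares slope) is fit on observations OUTSIDE I_l and evaluated INSIDE it.

n, L = 2000, 5
x = rng.normal(1.0, 1.0, size=n)
y = 2 * x + rng.normal(size=n)

folds = np.array_split(rng.permutation(n), L)
psi_sum = 0.0
for I_l in folds:
    out = np.setdiff1d(np.arange(n), I_l)          # observations not in I_l
    slope = (x[out] @ y[out]) / (x[out] @ x[out])  # first step fit out-of-fold
    psi_sum += np.sum(slope * x[I_l])              # moment evaluated in-fold
beta_hat = psi_sum / n

print(beta_hat)   # near 2; the point is the in-fold/out-of-fold split
```

The own-observation bias that cross fitting removes comes from evaluating $\hat{\gamma}$ at the same observations used to compute it; the split above rules that out by construction.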
Cross fitting is also useful for double machine learning estimators, as outlined in Chernozhukov et al. (2017, 2018). For these reasons we allow for cross-fitting, where sample moments have the form$$\hat{\psi}(\beta)=\frac{1}{n}\sum_{i=1}^{n}\psi(z_{i},\beta,\hat{\gamma}_{i},\hat{\lambda}_{i}),$$ with $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ being formed from observations other than the $i^{th}.$ This kind of cross fitting removes an “own observation” bias term and is useful for showing root-n consistency when $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ are machine learning estimators. One version of cross-fitting with good properties in examples in Chernozhukov et al. (2018) can be obtained by partitioning the observation indices into $L$ groups $I_{\ell},(\ell=1,...,L),$ forming $\hat{\gamma}_{\ell}$ and $\hat{\lambda}_{\ell}$ from observations not in $I_{\ell}$, and constructing$$\hat{\psi}(\beta)=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in I_{\ell}}\psi (z_{i},\beta,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell}). \label{cfit}$$ Further bias reductions may be obtained in some cases by using different sets of observations for computing $\hat{\gamma}_{\ell}$ and $\hat{\lambda}_{\ell},$ leading to remainders that converge to zero as rapidly as is known to be possible in interesting cases; see Newey and Robins (2017). The asymptotic theory of Section 7 focuses on this kind of cross fitting. As an example we consider a bound on average equivalent variation. Let $\gamma_{0}(x)$ denote the conditional expectation of quantity $q$ given $x=(p^{T},y)$ where $p=(p_{1},p_{2}^{T})^{T}$ is a vector of prices and $y$ is income$.$ The object of interest is a bound on average equivalent variation for a price change from $\bar{p}_{1}$ to $\check{p}_{1}$ given by$$\beta_{0}=E[\int\ell(p_{1},y_{i})\gamma_{0}(p_{1},p_{2i},y_{i})dp_{1}],\ell(p_{1},y)=w(y)1(\bar{p}_{1}\leq p_{1}\leq\check{p}_{1})\exp\{-B(p_{1}-\bar{p}_{1})\},$$ where $w(y)$ is a function of income and $B$ a constant.
It follows by Hausman and Newey (2016) that if $B$ is a lower (upper) bound on the income effect for all individuals then $\beta_{0}$ is an upper (lower) bound on the equivalent variation for a price change from $\bar{p}_{1}$ to $\check{p}_{1},$ averaged over heterogeneity, other prices $p_{2i},$ and income $y_{i}$. The function $w(y)$ allows for averages over income in specific ranges, as in Hausman and Newey (2017). A moment function that could be used to estimate $\beta_{0}$ is$$m(z,\beta,\gamma)=\int\ell(p_{1},y)\gamma(p_{1},p_{2},y)dp_{1}-\beta.$$ Note that $$E[m(z_{i},\beta_{0},\gamma)]+\beta_{0}=E[\int\ell(p_{1},y_{i})\gamma (p_{1},p_{2i},y_{i})dp_{1}]=E[\lambda_{0}(x_{i})\gamma(x_{i})],\lambda _{0}(x)=\frac{\ell(p_{1},y)}{f_{0}(p_{1}|p_{2},y)},$$ where $f_{0}(p_{1}|p_{2},y)$ is the conditional pdf of $p_{1i}$ given $p_{2i}$ and $y_{i}$. Then by Proposition 4 of Newey (1994) the influence function adjustment for any nonparametric estimator $\hat{\gamma}(x)$ of $E[q_{i}|x_{i}=x]$ is$$\phi(z,\beta,\gamma,\lambda)=\lambda(x)[q-\gamma(x)].$$ Here $\lambda_{0}(x)$ is an example of an additional unknown function that is included in $\phi(z,\beta,\gamma,\lambda)$ but not in the original moment functions $m(z,\beta,\gamma)$. Let $\hat{\gamma}_{i}(x)$ be an estimator of $E[q_{i}|x_{i}=x]$ that can depend on $i$ and $\hat{\lambda}_{i}(x)$ be an estimator of $\lambda_{0}(x)$, such as $\hat{f}_{i}(p_{1}|p_{2},y)^{-1}\ell(p_{1},y)$ for an estimator $\hat{f}_{i}(p_{1}|p_{2},y).$ The LR estimator obtained by solving $\hat{\psi}(\beta)=0$ for $m(z,\beta,\gamma)$ and $\phi(z,\beta,\gamma,\lambda)$ as above is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\left\{ \int\ell(p_{1},y_{i})\hat {\gamma}_{i}(p_{1},p_{2i},y_{i})dp_{1}+\hat{\lambda}_{i}(x_{i})[q_{i}-\hat{\gamma}_{i}(x_{i})]\right\} . 
\label{exlr}$$

Machine Learning for Dynamic Discrete Choice
============================================

A challenging problem when estimating dynamic structural models is the dimensionality of state spaces. Machine learning addresses this problem via model selection to estimate high dimensional choice probabilities. These choice probability estimators can then be used in conditional choice probability (CCP) estimators of structural parameters, following Hotz and Miller (1993). In order for CCP estimators based on machine learning to be root-n consistent they must be based on orthogonal (i.e. LR) moment conditions, see Chernozhukov et al. (2017, 2018). Adding the adjustment term provides the way to construct LR moment conditions from known moment conditions for CCP estimators. In this Section we do so for Rust’s (1987) model of dynamic discrete choice. We consider an agent choosing among $J$ discrete alternatives by maximizing the expected present discounted value of utility. We assume that the per-period utility function for an agent making choice $j$ in period $t$ is given by$$U_{jt}=u_{j}(x_{t},\beta_{0})+\epsilon_{jt},(j=1,...,J;t=1,2,...).$$ The vector $x_{t}$ contains the observed state variables of the problem (*e.g.* work experience, number of children, wealth) and the vector $\beta$ contains unknown parameters. The disturbances $\epsilon_{t}=\{\epsilon_{1t},...,\epsilon_{Jt}\}$ are not observed by the econometrician. As in much of the literature we assume that $\epsilon_{t}$ is i.i.d. over time with known CDF that has support $R^{J},$ is independent of $x_{t},$ and that $x_{t}$ is first-order Markov.
To describe the agent’s choice probabilities let $\delta$ denote a time discount parameter, $\bar{v}(x)$ the expected value function, $y_{jt}\in\{0,1\}$ the indicator that choice $j$ is made and $\bar{v}_{j}(x_{t})=u_{j}(x_{t},\beta_{0})+\delta E[\bar{v}(x_{t+1})|x_{t},j]$ the expected value function conditional on choice $j.$ As in Rust (1987), we assume that in each period the agent makes the choice $j$ that maximizes the expected present discounted value of utility $\bar{v}_{j}(x_{t})+\epsilon _{jt}.$ The probability of choosing $j$ in period $t$ is then$$P_{j}(\bar{v}_{t})=\Pr(\bar{v}_{j}(x_{t})+\epsilon_{jt}\geq\bar{v}_{k}(x_{t})+\epsilon_{kt};k=1,...,J),\bar{v}_{t}=(\bar{v}_{1}(x_{t}),...,\bar {v}_{J}(x_{t}))^{\prime}. \label{choice prob}$$ These choice probabilities have a useful relationship to the structural parameters $\beta$ when there is a renewal choice, where the conditional distribution of $x_{t+1}$ given the renewal choice and $x_{t}$ does not depend on $x_{t}.$ Without loss of generality suppose that the renewal choice is $j=1.$ Let $\tilde{v}_{jt}$ denote $\tilde{v}_{j}(x_{t})=\bar{v}_{j}(x_{t})-\bar{v}_{1}(x_{t}),$ so that $\tilde{v}_{1t}\equiv0$. 
As usual, subtracting $\bar{v}_{1t}$ from each $\bar{v}_{jt}$ in $P_{j}(\bar{v}_{t})$ does not change the choice probabilities, so that they depend only on $\tilde{v}_{t}=(\tilde{v}_{2t},...,\tilde{v}_{Jt}).$ The renewal nature of $j=1$ leads to a specific formula for $\tilde{v}_{jt}$ in terms of the per period utilities $u_{jt}=u_{j}(x_{t},\beta_{0})$ and the choice probabilities $P_{t}=P(\tilde{v}_{t})=(P_{1}(\bar{v}_{t}),...P_{J}(\bar{v}_{t}))^{\prime}.$ As in Hotz and Miller (1993), there is a function $\mathcal{P}^{-1}(P)$ such that $\tilde{v}_{t}=\mathcal{P}^{-1}(P_{t}).$ Let $H(P)$ denote the function such that $$H(P_{t})=E[\max_{1\leq j\leq J}\{\mathcal{P}^{-1}(P_{t})_{j}+\epsilon _{jt}\}|x_{t}]=E[\max_{1\leq j\leq J}\{\tilde{v}_{jt}+\epsilon_{jt}\}|x_{t}].$$ For example, for multinomial logit $H(P_{t})=.5772-\ln(P_{1t}).$ Note that by $j=1$ being a renewal we have $E[\bar{v}_{t+1}|x_{t},1]=C$ for a constant $C$, so that$$\bar{v}(x_{t})=\bar{v}_{1t}+H(P_{t})=u_{1t}+\delta C+H(P_{t}).$$ It then follows that$$\bar{v}_{jt}=u_{jt}+\delta E[\bar{v}(x_{t+1})|x_{t},j]=u_{jt}+\delta E[u_{1,t+1}+H(P_{t+1})|x_{t},j]+\delta^{2}C,(j=1,...,J).$$ Subtracting then gives$$\tilde{v}_{jt}=u_{jt}-u_{1t}+\delta\{E[u_{1,t+1}+H(P_{t+1})|x_{t},j]-E[u_{1,t+1}+H(P_{t+1})|1]\}. \label{value}$$ This expression for the choice specific value function $\tilde{v}_{jt}$ depends only on $u_{j}(x_{t},\beta),$ $H(P_{t+1})$, and conditional expectations given the state and choice, and so can be used to form semiparametric moment functions. 
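For the multinomial logit case mentioned above, the inversion $\mathcal{P}^{-1}$ and the surplus function $H$ have closed forms: $\tilde{v}_{j}=\ln P_{j}-\ln P_{1}$ and $H(P)=.5772-\ln(P_{1})$. A short sketch (function names are ours) verifies the round trip numerically:

```python
import numpy as np

# Sketch of the multinomial logit case (assuming i.i.d. type-I extreme value
# epsilon): choice probabilities P(v), the Hotz-Miller inversion P^{-1}(P),
# and the surplus function H(P) = .5772 - ln(P_1) cited in the text.

def choice_probs(v_tilde):
    # v_tilde = (v_2,...,v_J) relative to the renewal choice, so v_1 = 0
    v = np.concatenate(([0.0], v_tilde))
    e = np.exp(v - v.max())          # max-shift for numerical stability
    return e / e.sum()

def hm_inverse(P):
    # Hotz-Miller inversion: recovers (v_2,...,v_J) from the probabilities
    return np.log(P[1:]) - np.log(P[0])

def H(P):
    # E[max_j {v_j + eps_j}] for logit, with Euler's constant ~= .5772
    return 0.5772 - np.log(P[0])

v = np.array([0.4, -0.7, 1.2])
P = choice_probs(v)
print(hm_inverse(P))                 # recovers v
print(H(P))                          # equals .5772 + log-sum-exp of (0, v)
```

The last identity holds because $P_{1}=1/\sum_{k}\exp(\tilde{v}_{k})$ with $\tilde{v}_{1}=0$, so $-\ln P_{1}$ is exactly the log-sum-exp term in the expected maximum.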
To describe those moment functions let $\gamma_{1}(x)$ denote the vector of possible values of the choice probabilities $E[y_{t}|x_{t}=x],$ where $y_{t}=(y_{1t},...,y_{Jt})^{\prime}.$ Also let $\gamma_{j}(x_{t},\beta ,\gamma_{1}),(j=2,...,J)$ denote a possible $E[u_{1}(x_{t+1},\beta )+H(\gamma_{1}(x_{t+1}))|x_{t},j]$ as a function of $\beta$, $x_{t}$ and $\gamma_{1},$ and $\gamma_{J+1}(\beta,\gamma_{1})$ a possible value of $E[u_{1}(x_{t},\beta)+H(\gamma_{1}(x_{t+1}))|1].$ Then a possible value of $\tilde{v}_{jt}$ is given by $$\tilde{v}_{j}(x_{t},\beta,\gamma)=u_{j}(x_{t},\beta)-u_{1}(x_{t},\beta )+\delta\lbrack\gamma_{j}(x_{t},\beta,\gamma_{1})-\gamma_{J+1}(\beta ,\gamma_{1})],(j=2,...,J).$$ These value function differences are semiparametric, depending on the function $\gamma_{1}$ of choice probabilities and the conditional expectations $\gamma_{j}$, $(j=2,...,J).$ Let $\tilde{v}(x_{t},\beta,\gamma)=(\tilde{v}_{2}(x_{t},\beta,\gamma),...,\tilde{v}_{J}(x_{t},\beta,\gamma))^{\prime}$ and $A(x_{t})$ denote a matrix of functions of $x_{t}$ with $J$ columns. Semiparametric moment functions are given by$$m(z,\beta,\gamma)=A(x)[y-P(\tilde{v}(x,\beta,\gamma))].$$ LR moment functions can be constructed by adding the adjustment term for the presence of the first step $\gamma.$ This adjustment term is derived in Appendix A. It takes the form $$\phi(z,\beta,\gamma,\lambda)=\sum_{j=1}^{J+1}\phi_{j}(z,\beta,\gamma ,\lambda),$$ where $\phi_{j}(z,\beta,\gamma,\lambda)$ is the adjustment term for $\gamma_{j}$ holding all other components $\gamma$ fixed at their true values. 
To describe it define$$\begin{aligned} P_{\tilde{v}j}(\tilde{v}) & =\partial P(\tilde{v})/\partial\tilde{v}_{j},\text{ }\pi_{1}=\Pr(y_{t1}=1),\text{ }\lambda_{10}(x)=E[y_{1t}|x_{t+1}=x],\label{ddcdef}\\ \lambda_{j0}(x) & =E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}=x],(j=2,...,J).\nonumber\end{aligned}$$ Then for $w_{t}=x_{t+1}$ and $z=(y,x,w)$ let$$\begin{aligned} \phi_{1}(z,\beta,\gamma,\lambda) & =-\delta\left( \sum_{j=2}^{J}\{\lambda_{j}(x)-E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})]\pi_{1}^{-1}\lambda_{1}(x)\}\right) [\partial H(\gamma_{1}(x))/\partial P]^{\prime }\{y-\gamma_{1}(x)\}\\ \phi_{j}(z,\beta,\gamma,\lambda) & =-\delta A(x)P_{\tilde{v}j}(\tilde {v}(x,\beta,\gamma))\frac{y_{j}}{P_{j}(\tilde{v}(x,\beta,\gamma))}\{u_{1}(w,\beta)+H(\gamma_{1}(w))-\gamma_{j}(x,\beta,\gamma_{1})\},(j=2,...,J),\\ \phi_{J+1}(z,\beta,\gamma,\lambda) & =\delta\left( \sum_{j=2}^{J}E[A(x_{t})P_{\tilde{v}j}(\tilde{v}(x_{t},\beta,\gamma))]\right) \pi_{1}^{-1}y_{1}\{u_{1}(w,\beta)+H(\gamma_{1}(w))-\gamma_{J+1}(\beta,\gamma_{1})\}.\end{aligned}$$ <span style="font-variant:small-caps;">Theorem 2:</span> *If the marginal distribution of* $x_{t}$ *does not vary with* $t$ *then LR moment functions for the dynamic discrete choice model are*$$\psi(z,\beta,\gamma)=A(x_{t})[y_{t}-P(\tilde{v}(x_{t},\beta,\gamma ))]+\sum_{j=1}^{J+1}\phi_{j}(z,\beta,\lambda).$$ The form of $\psi(z,\beta,\gamma)$ is amenable to machine learning. A machine learning estimator of the conditional choice probability vector $\gamma _{10}(x)$ is straightforward to compute and can then be used throughout the construction of the orthogonal moment conditions everywhere $\gamma_{1}$ appears. 
If $u_{1}(x,\beta)$ is linear in $x,$ say $u_{1}(x,\beta)=x_{1}^{\prime}\beta_{1}$ for subvectors $x_{1}$ and $\beta_{1}$ of $x$ and $\beta$ respectively, then machine learning estimators can be used to obtain $\hat{E}[x_{1,t+1}|x_{t},j]$ and $\hat{E}[\hat{H}_{t+1}|x_{t},j],$ $(j=2,...,J),$ and a sample average used to form $\hat{\gamma}_{J+1}(\beta,\hat{\gamma}_{1})$. The value function differences can then be estimated as$$\tilde{v}_{j}(x_{t},\beta,\hat{\gamma})=u_{j}(x_{t},\beta)-u_{1}(x_{t},\beta)+\delta\{\hat{E}[x_{1,t+1}|x_{t},j]^{\prime}\beta_{1}-\hat{E}[x_{1,t+1}|1]^{\prime}\beta_{1}+\hat{E}[\hat{H}_{t+1}|x_{t},j]-\hat{E}[\hat{H}_{t+1}|1]\}.$$ Furthermore, denominator problems can be avoided by using structural probabilities (rather than the machine learning estimators) in all denominator terms. The challenging part of the machine learning for this estimator is the dependence on $\beta$ of the reverse conditional expectations in $\lambda_{1}(x)$. It may be computationally prohibitive and possibly unstable to redo machine learning for each $\beta.$ One way to deal with this complication is to update $\beta$ periodically, with more frequent updates near convergence. It is important that at convergence the $\beta$ in the reverse conditional expectations is the same as the $\beta$ that appears elsewhere. With data $z_{i}$ that is i.i.d. over individuals these moment functions can be used for any $t$ to estimate the structural parameters $\beta.$ Also, for data for a single individual we could use a time average $\sum_{t=1}^{T-1}\psi(z_{t},\beta,\gamma)/(T-1)$ to estimate $\beta.$ It will be just as important to use LR moments for estimation with a single individual as it is with a cross section of individuals, although our asymptotic theory will not apply to that case. Bajari, Chernozhukov, Hong, and Nekipelov (2009) derived the influence adjustment for dynamic discrete games of imperfect information.
Locally robust moment conditions for such games could be formed using their results. We leave that formulation to future work. As an example of the finite sample performance of the LR GMM we report a Monte Carlo study of the LR estimator of this Section. The design of the experiment is loosely like the bus replacement application of Rust (1987). Here $x_{t}$ is a state variable meant to represent the lifetime of a bus engine. The state transition is$$x_{t+1}=\left\{ \begin{array}[c]{ll}x_{t}+N(.25,1)^{2}, & y_{t}=1,\\ 1+N(.25,1)^{2}, & y_{t}=0,\end{array}\right.$$ where $y_{t}=0$ corresponds to replacement of the bus engine and $y_{t}=1$ to nonreplacement. We assume that the agent chooses $y_{t}$ contingent on state to maximize$$\sum_{t=1}^{\infty}\delta^{t-1}[y_{t}(\alpha\sqrt{x_{t}}+\varepsilon_{t})+(1-y_{t})RC],\alpha=-.3,RC=-4.$$ The unconditional probability of replacement in this model is about $1/8,$ which is substantially higher than that estimated in Rust (1987). The sample used for estimation was $1000$ observations for a single decision maker. We carried out $10,000$ replications. We estimate the conditional choice probabilities by kernel and series nonparametric regression and by logit lasso, random forest, and boosted tree machine learning methods. Logit conditional choice probabilities and derivatives were used in the construction of $\hat{\lambda}_{j}$ wherever they appear in order to avoid denominator issues. The unknown conditional expectations in the $\hat{\lambda}_{j}$ were estimated by series regressions throughout. Kernel regression was also tried but did not work particularly well and so results are not reported. Table 1 reports the bias, standard deviation, and coverage probability of asymptotic 95 percent confidence intervals for each estimator.
Table 1

|                 | Bias $\alpha$ | Bias RC | Std Dev $\alpha$ | Std Dev RC | Coverage $\alpha$ | Coverage RC |
|-----------------|---------------|---------|------------------|------------|-------------------|-------------|
| Two step kernel | -.24          | .08     | .08              | .32        | .01               | .86         |
| LR kernel       | -.05          | .02     | .06              | .32        | .95               | .92         |
| Two step quad   | -.00          | .14     | .049             | .33$^{\ast}$ | .91             | .89         |
| LR quad         | -.00          | .01     | .085             | .39        | .95               | .92         |
| Logit Lasso     | -.12          | .25     | .06              | .28        | .74               | .84         |
| LR Logit Lasso  | -.09          | .01     | .08              | .36        | .93               | .95         |
| Random Forest   | -.15          | -.44    | .09              | .50        | .91               | .98         |
| LR Ran. For.    | .00           | .00     | .06              | .44        | 1.0               | .98         |
| Boosted Trees   | -.10          | -.28    | .08              | .50        | .99               | .99         |
| LR Boost Tr.    | .03           | .09     | .07              | .47        | .99               | .97         |

Here we find bias reduction from the LR estimator in all cases. We also find variance reduction from LR estimation when the first step is kernel estimation, random forests, and boosted trees. The LR estimator also leads to actual coverage of confidence intervals being closer to the nominal coverage. The results for random forests and boosted trees seem noisier than the others, with higher standard deviations and confidence interval coverage probabilities farther from nominal. Overall, we find substantial improvements from using LR moments rather than only the identifying, original moments.

Estimating the Influence Adjustment
===================================

Construction of LR moment functions requires an estimator $\hat{\phi}(z,\beta)$ of the adjustment term. The form of $\phi(z,\beta,\gamma,\lambda)$ is known for some cases from the semiparametric estimation literature. Powell, Stock, and Stoker (1989) derived the adjustment term for density weighted average derivatives. Newey (1994a) gave the adjustment term for mean square projections (including conditional expectations), densities, and their derivatives. Hahn (1998) and Hirano, Imbens, and Ridder (2003) used those results to obtain the adjustment term for treatment effect estimators, where the LR estimator will be the doubly robust estimator of Robins, Rotnitzky, and Zhao (1994, 1995).
Bajari, Hong, Krainer, and Nekipelov (2010) and Bajari, Chernozhukov, Hong, and Nekipelov (2009) derived adjustment terms in some game models. Hahn and Ridder (2013, 2016) derived adjustments in models with generated regressors including control functions. These prior results can be used to obtain LR estimators by adding the adjustment term with nonparametric estimators plugged in. For new cases it may be necessary to derive the form of the adjustment term. Also, it is possible to numerically estimate the adjustment term based on series estimators and other nonparametric estimators. In this Section we describe how to construct estimators of the adjustment term in these ways. Deriving the Formula for the Adjustment Term -------------------------------------------- One approach to estimating the adjustment term is to derive a formula for $\phi(z,\beta,\gamma,\lambda)$ and then plug in $\hat{\gamma}$ and $\hat{\lambda}$ in that formula$.$ A formula for $\phi(z,\beta,\gamma ,\lambda)$ can be obtained as in Newey (1994a). Let $\gamma(F)$ be the limit of the nonparametric estimator $\hat{\gamma}$ when $z_{i}$ has distribution $F.$ Also, let $F_{\tau}$ denote a regular parametric model of distributions with $F_{\tau}=F_{0}$ at $\tau=0$ and score (derivative of the log likelihood at $\tau=0)$ equal to $S(z)$. Then under certain regularity conditions $\phi(z,\beta,\gamma_{0},\lambda_{0})$ will be the unique solution to$$\left. \frac{\partial\int m(z,\beta,\gamma(F_{\tau}))F_{0}(dz)}{\partial\tau }\right\vert _{\tau=0}=E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})S(z_{i})],E[\phi(z_{i},\beta,\gamma_{0},\lambda_{0})]=0, \label{funeq}$$ as $\{F_{\tau}\}$ and the corresponding score $S(z)$ are allowed to vary over a family of parametric models where the set of scores for the family has mean square closure that includes all mean zero functions with finite variance. 
Equation (\[funeq\]) is a functional equation that can be solved to find the adjustment term, as was done in many of the papers cited in the previous paragraph. The influence adjustment can be calculated by taking a limit of the Gateaux derivative as shown in Ichimura and Newey (2017). Let $\gamma(F)$ be the limit of $\hat{\gamma}$ when $F$ is the true distribution of $z_{i}$, as before. Let $G_{z}^{h}$ be a family of distributions that approaches a point mass at $z$ as $h\longrightarrow0.$ If $\phi(z_{i},\beta,\gamma_{0},\lambda_{0})$ is continuous in $z_{i}$ with probability one then$$\phi(z,\beta,\gamma_{0},\lambda_{0})=\lim_{h\longrightarrow0}\left( \left. \frac{\partial E[m(z_{i},\beta,\gamma(F_{\tau}^{h}))]}{\partial\tau }\right\vert _{\tau=0}\right) ,F_{\tau}^{h}=(1-\tau)F_{0}+\tau G_{z}^{h}. \label{derlim}$$ This calculation is more constructive than equation (\[funeq\]) in the sense that the adjustment term here is a limit of a derivative rather than the solution to a functional equation. In Sections 5 and 6 we use those results to construct LR estimators when the first step is a nonparametric instrumental variables (NPIV) estimator. With a formula for $\phi(z,\beta,\gamma,\lambda)$ in hand from either solving the functional equation in equation (\[funeq\]) or from calculating the limit of the derivative in equation (\[derlim\]), one can estimate the adjustment term by plugging estimators $\hat{\gamma}$ and $\hat{\lambda}$ into $\phi(z,\beta,\gamma,\lambda).$ This approach to estimating LR moments can be used to construct LR moments for the average surplus described near the end of Section 2. There the adjustment term depends on the conditional density of $p_{1i}$ given $p_{2i}$ and $y_{i}$.
Let $\hat{f}_{\ell}(p_{1}|p_{2},y)$ be some estimator of the conditional pdf of $p_{1i}$ given $p_{2i}$ and $y_{i}.$ Plugging that estimator into the formula for $\lambda_{0}(x)$ gives $\hat{\lambda}_{\ell}(x)=\frac{\ell(p_{1},y)}{\hat{f}_{\ell}(p_{1}|p_{2},y)}.$ This $\hat{\lambda}_{\ell}(x)$ can then be used in equation (\[exlr\]).

Estimating the Influence Adjustment for First Step Series Estimators
--------------------------------------------------------------------

Estimating the adjustment term is relatively straightforward when the first step is a series estimator. The adjustment term can be estimated by treating the first step estimator as if it were parametric and applying a standard formula for the adjustment term for parametric two-step estimators. Suppose that $\hat{\gamma}_{\ell}$ depends on the data through a $K\times1$ vector $\hat{\zeta}_{\ell}$ of parameter estimators that has true value $\zeta_{0}$. Let $m(z,\beta,\zeta)$ denote $m(z,\beta,\gamma)$ as a function of $\zeta.$ Suppose that there is a $K\times1$ vector of functions $h(z,\zeta)$ such that $\hat{\zeta}_{\ell}$ satisfies$$\frac{1}{\sqrt{\bar{n}_{\ell}}}\sum_{i\in\bar{I}_{\ell}}h(z_{i},\hat{\zeta }_{\ell})=o_{p}(1),$$ where $\bar{I}_{\ell}$ is a subset of observations, none of which are included in $I_{\ell},$ and $\bar{n}_{\ell}$ is the number of observations in $\bar{I}_{\ell}.$ Then a standard calculation for parametric two-step estimators (e.g.
Newey, 1984, and Murphy and Topel, 1985) gives the parametric adjustment term$$\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})=\hat{\Psi}_{\ell}(\beta)h(z_{i},\hat{\zeta}_{\ell}),\hat{\Psi}_{\ell}(\beta)=-\sum_{j\in\bar {I}_{\ell}}\frac{\partial m(z_{j},\beta,\hat{\zeta}_{\ell})}{\partial\zeta }\left( \sum_{j\in\bar{I}_{\ell}}\frac{\partial h(z_{j},\hat{\zeta}_{\ell})}{\partial\zeta}\right) ^{-1},i\in I_{\ell}.$$ In many cases $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ approximates the true adjustment term $\phi(z,\beta,\gamma_{0},\lambda_{0}),$ as shown by Newey (1994a, 1997) and Ackerberg, Chen, and Hahn (2012) for estimating the asymptotic variance of functions of series estimators. Here this approximation is used for estimation of $\beta$ instead of just for variance estimation. The estimated LR moment function will be$$\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})=m(z_{i},\beta ,\hat{\zeta}_{\ell})+\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell}). \label{lr series}$$ We note that if $\hat{\zeta}_{\ell}$ were computed from the whole sample then $\hat{\phi}(\beta)=0$. This degeneracy does not occur when cross-fitting is used, which removes “own observation” bias and is important for first step machine learning estimators, as noted in Section 2. We can apply this approach to construct LR moment functions for an estimator of the average surplus bound example that is based on series regression. Here the first step estimator of $\gamma_{0}(x)=E[q_{i}|x_{i}=x]$ will be that from an ordinary least squares regression of $q_{i}$ on a vector $a(x_{i})$ of approximating functions. The corresponding $m(z,\beta,\zeta)$ and $h(z,\zeta)$ are$$m(z,\beta,\zeta)=A(x)^{\prime}\zeta-\beta,h(z,\zeta)=a(x)[q-a(x)^{\prime}\zeta],A(x)=\int\ell(p_{1},y)a(p_{1},p_{2},y)dp_{1}.$$ Let $\hat{\zeta}_{\ell}$ denote the least squares coefficients from regressing $q_{i}$ on $a(x_{i})$ for observations that are not included in $I_{\ell}$.
Then the estimator of the locally robust moments given in equation (\[lr series\]) is $$\begin{aligned} \psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell}) & =A(x_{i})^{\prime }\hat{\zeta}_{\ell}-\beta+\hat{\Psi}_{\ell}a(x_{i})[q_{i}-a(x_{i})^{\prime }\hat{\zeta}_{\ell}],\\ \hat{\Psi}_{\ell} & =\sum_{j\in\bar{I}_{\ell}}A(x_{j})^{\prime}\left( \sum_{j\in\bar{I}_{\ell}}a(x_{j})a(x_{j})^{\prime}\right) ^{-1}.\end{aligned}$$ It can be shown similarly to Newey (1994a, p. 1369) that $\hat{\Psi}_{\ell}$ estimates the population least squares coefficients from a regression of $\lambda_{0}(x_{i})$ on $a(x_{i}),$ so that $\hat{\lambda}_{\ell}(x_{i})=\hat{\Psi}_{\ell}a(x_{i})$ estimates $\lambda_{0}(x_{i}).$ In comparison the LR estimator described in the previous subsection was based on an explicit nonparametric estimator of $f_{0}(p_{1}|p_{2},y),$ while this $\hat{\lambda }_{\ell}(x)$ implicitly estimates the inverse of that pdf via a mean-square approximation of $\lambda_{0}(x_{i})$ by $\hat{\Psi}_{\ell}a(x_{i}).$ Chernozhukov, Newey, and Robins (2018) introduce machine learning methods for choosing the functions to include in the vector $A(x)$. This method can be combined with machine learning methods for estimating $E[q_{i}|x_{i}]$ to construct a double machine learning estimator of average surplus, as shown in Chernozhukov, Hausman, and Newey (2018). In parametric models moment functions like those in equation (\[lr series\]) are used to “partial out” nuisance parameters $\zeta.$ For maximum likelihood these moment functions are the basis of Neyman’s (1959) C-alpha test. Wooldridge (1991) generalized such moment conditions to nonlinear least squares and Lee (2005), Bera et al. (2010), and Chernozhukov et al. (2015) to GMM. 
What is novel here is their use in the construction of semiparametric estimators and the interpretation of the estimated LR moment functions $\psi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$ as the sum of an original moment function $m(z_{i},\beta,\hat{\zeta}_{\ell})$ and an influence adjustment $\phi(z_{i},\beta,\hat{\zeta}_{\ell},\hat{\Psi}_{\ell})$.

Estimating the Influence Adjustment with First Step Smoothing
-------------------------------------------------------------

The adjustment term can be estimated in a general way that allows for kernel density, locally linear regression, and other kernel smoothing estimators for the first step. The idea is to differentiate with respect to the effect of the $i^{th}$ observation on sample moments. Newey (1994b) used a special case of this approach to estimate the asymptotic variance of a functional of a kernel based semiparametric or nonparametric estimator. Here we extend this method to a wider class of first step estimators, such as locally linear regression, and apply it to estimate the adjustment term for construction of LR moments. We will describe this estimator for the case where $\gamma$ is a vector of functions of a vector of variables $x.$ Let $h(z,x,\gamma)$ be a vector of functions of a data observation $z$, $x$, and a possible realized value of $\gamma$ (i.e. a vector of real numbers $\gamma$). Also let $\hat{h}_{\ell}(x,\gamma)=\sum_{j\in\bar{I}_{\ell}}h(z_{j},x,\gamma)/\bar{n}_{\ell}$ be a sample average over a set of observations $\bar{I}_{\ell}$ not included in $I_{\ell},$ where $\bar{n}_{\ell}$ is the number of observations in $\bar{I}_{\ell}.$ We assume that the first step estimator $\hat{\gamma}_{\ell}(x)$ solves$$0=\hat{h}_{\ell}(x,\gamma).$$ We suppress the dependence of $h$ and $\hat{\gamma}$ on a bandwidth.
For example for a pdf $\kappa(u)$ a kernel density estimator would correspond to $h(z_{j},x,\gamma)=\kappa(x-x_{j})-\gamma$ and a locally linear regression would be $\hat{\gamma}_{1}(x)$ for$$h(z_{j},x,\gamma)=\kappa(x-x_{j})\left( \begin{array} [c]{c}1\\ x-x_{j}\end{array} \right) [y_{j}-\gamma_{1}-(x-x_{j})^{\prime}\gamma_{2}].$$ To measure the effect of the $i^{th}$ observation on $\hat{\gamma}$ let $\hat{\gamma}_{\ell i}^{\xi}(x)$ be the solution to $$0=\hat{h}_{\ell}(x,\gamma)+\xi\cdot h(z_{i},x,\gamma).$$ This $\hat{\gamma}_{\ell i}^{\xi}(x)$ is the value of the function obtained from adding the contribution $\xi\cdot h(z_{i},x,\gamma)$ of the $i^{th}$ observation. An estimator of the adjustment term can be obtained by differentiating the average of the original moment function with respect to $\xi$ at $\xi=0.$ This procedure leads to an estimated locally robust moment function given by$$\psi(z_{i},\beta,\hat{\gamma}_{\ell})=m(z_{i},\beta,\hat{\gamma}_{\ell })+\left. \frac{\partial}{\partial\xi}\frac{1}{\bar{n}_{\ell}}\sum_{j\in \bar{I}_{\ell}}m(z_{j},\beta,\hat{\gamma}_{\ell i}^{\xi}(\cdot))\right\vert _{\xi=0}.$$ This estimator is a generalization of the influence function estimator for kernels in Newey (1994b). Double and Partial Robustness ============================= The zero derivative condition in equation (\[lrdef\]) is an appealing robustness property in and of itself. A zero derivative means that the expected moment functions remain closer to zero than $\tau$ as $\tau$ varies away from zero. This property can be interpreted as local insensitivity of the moments to the value of $\gamma$ being plugged in, with the moments remaining close to zero as $\gamma$ varies away from its true value. Because it is difficult to get nonparametric functions exactly right, especially in high dimensional settings, this property is an appealing one. 
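For the kernel density case the perturbed solution is available in closed form, $\hat{\gamma}^{\xi}(x)=[\hat{\gamma}(x)+\xi\kappa(x-x_{i})]/(1+\xi)$, with derivative $\kappa(x-x_{i})-\hat{\gamma}(x)$ at $\xi=0$. A small sketch checking that derivative against a finite difference (for simplicity the whole sample stands in for $\bar{I}_{\ell}$; all numerical choices are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 200, 0.3
xs = rng.standard_normal(n)

def kappa(u):   # Gaussian kernel with bandwidth h (bandwidth dependence suppressed in the text)
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def gamma_xi(x, i, xi):
    # solves 0 = h_bar(x, gamma) + xi * h(z_i, x, gamma) for h(z_j, x, gamma) = kappa(x - x_j) - gamma
    return (kappa(x - xs).mean() + xi * kappa(x - xs[i])) / (1.0 + xi)

x0, i, xi = 0.5, 7, 1e-6
finite_diff = (gamma_xi(x0, i, xi) - gamma_xi(x0, i, 0.0)) / xi
analytic = kappa(x0 - xs[i]) - kappa(x0 - xs).mean()   # d gamma_xi / d xi at xi = 0
print(finite_diff, analytic)   # agree
```

In practice one would apply the same finite-difference (or analytic) derivative to the averaged original moment function, as in the display above.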
Such robustness considerations, well explained in Robins and Rotnitzky (2001), have motivated the development of doubly robust (DR) moment conditions. DR moment conditions have expectation zero even if one of the two first stage components is incorrect. DR moment conditions allow two chances for the moment conditions to hold, an appealing robustness feature. Also, DR moment conditions have simpler conditions for asymptotic normality than general LR moment functions as discussed in Section 7. Because many interesting LR moment conditions are also DR we consider double robustness. LR moments that are constructed by adding the adjustment term for first step estimation provide candidates for DR moment functions. The derivative of the expected moments with respect to each first step will be zero, a necessary condition for DR. The condition for moments constructed in this way to be DR is the following: <span style="font-variant:small-caps;">Assumption 1:</span> *There are sets* $\Gamma$ *and* $\Lambda$ *such that for all* $\gamma\in\Gamma$ *and* $\lambda\in\Lambda$$$E[m(z_{i},\beta_{0},\gamma)]=-E[\phi(z_{i},\beta_{0},\gamma,\lambda_{0})],E[\phi(z_{i},\beta_{0},\gamma_{0},\lambda)]=0.$$ This condition is just the definition of DR for the moment function $\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda)$, pertaining to specific sets $\Gamma$ and $\Lambda.$ The construction of adding the adjustment term to an identifying or original moment function leads to several novel classes of DR moment conditions. One such class has a first step that satisfies a conditional moment restriction$$E[y_{i}-\gamma_{0}(w_{i})|x_{i}]=0, \label{cmrlin}$$ where $w_{i}$ is potentially endogenous and $x_{i}$ is a vector of instrumental variables. This condition is the nonparametric instrumental variable (NPIV) restriction as in Newey and Powell (1989, 2003) and Newey (1991).
A first step conditional expectation where $\gamma_{0}(x_{i})=E[y_{i}|x_{i}]$ is included as a special case with $w_{i}=x_{i}.$ Ichimura and Newey (2017) showed that the adjustment term for this step takes the form $\phi(z,\gamma,\lambda)=\lambda(x)[y-\gamma(w)]$ so $m(z,\beta,\gamma)+\lambda(x)[y-\gamma(w)]$ is a candidate for a DR moment function. A sufficient condition for DR is: <span style="font-variant:small-caps;">Assumption 2:</span> *i) Equation (\[cmrlin\]) is satisfied; ii)* $\Lambda=\{\lambda(x):E[\lambda(x_{i})^{2}]<\infty\}$ *and* $\Gamma=\{\gamma(w):E[\gamma(w_{i})^{2}]<\infty\};$ *iii) there is* $v(w)$ *with* $E[v(w_{i})^{2}]<\infty$ *such that* $E[m(z_{i},\beta_{0},\gamma)]=E[v(w_{i})\{\gamma(w_{i})-\gamma_{0}(w_{i})\}]$ *for all* $\gamma\in\Gamma$*; iv) there is* $\lambda_{0}(x)$ *such that* $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$*; and v)* $E[y_{i}^{2}]<\infty.$ By the Riesz representation theorem condition iii) is necessary and sufficient for $E[m(z_{i},\beta_{0},\gamma)]$ to be a mean square continuous functional of $\gamma$ with representer $v(w).$ Condition iv) is an additional condition giving continuity in the reduced form difference $E[\gamma(w_{i})-\gamma_{0}(w_{i})|x_{i}]$, as further discussed in Ichimura and Newey (2017). Under this condition$$\begin{aligned} E[m(z_{i},\beta_{0},\gamma)] & =E[E[\lambda_{0}(x_{i})|w_{i}]\{\gamma(w_{i})-\gamma_{0}(w_{i})\}]=E[\lambda_{0}(x_{i})\{\gamma(w_{i})-\gamma_{0}(w_{i})\}]\\ & =-E[\phi(z_{i},\gamma,\lambda_{0})],\text{ \ }E[\phi(z_{i},\gamma_{0},\lambda)]=E[\lambda(x_{i})\{y_{i}-\gamma_{0}(w_{i})\}]=0.\end{aligned}$$ Thus Assumption 2 implies Assumption 1 so that we have <span style="font-variant:small-caps;">Theorem 3:</span> *If Assumption 2 is satisfied then* $m(z,\beta,\gamma)+\lambda(x)\{y-\gamma(w)\}$ *is doubly robust.* There are many interesting, novel examples of DR moment conditions that are special cases of Theorem 3.
The average surplus bound is an example where $y_{i}=q_{i},$ $w_{i}=x_{i},$ $x_{i}$ is the observed vector of prices and income, $\Lambda=\Gamma$ is the set of all measurable functions of $x_{i}$ with finite second moment, and $\gamma_{0}(x)=E[y_{i}|x_{i}=x].$ Let $x_{1}$ denote $p_{1}$ and $x_{2}$ the vector of other prices and income, so that $x=(x_{1},x_{2}^{\prime})^{\prime}$. Also let $f_{0}(x_{1}|x_{2})$ denote the conditional pdf of $p_{1}$ given $x_{2}$ and $\ell(x)=\ell(p_{1},y)$ for income $y$. Let $m(z,\beta,\gamma)=\int\ell(p_{1},x_{2})\gamma(p_{1},x_{2})dp_{1}-\beta$ as before. Multiplying and dividing through by $f_{0}(p_{1}|x_{2})$ gives, for all $\gamma,\lambda\in\Gamma$ and $\lambda_{0}(x)=f_{0}(x_{1}|x_{2})^{-1}\ell(x),$ $$E[m(z_{i},\beta_{0},\gamma)]=E[\int\ell(p_{1},x_{2i})\gamma(p_{1},x_{2i})dp_{1}]-\beta_{0}=E[E[\lambda_{0}(x_{i})\gamma(x_{i})|x_{2i}]]-\beta_{0}=E[\lambda_{0}(x_{i})\{\gamma(x_{i})-\gamma_{0}(x_{i})\}].$$ Theorem 3 then implies that the LR moment function for average surplus $m(z,\beta,\gamma)+\lambda(x)[q-\gamma(x)]$ is DR. A corresponding DR estimator $\hat{\beta}$ is given in equation (\[exlr\]). The surplus bound is an example of a parameter where $\beta_{0}=E[g(z_{i},\gamma_{0})]$ for some linear functional $g(z,\gamma)$ of $\gamma$ and for $\gamma_{0}$ satisfying the conditional moment restriction of equation (\[cmrlin\]). For the surplus bound $g(z,\gamma)=\int\ell(p_{1},x_{2})\gamma(p_{1},x_{2})dp_{1}.$ If Assumption 2 is satisfied then, choosing $m(z,\beta,\gamma)=g(z,\gamma)-\beta$, a DR moment condition is $g(z,\gamma)-\beta+\lambda(x)[y-\gamma(w)].$ A corresponding DR estimator is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{g(z_{i},\hat{\gamma}_{i})+\hat{\lambda}_{i}(x_{i})[y_{i}-\hat{\gamma}_{i}(w_{i})]\}, \label{drlin}$$ where $\hat{\gamma}_{i}(w)$ and $\hat{\lambda}_{i}(x)$ are estimators of $\gamma_{0}(w)$ and $\lambda_{0}(x)$ respectively.
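The double robustness of equation (\[drlin\]) is easy to see numerically in the exogenous case $w_{i}=x_{i}$ with a known weight. A Monte Carlo sketch under hypothetical choices $g(z,\gamma)=v(x)\gamma(x)$ with $v(x)=1$ (so $\lambda_{0}=v$ and $\beta_{0}=E[\gamma_{0}(x_{i})]$), using deliberately misspecified nuisance functions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
x = rng.uniform(0.0, 1.0, n)
y = x ** 2 + 0.1 * rng.standard_normal(n)   # gamma_0(x) = x^2, so beta_0 = E[x^2] = 1/3

def dr_estimate(gamma, lam):
    # equation (drlin) with w = x:  mean of g(z, gamma) + lambda(x) [y - gamma(x)]
    return np.mean(gamma(x) + lam(x) * (y - gamma(x)))

gamma_true = lambda t: t ** 2
gamma_bad = lambda t: 0.5 * t            # deliberately misspecified regression
lam_true = lambda t: np.ones_like(t)     # lambda_0 = v = 1
lam_bad = lambda t: 5.0 * t              # deliberately misspecified representer

plugin_bad = np.mean(gamma_bad(x))               # plug-in with bad gamma: biased
dr_bad_gamma = dr_estimate(gamma_bad, lam_true)  # bad gamma, true lambda: consistent
dr_bad_lambda = dr_estimate(gamma_true, lam_bad) # true gamma, bad lambda: consistent
print(plugin_bad, dr_bad_gamma, dr_bad_lambda)
```

Only the plug-in estimator is thrown off; either correct nuisance alone repairs the moment, as Theorem 3 asserts.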
An estimator $\hat{\gamma}_{i}$ can be constructed by nonparametric regression when $w_{i}=x_{i}$ or NPIV in general. A series estimator $\hat{\lambda}_{i}(x)$ can be constructed similarly to the surplus bound example in Section 3.2. For $w_{i}=x_{i}$ Newey and Robins (2017) give such series estimators of $\hat{\lambda}_{i}(x)$ and Chernozhukov, Newey, and Robins (2018) show how to choose the approximating functions for $\hat{\lambda}_{i}(x_{i})$ by machine learning. Simple and general conditions for root-n consistency and asymptotic normality of $\hat{\beta}$ that allow for machine learning are given in Section 7. Novel examples of the DR estimator in equation (\[drlin\]) with $w_{i}=x_{i}$ are given by Newey and Robins (2017) and Chernozhukov, Newey, and Robins (2018). Also Appendix C provides a generalization to $\gamma(w)$ and $\lambda(x)$ that satisfy orthogonality conditions more general than conditional moment restrictions and novel examples of those. A novel example with $w_{i}\neq x_{i}$ is a weighted average derivative of $\gamma_{0}(w)$ satisfying equation (\[cmrlin\]). Here $g(z,\gamma)=\bar{v}(w)\partial\gamma(w)/\partial w$ for some weight function $\bar{v}(w)$. Let $f_{0}(w)$ be the pdf of $w_{i}$ and $v(w)=-f_{0}(w)^{-1}\partial\lbrack\bar{v}(w)f_{0}(w)]/\partial w,$ assuming that derivatives exist. Assume that $\bar{v}(w)\gamma(w)f_{0}(w)$ is zero on the boundary of the support of $w_{i}.$ Integration by parts then gives Assumption 2 iii). Assume also that there exists $\lambda_{0}\in\Lambda$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}].$ Then for estimators $\hat{\gamma}_{i}$ and $\hat{\lambda}_{i}$ a DR estimator of the weighted average derivative is$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{\bar{v}(w_{i})\frac{\partial\hat{\gamma}_{i}(w_{i})}{\partial w}+\hat{\lambda}_{i}(x_{i})[y_{i}-\hat{\gamma}_{i}(w_{i})]\}.$$ This is a DR version of the weighted average derivative estimator of Ai and Chen (2007).
A special case of this example is the DR moment condition for the weighted average derivative in the exogenous case where $w_{i}=x_{i}$ given in Firpo and Rothe (2017). Theorem 3 includes existing DR moment functions as special cases where $w_{i}=x_{i}$, including the mean with randomly missing data given by Robins and Rotnitzky (1995), the class of DR estimators in Robins et al. (2008), and the DR estimators of Firpo and Rothe (2017). We illustrate for the mean with missing data. Let $w=x,$ $x=(a,u)$ for an observed data indicator $a\in\{0,1\}$ and covariates $u,$ $m(z,\beta,\gamma)=\gamma(1,u)-\beta,$ and $\lambda_{0}(x)=a/\Pr(a_{i}=1|u_{i}=u).$ Here it is well known that $$E[m(z_{i},\beta_{0},\gamma)]=E[\gamma(1,u_{i})]-\beta_{0}=E[\lambda_{0}(x_{i})\{\gamma(x_{i})-\gamma_{0}(x_{i})\}]=-E[\lambda_{0}(x_{i})\{y_{i}-\gamma(x_{i})\}].$$ Then DR of the moment function $\gamma(1,u)-\beta+\lambda(x)[y-\gamma(x)]$ of Robins and Rotnitzky (1995) follows by Theorem 3. Another novel class of DR moment conditions is given by those where the first step $\gamma$ is a pdf of a function $x$ of the data observation $z.$ By Proposition 5 of Newey (1994a), the adjustment term for such a first step is $\phi(z,\beta,\gamma,\lambda)=\lambda(x)-\int\lambda(u)\gamma(u)du$ for some possible $\lambda$. A sufficient condition for DR as in Assumption 1 is: <span style="font-variant:small-caps;">Assumption 3:</span> $x_{i}$ *has pdf* $\gamma_{0}(x)$ *and for* $\Gamma=\{\gamma:\gamma(x)\geq0$, $\int\gamma(x)dx=1\}$ *there is* $\lambda_{0}(x)$ *such that for all* $\gamma\in\Gamma,$$$E[m(z_{i},\beta_{0},\gamma)]=\int\lambda_{0}(x)\{\gamma(x)-\gamma_{0}(x)\}dx.$$ Note that for $\phi(z,\gamma,\lambda)=\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}$ it follows from Assumption 3 that $E[m(z_{i},\beta_{0},\gamma)]=-E[\phi(z_{i},\gamma,\lambda_{0})]$ for all $\gamma\in\Gamma$.
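The missing-data moment function above is the familiar augmented inverse probability weighting construction, and its double robustness can be checked by simulation. A sketch (all distributions hypothetical; $y$ is used only where $a=1$):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
u = rng.uniform(0.0, 1.0, n)
pi_true = lambda u: 0.9 - 0.5 * u                    # propensity Pr(a=1|u), assumed here
a = (rng.uniform(0.0, 1.0, n) < pi_true(u)).astype(float)
y = 1.0 + u + 0.1 * rng.standard_normal(n)           # gamma_0(1,u) = 1 + u, y independent of a given u
beta_0 = 1.5                                          # E[gamma_0(1,u)] for u ~ U(0,1)

def dr_mean(gamma, pi):
    # solves gamma(1,u) - beta + (a/pi(u)) [y - gamma(1,u)] = 0 for beta in sample average
    return np.mean(gamma(u) + (a / pi(u)) * (y - gamma(u)))

est_bad_gamma = dr_mean(lambda u: 2.0 * u, pi_true)               # wrong regression, true propensity
est_bad_pi = dr_mean(lambda u: 1.0 + u, lambda u: 0.5 + 0.0 * u)  # true regression, wrong propensity
print(est_bad_gamma, est_bad_pi)   # both near beta_0 = 1.5
```

Either correctly specified nuisance alone delivers a consistent estimate of the mean, in line with Theorem 3.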
Also, $E[\phi(z_{i},\gamma_{0},\lambda)]=E[\lambda(x_{i})]-\int\lambda(\tilde{x})\gamma_{0}(\tilde{x})d\tilde{x}=0.$ Then Assumption 1 is satisfied so we have: <span style="font-variant:small-caps;">Theorem 4:</span> *If Assumption 3 is satisfied then* $m(z,\beta,\gamma)+\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}$ *is DR.* The integrated squared density $\beta_{0}=\int\gamma_{0}(x)^{2}dx$ is an example for $m(z,\beta,\gamma)=\gamma(x)-\beta,$ $\lambda_{0}=\gamma_{0},$ and $$\psi(z,\beta,\gamma,\lambda)=\gamma(x)-\beta+\lambda(x)-\int\lambda(\tilde{x})\gamma(\tilde{x})d\tilde{x}.$$ This DR moment function seems to be novel. Another example is the density weighted average derivative (DWAD) of Powell, Stock, and Stoker (1989), where $m(z,\beta,\gamma)=-2y\cdot\partial\gamma(x)/\partial x-\beta$. Let $\delta(x_{i})=E[y_{i}|x_{i}]\gamma_{0}(x_{i})$. Assuming that $\delta(\tilde{x})\gamma(\tilde{x})$ is zero on the boundary and differentiable, integration by parts gives$$E[m(z_{i},\beta_{0},\gamma)]=-2E[y_{i}\partial\gamma(x_{i})/\partial x]-\beta_{0}=2\int[\partial\delta(\tilde{x})/\partial x]\{\gamma(\tilde{x})-\gamma_{0}(\tilde{x})\}d\tilde{x},$$ so that Assumption 3 is satisfied with $\lambda_{0}(x)=2\partial\delta(x)/\partial x.$ Then by Theorem 4$$\hat{\beta}=\frac{1}{n}\sum_{i=1}^{n}\{-2y_{i}\frac{\partial\hat{\gamma}_{i}(x_{i})}{\partial x}+2\frac{\partial\hat{\delta}_{i}(x_{i})}{\partial x}-2\int\frac{\partial\hat{\delta}_{i}(\tilde{x})}{\partial x}\hat{\gamma}_{i}(\tilde{x})d\tilde{x}\}$$ is a DR estimator. It was shown in NHR (1998) that the Powell, Stock, and Stoker (1989) estimator with a twicing kernel is numerically equal to a leave-one-out version of this estimator for the original (before twicing) kernel. Thus the DR result for $\hat{\beta}$ gives an interpretation of the twicing kernel estimator as a DR estimator. The expectation of the DR moment functions of both Theorems 3 and 4 is affine in $\gamma$ and $\lambda$ holding the other fixed at the truth.
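For the integrated squared density the population DR identities can be verified by direct Monte Carlo, since the integrals $\int\lambda\gamma$ are available analytically in simple cases. A sketch with the (hypothetical) uniform truth $\gamma_{0}=1$ on $[0,1]$, so $\beta_{0}=\int\gamma_{0}^{2}=1$ and $\lambda_{0}=\gamma_{0}$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
x = rng.uniform(0.0, 1.0, n)   # x_i has pdf gamma_0 = 1 on [0, 1]
beta_0 = 1.0

# psi(z, beta, gamma, lambda) = gamma(x) - beta + lambda(x) - int lambda * gamma
# (1) wrong gamma(x) = 2x (still a density on [0,1]), true lambda_0 = 1:
#     int lambda_0 * gamma = int_0^1 2t dt = 1 (analytic)
psi_bad_gamma = np.mean(2.0 * x - beta_0 + 1.0 - 1.0)

# (2) true gamma_0 = 1, wrong lambda(x) = x:  int lambda * gamma_0 = int_0^1 t dt = 1/2
psi_bad_lambda = np.mean(1.0 - beta_0 + x - 0.5)

print(psi_bad_gamma, psi_bad_lambda)   # both near zero
```

Both averages are near zero because $E[2x_{i}]=1$ and $E[x_{i}]=1/2$ under the true density, exactly the two halves of Assumption 1.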
This property of DR moment functions is general, as we show by the following characterization of DR moment functions: <span style="font-variant:small-caps;">Theorem 5:</span> *If* $\Gamma$ *and* $\Lambda$ *are linear then* $\psi(z,\beta,\gamma,\lambda)$ *is DR if and only if* $$\left. \frac{\partial E[\psi(z_{i},\beta_{0},(1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})]}{\partial\tau}\right\vert _{\tau=0}=0,\left. \frac{\partial E[\psi(z_{i},\beta_{0},\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)]}{\partial\tau}\right\vert _{\tau=0}=0,$$ *and* $E[\psi(z_{i},\beta_{0},\gamma,\lambda_{0})]$ *and* $E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda)]$ *are affine in* $\gamma$ *and* $\lambda$ *respectively.* The zero derivative condition of this result is a componentwise Gateaux derivative version of LR. Thus, we can focus a search for DR moment conditions on those that are LR. Also, a DR moment function must have an expectation that is affine in each of $\gamma$ and $\lambda$ while the other is held fixed at the truth. It is sufficient for this condition that $\psi(z_{i},\beta_{0},\gamma,\lambda)$ be affine in each of $\gamma$ and $\lambda$ while the other is held fixed. This property can depend on how $\gamma$ and $\lambda$ are specified. For example the missing data DR moment function $\gamma(1,u)-\beta+\pi(u)^{-1}a[y-\gamma(x)]$ is not affine in the propensity score $\pi(u)=\Pr(a_{i}=1|u_{i}=u)$ but is in $\lambda(x)=\pi(u)^{-1}a$. In general Theorem 5 motivates the construction of DR moment functions by adding the adjustment term to obtain a LR moment function that will then be DR if it is affine in $\gamma$ and $\lambda$ separately.
It is interesting to note that in the NPIV setting of Theorem 3 and the density setting of Theorem 4 the adjustment term is always affine in $\gamma$ and $\lambda.$ It then follows from Theorem 5 that in those settings LR moment conditions are precisely those where $E[m(z_{i},\beta_{0},\gamma)]$ is affine in $\gamma.$ Robins and Rotnitzky (2001) gave conditions for the existence of DR moment conditions in semiparametric models. Theorem 5 is complementary to those results in giving a complete characterization of DR moments when $\Gamma$ and $\Lambda$ are linear. Assumptions 2 and 3 both specify that $E[m(z_{i},\beta_{0},\gamma)]$ is continuous in an integrated squared deviation norm. These continuity conditions are linked to finiteness of the semiparametric variance bound for the functional $E[m(z_{i},\beta_{0},\gamma)],$ as discussed in Newey and McFadden (1994) for Assumption 2 with $w_{i}=x_{i}$ and for Assumption 3. For Assumption 2 with $w_{i}\neq x_{i}$ Severini and Tripathi (2012) showed for $m(z,\beta,\gamma)=v(w)\gamma(w)-\beta$ with known $v(w)$ that the existence of $\lambda_{0}(x)$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ is necessary for the existence of a root-n consistent estimator of $\beta$. Thus the conditions of Assumption 2 are also linked to necessary conditions for root-n consistent estimation when $w_{i}\neq x_{i}.$ Partial robustness refers to settings where $E[m(z_{i},\beta_{0},\bar{\gamma})]=0$ for some $\bar{\gamma}\neq\gamma_{0}$. The novel DR moment conditions given here lead to novel partial robustness results as we now demonstrate in the conditional moment restriction setting of Assumption 2.
When $\lambda_{0}(x)$ in Assumption 2 is restricted in some way there may exist $\tilde{\gamma}\neq\gamma_{0}$ with $E[\lambda_{0}(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=0.$ Then$$E[m(z_{i},\beta_{0},\tilde{\gamma})]=-E[\lambda_{0}(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=0.$$ Consider the average derivative $\beta_{0}=E[\partial\gamma_{0}(w_{i})/\partial w_{r}]$ where $m(z,\beta,\gamma)=\partial\gamma(w)/\partial w_{r}-\beta$ for some $r.$ Let $\delta=(E[a(x_{i})p(w_{i})^{\prime}])^{-1}E[a(x_{i})y_{i}]$ be the limit of the linear IV estimator with right-hand side variables $p(w)$ and the same number of instruments $a(x).$ The following is a partial robustness result that provides conditions for the average derivative of the linear IV estimator to equal the true average derivative: <span style="font-variant:small-caps;">Theorem 6:</span> If $-\partial\ln f_{0}(w)/\partial w_{r}=c^{\prime}p(w)$ for a constant vector $c$, $E[p(w_{i})p(w_{i})^{\prime}]$ is nonsingular, and $E[a(x_{i})|w_{i}=w]=\Pi p(w)$ for a square nonsingular $\Pi$ then for $\delta=(E[a(x_{i})p(w_{i})^{\prime}])^{-1}E[a(x_{i})y_{i}],$$$E[\partial\{p(w_{i})^{\prime}\delta\}/\partial w_{r}]=E[\partial\gamma_{0}(w_{i})/\partial w_{r}].$$ This result shows that if the density score is a linear combination of the right-hand side variables $p(w)$ used by linear IV, the conditional expectation of the instruments $a(x_{i})$ given $w_{i}$ is a nonsingular linear combination of $p(w)$, and $p(w)$ has a nonsingular second moment matrix then the average derivative of the linear IV estimator is the true average derivative. This is a generalization to NPIV of Stoker’s (1986) result that linear regression coefficients equal the average derivatives when the regressors are multivariate Gaussian. DR moment conditions can be used to identify parameters of interest.
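Stoker's exogenous special case is simple to reproduce: for a standard Gaussian regressor the density score $-\partial\ln f_{0}(w)/\partial w=w$ is linear in $p(w)=(1,w)^{\prime}$, so the regression slope equals the average derivative. A sketch with the hypothetical nonlinear truth $\gamma_{0}(w)=w^{3}$ (so both quantities equal $E[3w^{2}]=3$):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40_000
w = rng.standard_normal(n)                   # Gaussian regressor: score -d ln f_0/dw = w
y = w ** 3 + 0.1 * rng.standard_normal(n)    # gamma_0(w) = w^3

p = np.column_stack([np.ones(n), w])
delta = np.linalg.lstsq(p, y, rcond=None)[0]  # linear regression coefficients
ols_slope = delta[1]
avg_deriv = np.mean(3.0 * w ** 2)             # sample analogue of E[d gamma_0(w)/dw]
print(ols_slope, avg_deriv)                   # both near 3
```

Both sample quantities are close to 3, matching the conclusion of Theorem 6 with $a(x)=p(w)$ and $\Pi=I$.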
Under Assumption 1 $\beta_{0}$ may be identified from$$E[m(z_{i},\beta_{0},\bar{\gamma})]=-E[\phi(z_{i},\beta_{0},\bar{\gamma},\lambda_{0})]$$ for any fixed $\bar{\gamma}$ when the solution $\beta_{0}$ to this equation is unique. <span style="font-variant:small-caps;">Theorem 7:</span> *If Assumption 1 is satisfied,* $\lambda_{0}$ *is identified, and for some* $\bar{\gamma}$ *the equation* $E[\psi(z_{i},\beta,\bar{\gamma},\lambda_{0})]=0$ *has a unique solution then* $\beta_{0}$ *is identified as that solution.* Applying this result to the NPIV setting of Assumption 2 gives an explicit formula for certain functionals of $\gamma_{0}(w)$ without requiring that the completeness identification condition of Newey and Powell (1989, 2003) be satisfied, similarly to Santos (2011). Suppose that $v(w)$ is identified, e.g. as for the weighted average derivative. Since both $w$ and $x$ are observed it follows that a solution $\lambda_{0}(x)$ to $v(w)=E[\lambda_{0}(x)|w]$ will be identified if such a solution exists. Plugging $\bar{\gamma}=0$ into the equation $E[\psi(z_{i},\beta_{0},\bar{\gamma},\lambda_{0})]=0$ gives <span style="font-variant:small-caps;">Corollary 8:</span> *If* $v(w_{i})$ *is identified and there exists* $\lambda_{0}(x_{i})$ *such that* $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ *then* $\beta_{0}=E[v(w_{i})\gamma_{0}(w_{i})]$ *is identified as* $\beta_{0}=E[\lambda_{0}(x_{i})y_{i}]$*.* Note that this result holds without the completeness condition. Identification of $\beta_{0}=E[v(w_{i})\gamma_{0}(w_{i})]$ for known $v(w_{i})$ with $v(w_{i})=E[\lambda_{0}(x_{i})|w_{i}]$ follows from Severini and Tripathi (2006). Corollary 8 extends that analysis to the case where $v(w_{i})$ is only identified but not necessarily known and links it to DR moment conditions. Santos (2011) gives a related formula for a parameter $\beta_{0}=\int\tilde{v}(w)\gamma_{0}(w)dw$. The formula here differs from Santos (2011) in being an expectation rather than a Lebesgue integral.
Santos (2011) constructed a corresponding estimator; doing so here is beyond the scope of this paper.

Conditional Moment Restrictions
===============================

Models of conditional moment restrictions that depend on unknown functions are important in econometrics. In such models the nonparametric components may be determined simultaneously with the parametric components. In this setting it is useful to work directly with the instrumental variables to obtain LR moment conditions rather than to make a first step influence adjustment. For that reason we focus in this Section on constructing LR moments by orthogonalizing the instrumental variables. Our orthogonal instruments framework is based on conditional moment restrictions of the form$$E[\rho_{j}(z_{i},\beta_{0},\gamma_{0})|x_{ji}]=0,(j=1,...,J), \label{cond mom restrict}$$ where each $\rho_{j}(z,\beta,\gamma)$ is a scalar residual and $x_{j}$ are instruments that may differ across $j$. This model is considered by Chamberlain (1992) and Ai and Chen (2003, 2007) when $x_{j}$ is the same for each $j$ and by Ai and Chen (2012) when the set of $x_{j}$ includes $x_{j-1}.$ We allow the residual vector $\rho(z,\beta,\gamma)$ to depend on the entire function $\gamma$ and not just on its value at some function of the observed data $z_{i}$. In this framework we consider LR moment functions having the form$$\psi(z,\beta,\gamma,\lambda)=\lambda(x)\rho(z,\beta,\gamma), \label{gcm}$$ where $\lambda(x)=[\lambda_{1}(x_{1}),...,\lambda_{J}(x_{J})]$ is a matrix of instrumental variables with the $j^{th}$ column given by $\lambda_{j}(x_{j}).$ We will define orthogonal instruments to be those that make $\psi(z,\beta,\gamma,\lambda)$ locally robust. To define orthogonal instrumental variables we assume that $\gamma$ is allowed to vary over a linear set $\Gamma$ as $F$ varies.
For each $\Delta\in\Gamma$ let$$\bar{\rho}_{\gamma}(x,\Delta)=\left( \left. \frac{\partial E[\rho_{1}(z_{i},\beta_{0},\gamma_{0}+\tau\Delta)|x_{1}]}{\partial\tau}\right\vert _{\tau=0},...,\left. \frac{\partial E[\rho_{J}(z_{i},\beta_{0},\gamma_{0}+\tau\Delta)|x_{J}]}{\partial\tau}\right\vert _{\tau=0}\right) ^{\prime}.$$ This $\bar{\rho}_{\gamma}(x,\Delta)$ is the Gateaux derivative with respect to $\gamma$ of the conditional expectation of the residuals in the direction $\Delta.$ We characterize $\lambda_{0}(x)$ as orthogonal if$$E[\lambda_{0}(x_{i})\bar{\rho}_{\gamma}(x_{i},\Delta)]=0\text{ for all }\Delta\in\Gamma.$$ We assume that $\bar{\rho}_{\gamma}(x,\Delta)$ is linear in $\Delta$ and consider the Hilbert space of vectors of random vectors $a(x)=(a_{1}(x_{1}),...,a_{J}(x_{J}))$ with inner product $\left\langle a,b\right\rangle =E[a(x_{i})^{\prime}b(x_{i})]$. Let $\bar{\Lambda}_{\gamma}$ denote the closure of the set $\{\bar{\rho}_{\gamma}(x,\Delta):\Delta\in\Gamma\}$ in that Hilbert space. Orthogonal instruments are those where each row of $\lambda_{0}(x)$ is orthogonal to $\bar{\Lambda}_{\gamma}.$ They can be interpreted as instrumental variables where the effect of estimation of $\gamma$ has been partialed out. When $\lambda_{0}(x)$ is orthogonal then $\psi(z,\beta,\gamma,\lambda)=\lambda(x)\rho(z,\beta,\gamma)$ is LR: <span style="font-variant:small-caps;">Theorem 9:</span> *If each row of* $\lambda_{0}(x)$ *is orthogonal to* $\bar{\Lambda}_{\gamma}$ *then the moment functions in equation (\[gcm\]) are LR.* We also have a DR result: <span style="font-variant:small-caps;">Theorem 10:</span> *If each row of* $\lambda_{0}(x)$ *is orthogonal to* $\bar{\Lambda}_{\gamma}$ *and* $\rho(z,\beta,\gamma)$ *is affine in* $\gamma\in\Gamma$ *then the moment functions in equation (\[gcm\]) are DR for* $\Lambda=\{\lambda(x):E[\Vert\lambda(x_{i})\rho(z_{i},\beta_{0},\gamma_{0})\Vert^{2}]<\infty\}.$ There are many ways to construct orthogonal instruments.
For instance, given an $r\times J$ matrix of instrumental variables $\lambda(x)$ one could construct corresponding orthogonal ones $\lambda_{0}(x)$ as the matrix where each row of $\lambda(x)$ is replaced by the residual from the least squares projection of the corresponding row of $\lambda(x)$ on $\bar{\Lambda}_{\gamma}$. For local identification of $\beta$ we also require that $$rank(\left. \partial E[\psi(z_{i},\beta,\gamma_{0},\lambda_{0})]/\partial\beta\right\vert _{\beta=\beta_{0}})=\dim(\beta). \label{local id beta}$$ A model where $\beta_{0}$ is identified from semiparametric conditional moment restrictions with common instrumental variables is a special case where $x_{ji}$ is the same for each $j$. In this case there is a way to construct orthogonal instruments that leads to an efficient estimator of $\beta_{0}$. Let $\Sigma(x_{i})$ denote some positive definite matrix with its smallest eigenvalue bounded away from zero, so that $\Sigma(x_{i})^{-1}$ is bounded. Let $\left\langle a,b\right\rangle _{\Sigma}=E[a(x_{i})^{\prime}\Sigma(x_{i})^{-1}b(x_{i})]$ denote an inner product and note that $\bar{\Lambda}_{\gamma}$ is closed in this inner product because $\Sigma(x_{i})^{-1}$ is bounded. Let $\tilde{\lambda}_{k}^{\Sigma}(x_{i},\lambda)$ denote the residual from the least squares projection of the $k^{th}$ row $\lambda\left( x\right) ^{\prime}e_{k}$ of $\lambda(x)$ on $\bar{\Lambda}_{\gamma}$ with the inner product $\left\langle a,b\right\rangle _{\Sigma}.$ Then for all $\Delta\in\Gamma,$ $$E[\tilde{\lambda}_{k}^{\Sigma}(x_{i},\lambda)^{\prime}\Sigma(x_{i})^{-1}\bar{\rho}_{\gamma}(x_{i},\Delta)]=0,$$ so that for $\tilde{\lambda}^{\Sigma}(x_{i},\lambda)=[\tilde{\lambda}_{1}^{\Sigma}(x_{i},\lambda),...,\tilde{\lambda}_{r}^{\Sigma}(x_{i},\lambda)]$ the instrumental variables $\tilde{\lambda}^{\Sigma}(x_{i},\lambda)\Sigma(x_{i})^{-1}$ are orthogonal.
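The projection step has a transparent finite-dimensional analogue: replace each instrument row by its least squares residual on a basis spanning (an approximation of) $\bar{\Lambda}_{\gamma}$, after which the sample orthogonality holds by the normal equations. A sketch with a hypothetical three-dimensional stand-in for the span:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
x = rng.standard_normal(n)

# Hypothetical finite-dimensional stand-in for bar{Lambda}_gamma:
# the span of a few directions rho_bar_gamma(x, Delta_k), here basis functions of x.
B = np.column_stack([np.ones(n), x, x ** 2])

lam = np.column_stack([np.sin(x), np.cos(x)])   # rows of a candidate lambda(x)

# Replace each candidate by the residual from its least squares projection on the span.
coef = np.linalg.lstsq(B, lam, rcond=None)[0]
lam_orth = lam - B @ coef

# Sample orthogonality: the residuals are orthogonal to every element of the span.
cross = B.T @ lam_orth / n
print(np.abs(cross).max())   # numerically zero
```

The population version simply replaces sample averages by expectations (and, in the efficient construction, the Euclidean inner product by $\left\langle a,b\right\rangle _{\Sigma}$).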
Also, $\tilde{\lambda}^{\Sigma}(x_{i},\lambda)$ can be interpreted as the solution to$$\min_{\{D(x):D(x)^{\prime}e_{k}\in\bar{\Lambda}_{\gamma},k=1,...,r\}}tr(E[\{\lambda(x_{i})-D(x_{i})\}\Sigma(x_{i})^{-1}\{\lambda(x_{i})-D(x_{i})\}^{\prime}])$$ where the minimization is in the positive semidefinite sense. The orthogonal instruments that minimize the asymptotic variance of GMM in the class of GMM estimators with orthogonal instruments are given by$$\lambda_{0}^{\ast}(x)=\tilde{\lambda}^{\Sigma^{\ast}}(x,\lambda_{\beta})\Sigma^{\ast}(x)^{-1},\lambda_{\beta}(x_{i})=\left. \frac{\partial E[\rho(z_{i},\beta,\gamma_{0})|x_{i}]}{\partial\beta}\right\vert _{\beta=\beta_{0}}^{\prime},\Sigma^{\ast}(x_{i})=Var(\rho_{i}|x_{i}),\rho_{i}=\rho(z_{i},\beta_{0},\gamma_{0}).$$ <span style="font-variant:small-caps;">Theorem 11:</span> *The instruments* $\lambda_{0}^{\ast}(x_{i})$ *give an efficient estimator in the class of IV estimators with orthogonal instruments.* The asymptotic variance of the GMM estimator with optimal orthogonal instruments is $$(E[m_{i}^{\ast}m_{i}^{\ast\prime}])^{-1}=(E[\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})\Sigma^{\ast}(x_{i})^{-1}\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime}])^{-1},$$ where $m_{i}^{\ast}=\lambda_{0}^{\ast}(x_{i})\rho_{i}.$ This matrix coincides with the semiparametric variance bound of Ai and Chen (2003). Estimation of the optimal orthogonal instruments is beyond the scope of this paper. The series estimator of Ai and Chen (2003) could be used for this.
This framework includes moment restrictions with an NPIV first step $\gamma$ satisfying $E[\rho(z_{i},\gamma_{0})|x_{i}]=0$ where we can specify $\rho_{1}(z,\beta,\gamma)=m(z,\beta,\gamma),$ $x_{1i}=1,$ $\rho_{2}(z,\beta,\gamma)=\rho(z,\gamma),$ and $x_{2i}=x_{i}.$ It generalizes that setup by allowing for additional residuals $\rho_{j}(z,\beta,\gamma)$ $(j\geq3)$ and allowing all residuals to depend on $\beta.$

Asymptotic Theory
=================

In this Section we give simple and general asymptotic theory for LR estimators that incorporates the cross-fitting of equation (\[cfit\]). Throughout we use the structure of LR moment functions that are the sum $\psi(z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\phi(z,\beta,\gamma,\lambda)$ of an identifying or original moment function $m(z,\beta,\gamma)$ depending on a first step function $\gamma$ and an influence adjustment term $\phi(z,\beta,\gamma,\lambda)$ that can depend on an additional first step $\lambda.$ The asymptotic theory will apply to any moment function that can be decomposed into a function of a single nonparametric estimator and a function of two nonparametric estimators. This structure and LR lead to particularly simple and general conditions. The conditions we give are composed of mean square consistency conditions for first steps and one, two, or three rate conditions for quadratic remainders.
We will only use one quadratic remainder rate for DR moment conditions, involving faster than $1/\sqrt{n}$ convergence of products of estimation errors for $\hat{\gamma}$ and $\hat{\lambda}.$ When $E[m(z_{i},\beta_{0},\gamma)+\phi(z_{i},\beta_{0},\gamma,\lambda_{0})]$ is not affine in $\gamma$ we will impose a second rate condition that involves faster than $n^{-1/4}$ convergence of $\hat{\gamma}.$ When $E[\phi(z_{i},\gamma_{0},\lambda)]$ is also not affine in $\lambda$ we will impose a third rate condition that involves faster than $n^{-1/4}$ convergence of $\hat{\lambda}.$ Most adjustment terms $\phi(z,\beta,\gamma,\lambda)$ of which we are aware, including those for first step conditional moment restrictions and densities, have $E[\phi(z_{i},\beta_{0},\gamma_{0},\lambda)]$ affine in $\lambda,$ so that faster than $n^{-1/4}$ convergence of $\hat{\lambda}$ will not be required under our conditions. For most LR estimators of which we are aware it will suffice to have faster than $n^{-1/4}$ convergence of $\hat{\gamma}$ and faster than $1/\sqrt{n}$ convergence of the product of estimation errors for $\hat{\gamma}$ and $\hat{\lambda},$ with only the latter condition imposed for DR moment functions. We also impose some additional conditions for convergence of the Jacobian of the moments and sample second moments that give asymptotic normality and consistent asymptotic variance estimation for $\hat{\beta}$. An important intermediate result for asymptotic normality is$$\sqrt{n}\hat{\psi}(\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})+o_{p}(1), \label{no effec}$$ where $\hat{\psi}(\beta)$ is the cross-fit sample LR moment function of equation (\[cfit\]). This result will mean that the presence of the first step estimators has no effect on the limiting distribution of the moments at the true $\beta_{0}$. To formulate conditions for this result we decompose the difference between the left and right-hand sides into several remainders.
Let $\phi(z,\gamma,\lambda)=\phi(z,\beta_{0},\gamma,\lambda),$ $\bar{\phi}(\gamma,\lambda)=E[\phi(z_{i},\gamma,\lambda)],$ and $\bar{m}(\gamma)=E[m(z_{i},\beta_{0},\gamma)],$ so that $\bar{\psi}(\gamma,\lambda)=\bar{m}(\gamma)+\bar{\phi}(\gamma,\lambda).$ Then adding and subtracting terms gives $$\sqrt{n}[\hat{\psi}(\beta_{0})-\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})/n]=\hat{R}_{1}+\hat{R}_{2}+\hat{R}_{3}+\hat{R}_{4}, \label{redecomp}$$ where$$\begin{aligned} \hat{R}_{1} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[m(z_{i},\beta_{0},\hat{\gamma}_{i})-m(z_{i},\beta_{0},\gamma_{0})-\bar{m}(\hat{\gamma}_{i})]\label{remain}\\ & +\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[\phi(z_{i},\hat{\gamma}_{i},\lambda_{0})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\hat{\gamma}_{i},\lambda_{0})+\phi(z_{i},\gamma_{0},\hat{\lambda}_{i})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\gamma_{0},\hat{\lambda}_{i})],\nonumber\\ \hat{R}_{2} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[\phi(z_{i},\hat{\gamma}_{i},\hat{\lambda}_{i})-\phi(z_{i},\hat{\gamma}_{i},\lambda_{0})-\phi(z_{i},\gamma_{0},\hat{\lambda}_{i})+\phi(z_{i},\gamma_{0},\lambda_{0})],\nonumber\\ \hat{R}_{3} & =\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bar{\psi}(\hat{\gamma}_{i},\lambda_{0}),\;\;\;\hat{R}_{4}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bar{\phi}(\gamma_{0},\hat{\lambda}_{i}).\nonumber\end{aligned}$$ We specify regularity conditions sufficient for each of $\hat{R}_{1}$, $\hat{R}_{2}$, $\hat{R}_{3},$ and $\hat{R}_{4}$ to converge in probability to zero so that equation (\[no effec\]) will hold. The remainder term $\hat{R}_{1}$ is a stochastic equicontinuity term as in Andrews (1994). We give mean square consistency conditions for $\hat{R}_{1}\overset{p}{\longrightarrow}0$ in Assumption 4.
The remainder term $\hat{R}_{2}$ is a second order remainder that involves both $\hat{\gamma}$ and $\hat{\lambda}.$ When the influence adjustment is $\phi(z,\gamma,\lambda)=\lambda(x)[y-\gamma(w)],$ as for conditional moment restrictions, then$$\hat{R}_{2}=\frac{-1}{\sqrt{n}}\sum_{i=1}^{n}[\hat{\lambda}_{i}(x_{i})-\lambda_{0}(x_{i})][\hat{\gamma}_{i}(w_{i})-\gamma_{0}(w_{i})].$$ $\hat{R}_{2}$ will converge to zero when the product of convergence rates for $\hat{\lambda}_{i}(x_{i})$ and $\hat{\gamma}_{i}(w_{i})$ is faster than $1/\sqrt{n}.$ However, that is not the weakest possible condition. Weaker conditions for locally linear regression first steps are given by Firpo and Rothe (2017) and for series regression first steps by Newey and Robins (2017). These weaker conditions still require that the product of biases of $\hat{\lambda}_{i}(x_{i})$ and $\hat{\gamma}_{i}(w_{i})$ converge to zero faster than $1/\sqrt{n}$ but have weaker conditions for variance terms. We allow for these weaker conditions by imposing only $\hat{R}_{2}\overset{p}{\longrightarrow}0$ as a regularity condition. Assumption 5 gives these conditions. We will have $\hat{R}_{3}=\hat{R}_{4}=0$ in the DR case of Assumption 1, where $\hat{R}_{1}\overset{p}{\longrightarrow}0$ and $\hat{R}_{2}\overset{p}{\longrightarrow}0$ will suffice for equation (\[no effec\]). In non-DR cases LR leads to $\bar{\psi}(\gamma,\lambda_{0})=\bar{m}(\gamma)+\bar{\phi}(\gamma,\lambda_{0})$ having a zero functional derivative with respect to $\gamma$ at $\gamma_{0},$ so that $\hat{R}_{3}\overset{p}{\longrightarrow}0$ when $\hat{\gamma}_{i}$ converges to $\gamma_{0}$ at a rapid enough, feasible rate. Suppose, for example, that $\bar{\psi}(\gamma,\lambda_{0})$ is twice continuously Frechet differentiable in a neighborhood of $\gamma_{0}$ for a norm $\left\Vert \cdot\right\Vert ,$ with zero Frechet derivative at $\gamma_{0}$.
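In this conditional moment case, computing $\hat{R}_{2}$ from first-step fitted values is a one-line calculation; the following sketch is ours, with the error arrays standing in for $\hat{\lambda}_{i}(x_{i})-\lambda_{0}(x_{i})$ and $\hat{\gamma}_{i}(w_{i})-\gamma_{0}(w_{i})$, which are of course unobservable in practice:

```python
import numpy as np

def r2_conditional_moment(lam_err, gam_err):
    """hat{R}_2 = -(1/sqrt(n)) * sum_i [hat{lambda}_i(x_i) - lambda_0(x_i)]
                                      * [hat{gamma}_i(w_i) - gamma_0(w_i)].
    If the two first steps have root-mean-square rates n^{-a} and n^{-b},
    this term is of order n^{1/2 - a - b}, vanishing when a + b > 1/2."""
    lam_err, gam_err = np.asarray(lam_err), np.asarray(gam_err)
    n = len(lam_err)
    return -np.sum(lam_err * gam_err) / np.sqrt(n)
```

The comment records the rate calculation behind the text: neither first step needs to converge faster than $n^{-1/4}$ on its own, only the product of rates must beat $1/\sqrt{n}$.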
Then$$\left\vert \hat{R}_{3}\right\vert \leq C\sum_{\ell=1}^{L}\sqrt{n}\left\Vert \hat{\gamma}_{\ell}-\gamma_{0}\right\Vert ^{2}\overset{p}{\longrightarrow}0$$ when $\left\Vert \hat{\gamma}_{\ell}-\gamma_{0}\right\Vert =o_{p}(n^{-1/4})$ for each $\ell$. Here $\hat{R}_{3}\overset{p}{\longrightarrow}0$ when each $\hat{\gamma}_{\ell}$ converges to $\gamma_{0}$ more quickly than $n^{-1/4}$. It may be possible to weaken this condition by bias correcting $m(z,\beta,\hat{\gamma}),$ as by the bootstrap in Cattaneo and Jansson (2017), by the jackknife in Cattaneo, Ma, and Jansson (2017), and by cross-fitting in Newey and Robins (2017). Consideration of such bias corrections for $m(z,\beta,\hat{\gamma})$ is beyond the scope of this paper. In many cases $\hat{R}_{4}=0$ even though the moment conditions are not DR. For example, that is true when $\hat{\gamma}$ is a pdf or when $\hat{\gamma}$ estimates the solution to a conditional moment restriction. In such cases mean square consistency, $\hat{R}_{2}\overset{p}{\longrightarrow}0,$ and faster than $n^{-1/4}$ consistency of $\hat{\gamma}$ suffice for equation (\[no effec\]); no convergence rate for $\hat{\lambda}$ is needed. The simplification that $\hat{R}_{4}=0$ seems to be the result of $\lambda$ being a Riesz representer for the linear functional that is the derivative of $\bar{m}(\gamma)$ with respect to $\gamma.$ Such a Riesz representer will enter $\bar{\phi}(\gamma_{0},\lambda)$ linearly, leading to $\hat{R}_{4}=0.$ When $\hat{R}_{4}\neq0$ then $\hat{R}_{4}\overset{p}{\longrightarrow}0$ will follow from twice Frechet differentiability of $\bar{\phi}(\gamma_{0},\lambda)$ in $\lambda$ and faster than $n^{-1/4}$ convergence of $\hat{\lambda}.$ All of the conditions can be easily checked for a wide variety of machine learning and conventional nonparametric estimators. There are well known conditions for mean square consistency for many conventional and machine learning methods.
Rates for products of estimation errors are also known for many first step estimators, as are conditions for $n^{-1/4}$ consistency. Thus, the simple conditions we give here are general enough to apply to a wide variety of first step estimators. The first formal assumption of this section is sufficient for $\hat{R}_{1}\overset{p}{\longrightarrow}0.$ <span style="font-variant:small-caps;">Assumption 4:</span> *For each* $\ell=1,...,L$*, i) Either* $m(z,\beta_{0},\gamma)$ *does not depend on* $z$ *or* $\int\{m(z,\beta_{0},\hat{\gamma}_{\ell})-m(z,\beta_{0},\gamma_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ *ii)* $\int\{\phi(z,\hat{\gamma}_{\ell},\lambda_{0})-\phi(z,\gamma_{0},\lambda_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ *and* $\int\{\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\lambda_{0})\}^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.$ The cross-fitting used in the construction of $\hat{\psi}(\beta_{0})$ is what makes the mean square consistency conditions of Assumption 4 sufficient for $\hat{R}_{1}\overset{p}{\longrightarrow}0$. The next condition is sufficient for $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ <span style="font-variant:small-caps;">Assumption 5:</span> *For each* $\ell=1,...,L$*, either i)*$$\sqrt{n}\int\max_{j}|\phi_{j}(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z,\gamma_{0},\lambda_{0})|F_{0}(dz)\overset{p}{\longrightarrow}0$$ *or ii)* $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ As previously discussed, this condition allows for just $\hat{R}_{2}\overset{p}{\longrightarrow}0$ in order to accommodate the weak regularity conditions of Firpo and Rothe (2017) and Newey and Robins (2017). The first result of this section shows that Assumptions 4 and 5 are sufficient for equation (\[no effec\]) when the moment functions are DR.
<span style="font-variant:small-caps;">Lemma 12:</span> *If Assumption 1 is satisfied, with probability approaching one* $\hat{\gamma}\in\Gamma$*,* $\hat{\lambda}\in\Lambda,$ *and Assumptions 4 and 5 are satisfied then equation (\[no effec\]) is satisfied.* An important class of DR estimators are those from equation (\[drlin\]). The following result gives conditions for asymptotic linearity of these estimators: <span style="font-variant:small-caps;">Theorem 13:</span> *If a) Assumptions 2 and 4 i) are satisfied with* $\hat{\gamma}\in\Gamma$ *and* $\hat{\lambda}\in\Lambda$ *with probability approaching one; b)* $\lambda_{0}(x_{i})$ *and* $E[\{y_{i}-\gamma_{0}(w_{i})\}^{2}|x_{i}]$ *are bounded; c) for each* $\ell=1,...,L$*,* $\int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$ $\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0$*, and either*$$\sqrt{n}\left\{ \int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dw)\right\} ^{1/2}\left\{ \int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dx)\right\} ^{1/2}\overset{p}{\longrightarrow}0$$ *or*$$\frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}\{\hat{\gamma}_{\ell}(w_{i})-\gamma_{0}(w_{i})\}\{\hat{\lambda}_{\ell}(x_{i})-\lambda_{0}(x_{i})\}\overset{p}{\longrightarrow}0;$$ *then*$$\sqrt{n}(\hat{\beta}-\beta_{0})=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}[g(z_{i},\gamma_{0})-\beta_{0}+\lambda_{0}(x_{i})\{y_{i}-\gamma_{0}(w_{i})\}]+o_{p}(1).$$ The conditions of this result are simple, general, and allow for machine learning first steps. Conditions a) and b) simply require mean square consistency of the first step estimators $\hat{\gamma}$ and $\hat{\lambda}.$ The only convergence rate condition is c), which requires the product of estimation errors for the two first steps to go to zero faster than $1/\sqrt{n}$.
This condition allows for a trade-off in convergence rates between the two first steps, and can be satisfied even when one of the two rates is not very fast. This trade-off can be important when $\lambda_{0}(x)$ is not continuous in one of the components of $x$, as in the surplus bound example. Discontinuity in $x$ can limit the rate at which $\lambda_{0}(x)$ can be estimated. This result extends the results of Chernozhukov et al. (2018) and Farrell (2015) for DR estimators of treatment effects to the entire novel class of DR estimators from equation (\[drlin\]) with machine learning first steps. In interesting related work, Athey et al. (2016) show that root-n consistent estimation of an average treatment effect is possible under very weak conditions on the propensity score, provided the regression function is strongly sparse. Thus, for machine learning the conditions here and in Athey et al. (2016) are complementary, and one may prefer either depending on whether or not the regression function can be estimated extremely well based on a sparse method. The results here apply to many more DR moment conditions. DR moment conditions have the special feature that $\hat{R}_{3}$ and $\hat{R}_{4}$ in equation (\[redecomp\]) are equal to zero. For estimators that are not DR we impose that $\hat{R}_{3}$ and $\hat{R}_{4}$ converge to zero.
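A small simulation sketch of the cross-fit DR estimator of Theorem 13 can make the construction concrete. Here $\beta_{0}=E[y]$, $y$ is treated as observed only when $d=1$, $g(z,\gamma)=\gamma(x)$, and the multiplier is $d/\hat{\pi}(x)$ with $\pi_{0}(x)=\Pr(d=1|x)$; the design and the linear first steps are illustrative assumptions of ours, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 4000, 5
x = rng.uniform(size=n)
y = 2.0 * x + 0.3 * rng.standard_normal(n)      # outcome; beta_0 = E[y] = 1
pi0 = 0.4 + 0.4 * x                             # true P(d = 1 | x), bounded below
d = (rng.uniform(size=n) < pi0).astype(float)   # y treated as observed when d = 1

folds = rng.permutation(n) % L
psi = np.empty(n)
for l in range(L):
    I, C = folds == l, folds != l
    # hat{gamma}_l: off-fold linear regression of y on x among observed units
    obs = C & (d == 1.0)
    A = np.column_stack([np.ones(obs.sum()), x[obs]])
    b0, b1 = np.linalg.lstsq(A, y[obs], rcond=None)[0]
    gam = b0 + b1 * x[I]
    # hat{lambda}_l(x_i) = d_i / hat{pi}(x_i), with an off-fold linear model for pi
    B = np.column_stack([np.ones(C.sum()), x[C]])
    c0, c1 = np.linalg.lstsq(B, d[C], rcond=None)[0]
    lam = d[I] / (c0 + c1 * x[I])
    psi[I] = gam + lam * (y[I] - gam)           # g(z, gamma) + lambda * (y - gamma)

beta_hat = psi.mean()                           # cross-fit DR estimate of E[y]
```

Because both first steps are correctly specified here, either one could be replaced by a slowly converging (or even inconsistent) estimator and `beta_hat` would remain consistent, illustrating the trade-off permitted by condition c).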
<span style="font-variant:small-caps;">Assumption 6:</span> *For each* $\ell=1,...,L$*, i)* $\sqrt{n}\bar{\psi}(\hat{\gamma}_{\ell},\lambda_{0})\overset{p}{\longrightarrow}0$ *and ii)* $\sqrt{n}\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell})\overset{p}{\longrightarrow}0.$ Assumption 6 requires that $\hat{\gamma}$ converge to $\gamma_{0}$ rapidly enough but places no restrictions on the convergence rate of $\hat{\lambda}$ when $\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell})=0.$ <span style="font-variant:small-caps;">Lemma 14:</span> *If Assumptions 4-6 are satisfied then equation (\[no effec\]) is satisfied.* Assumptions 4-6 are based on the decomposition of LR moment functions into an identifying part and an influence function adjustment. These conditions differ from previous work in semiparametric estimation, as in Andrews (1994), Newey (1994), Newey and McFadden (1994), Chen, Linton, and van Keilegom (2003), Ichimura and Lee (2010), Escanciano et al. (2016), and Chernozhukov et al. (2018), that is not based on this decomposition. The conditions extend Chernozhukov et al. (2018) to many more DR estimators and to estimators that are nonlinear in $\hat{\gamma}$, while only requiring a convergence rate for $\hat{\gamma}$ and not for $\hat{\lambda}$. This framework helps explain the potential problems with “plugging in” a first step machine learning estimator into a moment function that is not LR. Lemma 14 implies that if Assumptions 4-6 are satisfied for some $\hat{\lambda}$ then $\sqrt{n}\hat{m}(\beta_{0})-\sum_{i=1}^{n}\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})/\sqrt{n}\overset{p}{\longrightarrow}0$ if and only if$$\hat{R}_{5}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma},\hat{\lambda})\overset{p}{\longrightarrow}0. \label{plugin}$$ The plug-in method will fail when this equation does not hold.
For example, suppose $\gamma_{0}=E[y|x]$ so that by Proposition 4 of Newey (1994),$$\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma},\hat{\lambda})=\frac{-1}{\sqrt{n}}\sum_{i=1}^{n}\hat{\lambda}_{i}(x_{i})[y_{i}-\hat{\gamma}_{i}(x_{i})].$$ Here $\hat{R}_{5}\overset{p}{\longrightarrow}0$ is an approximate orthogonality condition between the approximation $\hat{\lambda}_{i}(x_{i})$ to $\lambda_{0}(x_{i})$ and the nonparametric first stage residuals $y_{i}-\hat{\gamma}_{i}(x_{i}).$ Machine learning uses model selection in the construction of $\hat{\gamma}_{i}(x_{i}).$ If the model selected in the construction of $\hat{\gamma}_{i}(x_{i})$ to approximate $\gamma_{0}(x_{i})$ is not rich (or dense) enough to also approximate $\lambda_{0}(x_{i})$ then $\hat{\lambda}_{i}(x_{i})$ need not be approximately orthogonal to $y_{i}-\hat{\gamma}_{i}(x_{i})$ and $\hat{R}_{5}$ need not converge to zero. In particular, if the variables selected to approximate $\gamma_{0}(x_{i})$ cannot also be used to approximate $\lambda_{0}(x_{i})$ then the approximate orthogonality condition can fail. This phenomenon helps explain the poor performance of the plug-in estimator shown in Belloni, Chernozhukov, and Hansen (2014) and Chernozhukov et al. (2017, 2018). The plug-in estimator can be root-n consistent if the only thing being selected is an overall order of approximation, as in the series estimation results of Newey (1994). General conditions for root-n consistency of the plug-in estimator can be formulated using Assumptions 4-6 and $\hat{R}_{2}\overset{p}{\longrightarrow}0,$ which we do in Appendix D. Another component of an asymptotic normality result is convergence of the Jacobian term $\partial\hat{\psi}(\beta)/\partial\beta$ to $M=E[\left. \partial\psi(z_{i},\beta,\gamma_{0},\lambda_{0})/\partial\beta\right\vert _{\beta=\beta_{0}}].$ We impose the following condition for this purpose.
<span style="font-variant:small-caps;">Assumption 7:</span> $M\,$*exists and there is a neighborhood* $\mathcal{N}$ *of* $\beta_{0}$ *and* $\left\Vert \cdot\right\Vert $ *such that i) for each* $\ell,$ $\left\Vert \hat{\gamma}_{\ell}-\gamma_{0}\right\Vert \overset{p}{\longrightarrow}0,$ $\left\Vert \hat{\lambda}_{\ell}-\lambda_{0}\right\Vert \overset{p}{\longrightarrow}0;$ *ii)* for all $\left\Vert \gamma-\gamma_{0}\right\Vert $ and $\left\Vert \lambda-\lambda_{0}\right\Vert $ small enough $\psi(z_{i},\beta,\gamma,\lambda)$ *is differentiable in* $\beta$ *on* $\mathcal{N}$ *with probability approaching* $1$*; iii) there is* $\zeta^{\prime}>0$ *and* $d(z_{i})$ *with* $E[d(z_{i})]<\infty$ *such that for* $\beta\in\mathcal{N}$ *and* $\left\Vert \gamma-\gamma_{0}\right\Vert $ *small enough* $$\left\Vert \frac{\partial\psi(z_{i},\beta,\gamma,\lambda)}{\partial\beta}-\frac{\partial\psi(z_{i},\beta_{0},\gamma,\lambda)}{\partial\beta}\right\Vert \leq d(z_{i})\left\Vert \beta-\beta_{0}\right\Vert ^{\zeta^{\prime}};$$ *iv) For each* $\ell=1,...,L,$ $j,$ and $k$, $\int\left\vert \partial\psi_{j}(z,\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})/\partial\beta_{k}-\partial\psi_{j}(z,\beta_{0},\gamma_{0},\lambda_{0})/\partial\beta_{k}\right\vert F_{0}(dz)\overset{p}{\longrightarrow}0.$ The following intermediate result gives Jacobian convergence. <span style="font-variant:small-caps;">Lemma 15:</span> *If Assumption 7 is satisfied then for any* $\bar{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{\psi}(\beta)$ *is differentiable at* $\bar{\beta}$ *with probability approaching one and* $\partial\hat{\psi}(\bar{\beta})/\partial\beta\overset{p}{\longrightarrow}M.$ With these results in place the asymptotic normality of semiparametric GMM follows in a standard way.
<span style="font-variant:small-caps;">Theorem 16:</span> *If Assumptions 4-7 are satisfied,* $\hat{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{W}\overset{p}{\longrightarrow}W$*,* $M^{\prime}WM$ *is nonsingular, and* $E[\left\Vert \psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\right\Vert ^{2}]<\infty$ *then for* $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{\prime}],$$$\sqrt{n}(\hat{\beta}-\beta_{0})\overset{d}{\longrightarrow}N(0,V),\text{ \ }V=(M^{\prime}WM)^{-1}M^{\prime}W\Omega WM(M^{\prime}WM)^{-1}.$$ It is also useful to have a consistent estimator of the asymptotic variance of $\hat{\beta}$. As usual such an estimator can be constructed as$$\begin{aligned} \hat{V} & =(\hat{M}^{\prime}\hat{W}\hat{M})^{-1}\hat{M}^{\prime}\hat{W}\hat{\Omega}\hat{W}\hat{M}(\hat{M}^{\prime}\hat{W}\hat{M})^{-1},\\ \hat{M} & =\frac{\partial\hat{\psi}(\hat{\beta})}{\partial\beta},\text{ \ }\hat{\Omega}=\frac{1}{n}\sum_{\ell=1}^{L}\sum_{i\in\mathcal{I}_{\ell}}\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})^{\prime}.\end{aligned}$$ Note that this variance estimator ignores the estimation of $\gamma$ and $\lambda$, which works here because the moment conditions are LR.
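The variance estimator above is an ordinary GMM sandwich applied to the cross-fit moments; the following minimal sketch uses our own names, with `psi_hat` holding the $n\times m$ array of stacked $\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})$ values and `M_hat` the Jacobian estimate $\hat{M}$:

```python
import numpy as np

def sandwich_variance(psi_hat, M_hat, W=None):
    """hat{V} = (M'WM)^{-1} M'W hat{Omega} W M (M'WM)^{-1}, where
    hat{Omega} is the sample second moment of the cross-fit moment
    vectors (the rows of psi_hat).  First-step estimation is ignored,
    which is justified when the moment conditions are LR."""
    n, m = psi_hat.shape
    W = np.eye(m) if W is None else W
    Omega = psi_hat.T @ psi_hat / n
    H = np.linalg.inv(M_hat.T @ W @ M_hat)
    return H @ M_hat.T @ W @ Omega @ W @ M_hat @ H.T
```

In the exactly identified case ($m$ equal to the dimension of $\beta$) the weighting matrix drops out and $\hat{V}=\hat{M}^{-1}\hat{\Omega}\hat{M}^{-1\prime}$.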
The following result gives conditions for consistency of $\hat{V}.$ <span style="font-variant:small-caps;">Theorem 17:</span> *If Assumptions 4 and 7 are satisfied with* $E[b(z_{i})^{2}]<\infty,$ $M^{\prime}WM$ *is nonsingular, and* $$\int\left\Vert \phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0$$ *then* $\hat{\Omega}\overset{p}{\longrightarrow}\Omega$ *and* $\hat{V}\overset{p}{\longrightarrow}V.$ In this section we have used cross-fitting and a decomposition of moment conditions into identifying and influence adjustment components to formulate simple and general conditions for asymptotic normality of LR GMM estimators. For reducing higher order bias and variance it may be desirable to let the number of groups grow with the sample size. That case is beyond the scope of this paper. Appendix A: Proofs of Theorems ============================== **Proof of Theorem 1:** By ii) and iii), $$0=(1-\tau)\int\phi(z,F_{\tau})F_{0}(dz)+\tau\int\phi(z,F_{\tau})G(dz).$$ Dividing by $\tau$ and solving gives$$\frac{1}{\tau}\int\phi(z,F_{\tau})F_{0}(dz)=-\int\phi(z,F_{\tau})G(dz)+\int\phi(z,F_{\tau})F_{0}(dz).$$ Taking limits as $\tau\longrightarrow0$, $\tau>0$ and using i) gives$$\frac{d}{d\tau}\int\phi(z,F_{\tau})F_{0}(dz)=-\int\phi(z,F_{0})G(dz)+0=-\frac{d\mu(F_{\tau})}{d\tau}.\text{ }Q.E.D.$$ **Proof of Theorem 2**: We begin by deriving $\phi_{1},$ the adjustment term for the first step CCP estimation. We use the definitions given in the body of the paper.
We also let$$\begin{aligned} P_{\tilde{v}j}(\tilde{v}) & =\partial P(\tilde{v})/\partial\tilde{v}_{j},\text{ }\pi_{1}=\Pr(y_{t1}=1),\text{ }\lambda_{10}(x)=E[y_{1t}|x_{t+1}=x],\\ \lambda_{j0}(x) & =E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}=x],(j=2,...,J).\end{aligned}$$ Consider a parametric submodel as described in Section 4 and let $\gamma _{1}(x,\tau)$ denote the conditional expectation of $y_{t}$ given $x_{t}$ under the parametric submodel. Note that for $\tilde{v}_{t}=\tilde{v}(x_{t}),$$$\begin{aligned} & E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})\frac{\partial E[H(\gamma _{1}(x_{t+1},\tau))|x_{t},y_{tj}=1]}{\partial\tau}]\\ & =\frac{\partial}{\partial\tau}E[A(x_{t})P_{vj}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}H(\gamma_{1}(x_{t+1},\tau))]\\ & =\frac{\partial}{\partial\tau}E[E[A(x_{t})P_{vj}(\tilde{v}_{t})\frac {y_{tj}}{P_{j}(\tilde{v}_{t})}|x_{t+1}]H(\gamma_{1}(x_{t+1},\tau))]\\ & =\frac{\partial}{\partial\tau}E[\lambda_{j0}(x_{t+1})H(\gamma_{1}(x_{t+1},\tau))]=\frac{\partial}{\partial\tau}E[\lambda_{j0}(x_{t})H(\gamma_{1}(x_{t},\tau))]\\ & =E[\lambda_{j0}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\frac{\partial\gamma_{1}(x_{t},\tau)}{\partial\tau}]=E[\lambda _{j0}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})].\end{aligned}$$ where the last (sixth) equality follows as in Proposition 4 of Newey (1994a), and the fourth equality follows by equality of the marginal distributions of $x_{t}$ and $x_{t+1}$. 
Similarly, for $\pi_{1}=\Pr(y_{t1}=1)$ and $\lambda_{10}(x)=E[y_{1t}|x_{t+1}=x]$ we have$$\begin{aligned} \frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|y_{t1}=1]}{\partial\tau} & =\frac{\partial E[\pi_{1}^{-1}y_{1t}H(\gamma_{1}(x_{t+1},\tau))]}{\partial \tau}=\frac{\partial E[\pi_{1}^{-1}\lambda_{10}(x_{t+1})H(\gamma_{1}(x_{t+1},\tau))]}{\partial\tau}\\ & =\frac{\partial E[\pi_{1}^{-1}\lambda_{10}(x_{t})H(\gamma_{1}(x_{t},\tau))]}{\partial\tau}\\ & =E[\pi_{1}^{-1}\lambda_{10}(x_{t})\frac{\partial H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})]\end{aligned}$$ Then combining terms gives$$\begin{aligned} & \frac{\partial E[m(z_{t},\beta_{0},\gamma_{1}(\tau),\gamma_{-10})]}{\partial\tau}\\ & =-\delta\sum_{j=2}^{J}\{E[A(x_{t})P_{vj}(\tilde{v}_{t})\frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|x_{t},y_{tj}=1]}{\partial\tau}]\\ & -E[A(x_{t})P_{vj}(\tilde{v}_{t})]\frac{\partial E[H(\gamma_{1}(x_{t+1},\tau))|y_{t1}=1]}{\partial\tau}\}\\ & =-\delta\sum_{j=2}^{J}E[\{\lambda_{j0}(x_{t})-E[A(x_{t})P_{\tilde{v}j}(\tilde{v}_{t})]\pi_{1}^{-1}\lambda_{10}(x_{t})\}\frac{\partial H(\gamma_{10}(x_{t}))}{\partial P}^{\prime}\{y_{t}-\gamma_{10}(x_{t})\}S(z_{t})]\\ & =E[\phi_{1}(z_{t},\beta_{0},\gamma_{0},\lambda_{0})S(z_{t})].\end{aligned}$$ Next, we show the result for $\phi_{j}(z,\beta,\gamma,\lambda)$ for $2\leq j\leq J.$ As in the proof of Proposition 4 of Newey (1994a), for any $w_{t}$ we have$$\frac{\partial}{\partial\tau}E[w_{t}|x_{t},y_{tj}=1,\tau]=E[\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}\{w_{t}-E[w_{t}|x_{t},y_{tj}=1]\}S(z_{t})|x_{t}].$$ It follows that$$\begin{aligned} \frac{\partial E[m(z_{t},\beta_{0},\gamma_{j}(\tau),\gamma_{-j,0})]}{\partial\tau} & =-\delta E[A(x_{t})P_{vj}(\tilde{v}_{t})\frac{\partial E[u_{1,t+1}+H_{t+1}|x_{t},y_{tj}=1,\tau]}{\partial\tau}]\\ & =-\delta\frac{\partial}{\partial\tau}E[E[A(x_{t})P_{vj}(\tilde{v}_{t})\{u_{1,t+1}+H_{t+1}\}|x_{t},y_{tj}=1,\tau]].\\ & =-\delta 
E[A(x_{t})P_{vj}(\tilde{v}_{t})\frac{y_{tj}}{P_{j}(\tilde{v}_{t})}\{u_{1,t+1}+H_{t+1}-\gamma_{j0}(x_{t},\beta_{0},\gamma_{1})\}S(z_{t})]\\ & =E[\phi_{j}(z_{t},\beta_{0},\gamma_{0},\lambda_{0})S(z_{t})],\end{aligned}$$ showing that the formula for $\phi_{j}$ is correct. The proof for $\phi_{J+1}$ follows similarly. *Q.E.D.* **Proof of Theorem 3:** Given in text. **Proof of Theorem 4:** Given in text. **Proof of Theorem 5:** Let $\bar{\psi}(\gamma,\lambda)=E[\psi(z_{i},\beta_{0},\gamma,\lambda)]$. Suppose that $\psi(z,\beta,\gamma,\lambda)$ is DR. Then for any $\gamma\neq\gamma_{0},\gamma\in\Gamma$ we have$$0=\bar{\psi}(\gamma,\lambda_{0})=\bar{\psi}(\gamma_{0},\lambda_{0})=\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0}),$$ for any $\tau.$ Therefore for any $\tau$,$$\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=0=(1-\tau)\bar{\psi}(\gamma_{0},\lambda_{0})+\tau\bar{\psi}(\gamma,\lambda_{0}),$$ so that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma.$ Also by the previous equation $\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=0$ identically in $\tau$ so that $$\frac{\partial}{\partial\tau}\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=0,$$ where the derivative with respect to $\tau$ is evaluated at $\tau=0.$ Applying the same argument with the roles of $\lambda$ and $\gamma$ switched, we find that $\bar{\psi}(\gamma_{0},\lambda)$ is affine in $\lambda$ and $\partial\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)/\partial\tau=0.$ Next suppose that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma$ and $\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})/\partial\tau=0.$ Then by $\bar{\psi}(\gamma_{0},\lambda_{0})=0$, for any $\gamma\in\Gamma,$ $$\begin{aligned} \bar{\psi}(\gamma,\lambda_{0}) & =\partial\lbrack\tau\bar{\psi}(\gamma,\lambda_{0})]/\partial\tau=\partial\lbrack(1-\tau)\bar{\psi}(\gamma_{0},\lambda_{0})+\tau\bar{\psi}(\gamma,\lambda_{0})]/\partial\tau\\ &
=\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})/\partial\tau=0.\end{aligned}$$ Switching the roles of $\gamma$ and $\lambda$ it follows analogously that $\bar{\psi}(\gamma_{0},\lambda)=0$ for all $\lambda\in\Lambda,$ so $\bar{\psi}(\gamma,\lambda)$ is doubly robust. *Q.E.D.* **Proof of Theorem 6:** Let $\lambda_{0}(x)=-c^{\prime}\Pi^{-1}a(x)$ so that $E[\lambda_{0}(x_{i})|w_{i}]=-c^{\prime}\Pi^{-1}\Pi p(w_{i})=-c^{\prime}p(w_{i}).$ Then integration by parts gives$$\begin{aligned} E[m(z_{i},\beta_{0},\tilde{\gamma})] & =E[c^{\prime}p(w_{i})\{\tilde{\gamma}(w_{i})-\gamma_{0}(w_{i})\}]=-E[\lambda_{0}(x_{i})\{\tilde{\gamma}(w_{i})-\gamma_{0}(w_{i})\}]\\ & =E[\lambda_{0}(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=-c^{\prime}\Pi^{-1}E[a(x_{i})\{y_{i}-\tilde{\gamma}(w_{i})\}]=0.\text{ }Q.E.D.\end{aligned}$$ **Proof of Theorem 7:** If $\lambda_{0}$ is identified then $m(z,\beta,\bar{\gamma},\lambda_{0})$ is identified for every $\beta$. By DR$$E[m(z_{i},\beta,\bar{\gamma},\lambda_{0})]=0$$ at $\beta=\beta_{0}$ and by assumption this is the only $\beta$ where this equation is satisfied. *Q.E.D.* **Proof of Corollary 8:** Given in text. **Proof of Theorem 9:** Note that for $\rho_{i}=\rho(z_{i},\beta_{0},\gamma_{0}),$$$\bar{\psi}(\gamma_{0},(1-\tau)\lambda_{0}+\tau\lambda)=(1-\tau)E[\lambda_{0}(x_{i})\rho_{i}]+\tau E[\lambda(x_{i})\rho_{i}]=0. \label{th9proof}$$ Differentiating gives the second equality in eq. (\[lrdef2\]). Also, for $\Delta=\gamma-\gamma_{0},$$$\frac{\partial\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})}{\partial\tau}=E[\lambda_{0}(x_{i})\bar{\rho}(x_{i},\Delta)]=0,$$ giving the first equality in eq. (\[lrdef2\]). *Q.E.D.* **Proof of Theorem 10:** The first equality in eq. (\[th9proof\]) of the proof of Theorem 9 shows that $\bar{\psi}(\gamma_{0},\lambda)$ is affine in $\lambda$.
Also,$$\bar{\psi}((1-\tau)\gamma_{0}+\tau\gamma,\lambda_{0})=E[\lambda_{0}(x_{i})\{(1-\tau)\rho(z_{i},\beta_{0},\gamma_{0})+\tau\rho(z_{i},\beta _{0},\gamma)\}]=(1-\tau)\bar{\psi}(\gamma_{0},\lambda_{0})+\tau\bar{\psi }(\gamma,\lambda_{0}),$$ so that $\bar{\psi}(\gamma,\lambda_{0})$ is affine in $\gamma.$ The conclusion then follows by Theorem 5. *Q.E.D.* **Proof of Theorem 11:** To see that $\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda^{\ast})\Sigma^{\ast}(x_{i})^{-1}$ minimizes the asymptotic variance note that for any orthogonal instrumental variable matrix $\lambda_{0}(x),$ by the rows of $\lambda_{\beta}(x_{i})-\tilde{\lambda }^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})$ being in $\bar{\Lambda}_{\gamma},$ $$M=E[\lambda_{0}(x_{i})\lambda_{\beta}(x_{i})^{\prime}]=E[\lambda_{0}(x_{i})\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime }]=E[\lambda_{0}(x_{i})\rho_{i}\rho_{i}^{\prime}\Sigma^{\ast}(x_{i})^{-1}\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta})^{\prime}].$$ Since the instruments are orthogonal the asymptotic variance matrix of the GMM estimator with $\hat{W}\overset{p}{\longrightarrow}W$ is the same as if $\hat{\gamma}=\gamma_{0}.$ Define $m_{i}=M^{\prime}W\lambda_{0}(x_{i})\rho _{i}$ and $m_{i}^{\ast}=\tilde{\lambda}^{\Sigma^{\ast}}(x_{i},\lambda_{\beta })\Sigma^{\ast}(x_{i})^{-1}\rho_{i}.$ The asymptotic variance of the GMM estimator for orthogonal instruments $\lambda_{0}(x)$ is$$(M^{\prime}WM)^{-1}M^{\prime}WE[\lambda_{0}(x_{i})\rho_{i}\rho_{i}^{\prime }\lambda_{0}(x_{i})^{\prime}]WM(M^{\prime}WM)^{-1}=(E[m_{i}m_{i}^{\ast\prime }])^{-1}E[m_{i}m_{i}^{\prime}](E[m_{i}m_{i}^{\ast}])^{-1\prime}.$$ The fact that this matrix is minimized in the positive semidefinite sense for $m_{i}=m_{i}^{\ast}$ is well known, e.g. see Newey and McFadden (1994). 
*Q.E.D.* The following result is useful for the results of Section 7: <span style="font-variant:small-caps;">Lemma A1:</span> *If Assumption 4 is satisfied then* $\hat{R}_{1}\overset{p}{\longrightarrow}0.$ *If Assumption 5 is satisfied then* $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ Proof: Define $\hat{\Delta}_{i\ell}=m(z_{i},\hat{\gamma}_{\ell})-m(z_{i},\gamma_{0})-\bar{m}(\hat{\gamma}_{\ell})$ for $i\in I_{\ell}$ and let $Z_{\ell}^{c}$ denote the observations $z_{i}$ for $i\notin I_{\ell}$. Note that $\hat{\gamma}_{\ell}$ depends only on $Z_{\ell}^{c}$. By construction and independence of $Z_{\ell}^{c}$ and $z_{i},i\in I_{\ell}$ we have $E[\hat{\Delta}_{i\ell}|Z_{\ell}^{c}]=0.$ Also by independence of the observations, $E[\hat{\Delta}_{i\ell}\hat{\Delta}_{j\ell}|Z_{\ell}^{c}]=0$ for $i,j\in I_{\ell}.$ Furthermore, for $i\in I_{\ell}$ $E[\hat{\Delta}_{i\ell }^{2}|Z_{\ell}^{c}]\leq\int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)$. Then we have $$\begin{aligned} E[\left( \frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}\right) ^{2}|Z_{\ell}^{c}] & =\frac{1}{n}E[\left( \sum_{i\in I_{\ell}}\hat{\Delta }_{i\ell}\right) ^{2}|Z_{\ell}^{c}]=\frac{1}{n}\sum_{i\in I_{\ell}}E[\hat{\Delta}_{i\ell}^{2}|Z_{\ell}^{c}]\\ & \leq\int[m(z,\hat{\gamma}_{\ell})-m(z,\gamma_{0})]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ The conditional Markov inequality then implies that $\sum_{i\in I_{\ell}}\hat{\Delta}_{i\ell}/\sqrt{n}\overset{p}{\longrightarrow}0.$ The analogous results also hold for $\hat{\Delta}_{i\ell}=\phi(z_{i},\hat{\gamma}_{\ell },\lambda_{0})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\hat{\gamma }_{\ell},\lambda_{0})$ and $\hat{\Delta}_{i\ell}=\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi(z_{i},\gamma_{0},\lambda_{0})-\bar{\phi}(\gamma_{0},\hat{\lambda}_{\ell})$. Summing across these three terms and across $\ell=1,...,L$ gives the first conclusion. 
For the second conclusion, note that under the first hypothesis of Assumption 5,$$\begin{aligned} & E[\left\vert \frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}[\phi_{j}(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z_{i},\gamma_{0},\lambda_{0})]\right\vert |Z_{\ell}^{c}]\\ & \leq\frac{1}{\sqrt{n}}\sum_{i\in I_{\ell}}E[\left\vert \phi_{j}(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z_{i},\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z_{i},\gamma_{0},\lambda_{0})\right\vert |Z_{\ell}^{c}]\\ & \leq\sqrt{n}\int\left\vert \phi_{j}(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi_{j}(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi_{j}(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi_{j}(z,\gamma_{0},\lambda_{0})\right\vert F_{0}(dz)\overset{p}{\longrightarrow}0,\end{aligned}$$ so $\hat{R}_{2}\overset{p}{\longrightarrow}0$ follows by the conditional Markov and triangle inequalities. The second hypothesis of Assumption 5 is just $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ $Q.E.D.$ **Proof of Lemma 12**: By Assumption 1 and the hypotheses that $\hat{\gamma}_{i}\in\Gamma$ and $\hat{\lambda}_{i}\in\Lambda$ we have $\hat{R}_{3}=\hat{R}_{4}=0.$ By Lemma A1 we have $\hat{R}_{1}\overset{p}{\longrightarrow}0$ and $\hat{R}_{2}\overset{p}{\longrightarrow}0.$ The conclusion then follows by the triangle inequality.
$Q.E.D.$ **Proof of Theorem 13:** Note that for $\varepsilon=y-\gamma_{0}(w)$ $$\begin{aligned} \phi(z,\hat{\gamma},\lambda_{0})-\phi(z,\gamma_{0},\lambda_{0}) & =\lambda_{0}(x)[\hat{\gamma}(w)-\gamma_{0}(w)],\\ \phi(z,\gamma_{0},\hat{\lambda})-\phi(z,\gamma_{0},\lambda_{0}) & =[\hat{\lambda}(x)-\lambda_{0}(x)]\varepsilon,\\ \phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0}) & =-[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)][\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)].\end{aligned}$$ The first part of Assumption 4 ii) then follows by$$\begin{aligned} \int[\phi(z,\hat{\gamma}_{\ell},\lambda_{0})-\phi(z,\gamma_{0},\lambda_{0})]^{2}F_{0}(dz) & =\int\lambda_{0}(x)^{2}[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dz)\\ & \leq C\int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ The second part of Assumption 4 ii) follows by$$\begin{aligned} \int[\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\lambda_{0})]^{2}F_{0}(dz) & =\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}\varepsilon^{2}F_{0}(dz)\\ & =\int\left[ \hat{\lambda}_{\ell}(x)-\lambda_{0}(x)\right] ^{2}E[\varepsilon^{2}|x]F_{0}(dz)\\ & \leq C\int\left[ \hat{\lambda}_{\ell}(x)-\lambda_{0}(x)\right] ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.\end{aligned}$$ Next, note that by the Cauchy-Schwarz inequality, $$\begin{aligned} & \sqrt{n}\int|\phi(z,\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell},\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0})|F_{0}(dz)\\ & =\sqrt{n}\int\left\vert [\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)][\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]\right\vert F_{0}(dz)\\ & \leq\sqrt{n}\{\int[\hat{\lambda}_{\ell}(x)-\lambda_{0}(x)]^{2}F_{0}(dx)\}^{1/2}\{\int[\hat{\gamma}_{\ell}(w)-\gamma_{0}(w)]^{2}F_{0}(dw)\}^{1/2}.\end{aligned}$$ Then the first rate condition of Assumption 5 holds under the first rate condition of
Theorem 13 while the second condition of Assumption 5 holds under the last hypothesis of Theorem 13. Then eq. (\[no effec\]) holds by Lemma 12, and the conclusion by rearranging the terms in eq. (\[no effec\]). *Q.E.D.* **Proof of Lemma 14:** Follows by Lemma A1 and the triangle inequality. *Q.E.D.* **Proof of Lemma 15:** Let $\hat{M}(\beta)=\partial\hat{\psi}(\beta)/\partial\beta$ when it exists, $\tilde{M}_{\ell}=n^{-1}\sum_{i\in I_{\ell}}\partial\psi(z_{i},\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell })/\partial\beta,$ and $\bar{M}_{\ell}=n^{-1}\sum_{i\in I_{\ell}}\partial \psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})/\partial\beta.$ By the law of large numbers, and Assumption 5 iii), $\sum_{\ell=1}^{L}\bar{M}_{\ell }\overset{p}{\longrightarrow}M.$ Also, by condition iii) for each $j$ and $k,$ $$E[|\tilde{M}_{\ell jk}-\bar{M}_{\ell jk}||Z^{\ell}]\leq\int\left\vert \partial\psi_{j}(z,\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell })/\partial\beta_{k}-\partial\psi_{j}(z,\beta_{0},\gamma_{0},\lambda _{0})/\partial\beta_{k}\right\vert F_{0}(dz)\overset{p}{\longrightarrow}0.$$ Then by the conditional Markov inequality, for each $\ell,$ $$\tilde{M}_{\ell}-\bar{M}_{\ell}\overset{p}{\longrightarrow}0.$$ It follows by the triangle inequality that $\sum_{\ell=1}^{L}\tilde{M}_{\ell }\overset{p}{\longrightarrow}M.$ Also, with probability approaching one we have for any $\bar{\beta}\overset{p}{\longrightarrow}\beta_{0}$$$\left\Vert \hat{M}(\bar{\beta})-\sum_{\ell=1}^{L}\tilde{M}_{\ell}\right\Vert \leq\left( \frac{1}{n}\sum_{i=1}^{n}d(z_{i})\right) \left\Vert \bar{\beta }-\beta_{0}\right\Vert ^{\zeta^{\prime}}=O_{p}(1)o_{p}(1)\overset{p}{\longrightarrow}0.$$ The conclusion then follows by the triangle inequality. *Q.E.D.* **Proof of Theorem 16:** The conclusion follows in a standard manner from the conclusions of Lemmas 14 and 15. 
*Q.E.D.* **Proof of Theorem 17:** Let $\hat{\psi}_{i}=\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})$ and $\psi_{i}=\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0}).$ By standard arguments (e.g. Newey, 1994), it suffices to show that $\sum_{i=1}^{n}\left\Vert \hat{\psi}_{i}-\psi_{i}\right\Vert ^{2}/n\overset{p}{\longrightarrow}0.$ Note that$$\begin{aligned} \hat{\psi}_{i}-\psi_{i} & =\sum_{j=1}^{5}\hat{\Delta}_{ji},\quad\hat{\Delta}_{1i}=\psi(z_{i},\hat{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\psi(z_{i},\beta_{0},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell}),\quad\hat{\Delta}_{2i}=m(z_{i},\beta_{0},\hat{\gamma}_{\ell})-m(z_{i},\beta_{0},\gamma_{0}),\\ \hat{\Delta}_{3i} & =\phi(z_{i},\hat{\gamma}_{\ell},\lambda_{0})-\phi(z_{i},\gamma_{0},\lambda_{0}),\quad\hat{\Delta}_{4i}=\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})-\phi(z_{i},\gamma_{0},\lambda_{0}),\\ \hat{\Delta}_{5i} & =\phi(z_{i},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})-\phi(z_{i},\hat{\gamma}_{\ell},\lambda_{0})-\phi(z_{i},\gamma_{0},\hat{\lambda}_{\ell})+\phi(z_{i},\gamma_{0},\lambda_{0}).\end{aligned}$$ By standard arguments it suffices to show that for each $j$ and $\ell,$ $$\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{ji}\right\Vert ^{2}\overset{p}{\longrightarrow}0. \label{var conv}$$ For $j=1$ it follows by a mean value expansion and Assumption 7 with $E[b(z_{i})^{2}]<\infty$ that$$\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{1i}\right\Vert ^{2}=\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \frac{\partial}{\partial\beta }\psi(z_{i},\bar{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell})(\hat{\beta}-\beta_{0})\right\Vert ^{2}\leq\frac{1}{n}\left( \sum_{i\in I_{\ell}}b(z_{i})^{2}\right) \left\Vert \hat{\beta}-\beta_{0}\right\Vert ^{2}\overset{p}{\longrightarrow}0,$$ where $\bar{\beta}$ is a mean value that actually differs from row to row of $\partial\psi(z_{i},\bar{\beta},\hat{\gamma}_{\ell},\hat{\lambda}_{\ell })/\partial\beta$.
For $j=2$ note that by Assumption 4,$$E[\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{2i}\right\Vert ^{2}|Z^{\ell}]\leq\int\left\Vert m(z,\beta_{0},\hat{\gamma}_{\ell})-m(z,\beta_{0},\gamma_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0,$$ so eq. (\[var conv\]) holds by the conditional Markov inequality. For $j=3$ and $j=4$ eq. (\[var conv\]) follows similarly. For $j=5$, it follows from the hypotheses of Theorem 17 that$$E[\frac{1}{n}\sum_{i\in I_{\ell}}\left\Vert \hat{\Delta}_{5i}\right\Vert ^{2}|Z^{\ell}]\leq\int\left\Vert \phi(z,\hat{\gamma}_{\ell},\hat{\lambda }_{\ell})-\phi(z,\gamma_{0},\hat{\lambda}_{\ell})-\phi(z,\hat{\gamma}_{\ell },\lambda_{0})+\phi(z,\gamma_{0},\lambda_{0})\right\Vert ^{2}F_{0}(dz)\overset{p}{\longrightarrow}0.$$ Then eq. (\[var conv\]) holds for $j=5$ by the conditional Markov inequality. *Q.E.D.*

Appendix B: Local Robustness and Derivatives of Expected Moments
================================================================

In this Appendix we give conditions sufficient for the LR property of equation (\[lrdef\]) to imply the properties in equations (\[lrdef2\]) and (\[nlremainder\]). As discussed following equation (\[nlremainder\]), it may be convenient when specifying regularity conditions for specific moment functions to work directly with (\[lrdef2\]) and/or (\[nlremainder\]).
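As a concrete illustration of the LR property that this Appendix analyzes, the sketch below uses a made-up discrete model with the conditional-mean moment $\psi(z,\beta,\gamma,\lambda)=\gamma(w)-\beta+\lambda(x)[y-\gamma(w)]$, $x=w$ and $\lambda_{0}(x)=1$ (these specifics are illustrative assumptions, not the paper's notation). Along a contaminated path $F_{\tau}=(1-\tau)F_{0}+\tau G$ the expected plug-in moment is first-order sensitive to the induced change in $\gamma(F_{\tau})$, while the $\tau$-derivative of the expected LR moment is zero:

```python
# Toy model: w in {0,1} equally likely, y = gamma0(w) + e with e in {-1,1}
# equally likely, so E0[y|w=0] = 1, E0[y|w=1] = 3, and beta0 = E0[y] = 2.
# G is a point mass at (w=1, y=5); all numbers are made up for illustration.

def gamma_tau(tau):
    # gamma(F_tau)(w) = E_{F_tau}[y|w] along F_tau = (1 - tau) F0 + tau G.
    num = (1 - tau) * 0.5 * 3.0 + tau * 5.0
    den = (1 - tau) * 0.5 + tau
    return {0: 1.0, 1: num / den}  # the w = 0 cell is not contaminated

def plug_in(tau):
    # E_{F0}[m(z, beta0, gamma(F_tau))] = E0[gamma_tau(w)] - beta0
    g = gamma_tau(tau)
    return 0.5 * g[0] + 0.5 * g[1] - 2.0

def lr(tau):
    # E_{F0}[psi(z, beta0, gamma(F_tau), lambda0)] with lambda0(x) = 1:
    # the adjustment term E0[y - gamma_tau(w)] offsets the plug-in bias.
    g = gamma_tau(tau)
    return plug_in(tau) + (2.0 - (0.5 * g[0] + 0.5 * g[1]))

t = 1e-4
assert abs((plug_in(t) - plug_in(0.0)) / t) > 0.1  # first-order sensitivity
assert abs((lr(t) - lr(0.0)) / t) < 1e-8           # LR: zero tau-derivative
```

In this example the LR moment is in fact exactly zero along the whole path, which is stronger than the zero-derivative property in equation (\[lrdef\]); the zero derivative is all that local robustness requires.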
<span style="font-variant:small-caps;">Assumption B1:</span> *There are linear sets* $\Gamma$ *and* $\Lambda$ *and a set* $\mathcal{G}$ *such that i)* $\bar{\psi}(\gamma,\lambda)$ *is Frechet differentiable at* $(\gamma_{0},\lambda_{0});$ *ii) for all* $G\in\mathcal{G}$ *the vector* $(\gamma(F_{\tau}),\lambda(F_{\tau}))$ *is Frechet differentiable at* $\tau=0;$ *iii) the closure of* $\{\partial(\gamma(F_{\tau}),\lambda(F_{\tau}))/\partial\tau:G\in\mathcal{G}\}$ *is* $\Gamma\times\Lambda$*.* <span style="font-variant:small-caps;">Theorem B1:</span> *If Assumption B1 is satisfied and equation (\[lrdef\]) is satisfied for all* $G\in\mathcal{G}$ *then equation (\[lrdef2\]) is satisfied.* Proof: Let $\bar{\psi}^{\prime}(\gamma,\lambda)$ denote the Frechet derivative of $\bar{\psi}(\gamma,\lambda)$ at $(\gamma_{0},\lambda_{0})$ in the direction $(\gamma,\lambda),$ which exists by i). By ii), the chain rule for Frechet derivatives (e.g. Proposition 7.3.1 of Luenberger, 1969), and by eq. *(\[lrdef\])* it follows that for $(\Delta_{\gamma}^{G},\Delta_{\lambda}^{G})=\partial(\gamma(F_{\tau}),\lambda(F_{\tau}))/\partial\tau,$$$\bar{\psi}^{\prime}(\Delta_{\gamma}^{G},\Delta_{\lambda}^{G})=\frac{\partial\bar{\psi}(\gamma(F_{\tau}),\lambda(F_{\tau}))}{\partial\tau}=0.$$ By $\bar{\psi}^{\prime}(\gamma,\lambda)$ being a continuous linear function and iii) it follows that $\bar{\psi}^{\prime}(\gamma,\lambda)=0$ for all $(\gamma,\lambda)\in\Gamma\times\Lambda.$ Therefore, for any $\gamma\in\Gamma$ and $\lambda\in\Lambda,$$$\bar{\psi}^{\prime}(\gamma-\gamma_{0},0)=0,\quad\bar{\psi}^{\prime}(0,\lambda-\lambda_{0})=0.$$ Equation *(\[lrdef2\])* then follows by i).
*Q.E.D.* <span style="font-variant:small-caps;">Theorem B2:</span> *If equation (\[lrdef2\]) is satisfied and in addition* $\bar{\psi}(\gamma,\lambda_{0})$ *and* $\bar{\psi}(\gamma_{0},\lambda)$ *are twice Frechet differentiable in open sets containing* $\gamma_{0}$ *and* $\lambda_{0}$ *respectively with bounded second derivative then equation* (\[nlremainder\]) *is satisfied.* Proof: Follows by Proposition 7.3.3 of Luenberger (1969). *Q.E.D.*

Appendix C: Doubly Robust Moment Functions for Orthogonality Conditions
=======================================================================

In this Appendix we generalize the DR estimators for conditional moment restrictions to orthogonality conditions for a general residual $\rho(z,\gamma)$ that is affine in $\gamma$ but need not have the form $y-\gamma(w).$ <span style="font-variant:small-caps;">Assumption C1:</span> *There are linear sets* $\Gamma$ *and* $\Lambda$ *of functions* $\gamma(w)$ *and* $\lambda(x)$ *respectively that are closed in mean square such that i) for any* $\gamma,\tilde{\gamma}\in\Gamma$ *and scalar* $\tau,$ $E[\rho(z_{i},\gamma)^{2}]<\infty$ and $\rho(z,(1-\tau)\gamma+\tau\tilde{\gamma})=(1-\tau)\rho(z,\gamma)+\tau\rho(z,\tilde{\gamma});$ *ii)* $E[\lambda(x_{i})\rho(z_{i},\gamma_{0})]=0$ for all $\lambda\in\Lambda;$ *iii) there exists* $\lambda_{0}\in\Lambda$ *such that* $E[m(z_{i},\beta_{0},\gamma)]=-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma)]$ *for all* $\gamma\in\Gamma.$ Assumption C1 ii) could be thought of as an identification condition for $\gamma_{0}$. For example, if $\Lambda$ is all functions of $x_{i}$ with finite mean square then ii) is $E[\rho(z_{i},\gamma_{0})|x_{i}]=0,$ the nonparametric conditional moment restriction of Newey and Powell (2003) and Newey (1991). Assumption C1 iii) also has an interesting interpretation.
Let $\Pi(a)(x_{i})$ denote the orthogonal mean-square projection of a random variable $a(z_{i})$ with finite second moment on $\Lambda.$ Then by ii) and iii) we have$$\begin{aligned} E[m(z_{i},\beta_{0},\gamma)] & =-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma)]=-E[\lambda_{0}(x_{i})\Pi(\rho(\gamma))(x_{i})]\\ & =-E[\lambda_{0}(x_{i})\{\Pi(\rho(\gamma))(x_{i})-\Pi(\rho(\gamma_{0}))(x_{i})\}]\\ & =-E[\lambda_{0}(x_{i})\{\Pi(\rho(\gamma)-\rho(\gamma_{0}))(x_{i})\}].\end{aligned}$$ Here we see that $E[m(z_{i},\beta_{0},\gamma)]$ is a linear, mean-square continuous function of $\Pi(\rho(\gamma)-\rho(\gamma_{0}))(x_{i}).$ The Riesz representation theorem will also imply that if $E[m(z_{i},\beta_{0},\gamma)]$ is a linear, mean-square continuous function of $\Pi(\rho(\gamma)-\rho(\gamma_{0}))(x_{i})$ then $\lambda_{0}(x)$ exists satisfying Assumption C1 iii). For the case where $w_{i}=x_{i}$ this mean-square continuity condition is necessary for existence of a root-n consistent estimator, as in Newey (1994) and Newey and McFadden (1994). We conjecture that when $w_{i}$ need not equal $x_{i}$ this condition generalizes Severini and Tripathi’s (2012) necessary condition for existence of a root-n consistent estimator of $\beta_{0}$. Noting that Assumption C1 ii) and iii) are the conditions for double robustness we have <span style="font-variant:small-caps;">Theorem C1:</span> *If Assumption C1 is satisfied then* $\psi (z,\beta,\gamma,\lambda)=m(z,\beta,\gamma)+\lambda(x)\rho(z,\gamma)$ *is doubly robust.* It is interesting to note that $\lambda_{0}(x)$ satisfying Assumption C1 iii) need not be unique.
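The double robustness asserted by Theorem C1 can be checked numerically in the simplest special case $\rho(z,\gamma)=y-\gamma(w)$, $m(z,\beta,\gamma)=\gamma(w)-\beta$ (so $\beta_{0}=E[y]$) and $x=w$; the discrete model and the misspecified nuisances below are made up purely for illustration:

```python
# Discrete toy model: w in {0,1} with probability 1/2 each, e in {-1,1}
# equally likely, y = gamma0(w) + e, and x = w.  beta0 = E[y].
gamma0 = {0: 1.0, 1: 3.0}
support = [(w, e) for w in (0, 1) for e in (-1.0, 1.0)]  # prob 1/4 each
beta0 = sum(0.25 * (gamma0[w] + e) for w, e in support)   # = 2.0

def expected_psi(beta, gamma, lam):
    # E[ m(z,beta,gamma) + lambda(x) * rho(z,gamma) ] with
    # m(z,beta,gamma) = gamma(w) - beta and rho(z,gamma) = y - gamma(w).
    total = 0.0
    for w, e in support:
        y = gamma0[w] + e
        total += 0.25 * ((gamma[w] - beta) + lam[w] * (y - gamma[w]))
    return total

lam0 = {0: 1.0, 1: 1.0}          # true representer for the mean functional
gamma_wrong = {0: 0.0, 1: 10.0}  # misspecified regression
lam_wrong = {0: 2.0, 1: -0.5}    # misspecified representer

# Double robustness: the moment has mean zero at beta0 if EITHER
# nuisance is correct, but generally not when both are wrong.
assert abs(expected_psi(beta0, gamma0, lam_wrong)) < 1e-12
assert abs(expected_psi(beta0, gamma_wrong, lam0)) < 1e-12
assert abs(expected_psi(beta0, gamma_wrong, lam_wrong)) > 1e-3
```

Since the expectations are computed exactly over the four support points, the two zero restrictions hold to machine precision rather than only approximately.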
When the closure of $\{\Pi(\rho(\gamma))(x_{i}):\gamma\in\Gamma\}$ is not all of $\Lambda$ then there will exist $\tilde{\lambda}\in\Lambda$ such that $\tilde{\lambda}\neq0$ and $$E[\tilde{\lambda}(x_{i})\rho(z_{i},\gamma)]=E[\tilde{\lambda}(x_{i})\Pi(\rho(\gamma))(x_{i})]=0\text{ for all }\gamma\in\Gamma.$$ In that case Assumption C1 iii) will also be satisfied for $\lambda_{0}(x_{i})+\tilde{\lambda}(x_{i}).$ We can think of this case as one where $\gamma_{0}$ is overidentified, similarly to Chen and Santos (2015). As discussed in Ichimura and Newey (2017), the different $\lambda_{0}(x_{i})$ would correspond to different first step estimators. The partial robustness results of the last Section can be extended to the orthogonality condition setting of Assumption C1. Let $\Lambda^{\ast}$ be a closed linear subset of $\Lambda,$ such as a finite-dimensional linear set, and let $\gamma^{\ast}$ be such that $E[\lambda(x_{i})\rho(z_{i},\gamma^{\ast})]=0$ for all $\lambda\in\Lambda^{\ast}$. Note that if $\lambda_{0}\in\Lambda^{\ast}$ it follows by Assumption C1 iii) that$$E[m(z_{i},\beta_{0},\gamma^{\ast})]=-E[\lambda_{0}(x_{i})\rho(z_{i},\gamma^{\ast})]=0.$$ <span style="font-variant:small-caps;">Theorem C2:</span> *If* $\Lambda^{\ast}$ *is a closed linear subset of* $\Lambda$*,* $E[\lambda(x_{i})\rho(z_{i},\gamma^{\ast})]=0$ *for all* $\lambda\in\Lambda^{\ast}$*, and Assumption C1 iii) is satisfied with* $\lambda_{0}\in\Lambda^{\ast}$ *then*$$E[m(z_{i},\beta_{0},\gamma^{\ast})]=0.$$

Appendix D: Regularity Conditions for Plug-in Estimators
========================================================

In this Appendix we formulate regularity conditions for root-n consistency and asymptotic normality of the plug-in estimator $\tilde{\beta}$ as described in Section 2, where $m(z,\beta,\gamma)$ need not be LR.
These conditions are based on Assumptions 4-6 applied to the influence adjustment $\phi(z,\gamma,\lambda)$ corresponding to $m(z,\beta,\gamma)$ and $\hat{\gamma}.$ For this purpose we treat $\hat{\lambda}$ as any object that can approximate $\lambda_{0}(x),$ not just as an estimator of $\lambda_{0}.$ <span style="font-variant:small-caps;">Theorem D1:</span> *If Assumptions 4-6 are satisfied, Assumption 7 is satisfied with* $m(z,\beta,\gamma)$ *replacing* $\psi(z,\beta,\gamma,\lambda),$ $\tilde{\beta}\overset{p}{\longrightarrow}\beta_{0},$ $\hat{W}\overset{p}{\longrightarrow}W$*,* $M^{\prime}WM$ *is nonsingular,* $E[\left\Vert \psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\right\Vert ^{2}]<\infty,$ *and*$$\hat{R}_{5}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\hat{\gamma}_{i},\hat{\lambda}_{i})\overset{p}{\longrightarrow}0,$$ *then for* $\Omega=E[\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})\psi(z_{i},\beta_{0},\gamma_{0},\lambda_{0})^{\prime}],$$$\sqrt{n}(\tilde{\beta}-\beta_{0})\overset{d}{\longrightarrow}N(0,V),\quad V=(M^{\prime}WM)^{-1}M^{\prime}W\Omega WM(M^{\prime}WM)^{-1}.$$ The condition $\hat{R}_{5}\overset{p}{\longrightarrow}0$ was discussed in Section 7. It is interesting to note that $\hat{R}_{5}\overset{p}{\longrightarrow}0$ appears to be a complicated condition that seems to depend on details of the estimator $\hat{\gamma}_{i}$ in a way that Assumptions 4-7 do not. In this way the regularity conditions for the LR estimator seem to be simpler and more general than those for the plug-in estimator.

Acknowledgements
================

Whitney Newey gratefully acknowledges support from the NSF. Helpful comments were provided by M. Cattaneo, B. Deaner, J. Hahn, M. Jansson, Z. Liao, A. Pakes, R. Moon, A. de Paula, V. Semenova, and participants in seminars at Cambridge, Columbia, Cornell, Harvard-MIT, UCL, USC, Yale, and Xiamen. B. Deaner provided capable research assistance.

**REFERENCES**

<span style="font-variant:small-caps;">Ackerberg, D., X. Chen, and J.
Hahn</span> (2012): “A Practical Asymptotic Variance Estimator for Two-step Semiparametric Estimators,” *The Review of Economics and Statistics* 94, 481–498. <span style="font-variant:small-caps;">Ackerberg, D., X. Chen, J. Hahn, and Z. Liao</span> (2014): “Asymptotic Efficiency of Semiparametric Two-Step GMM,” *The Review of Economic Studies* 81, 919–943. <span style="font-variant:small-caps;">Ai, C. [and]{} X. Chen</span> (2003): “Efficient Estimation of Models with Conditional Moment Restrictions Containing Unknown Functions,” *Econometrica* 71, 1795-1843. <span style="font-variant:small-caps;">Ai, C. [and]{} X. Chen</span> (2007): “Estimation of Possibly Misspecified Semiparametric Conditional Moment Restriction Models with Different Conditioning Variables,” *Journal of Econometrics* 141, 5–43. <span style="font-variant:small-caps;">Ai, C. [and]{} X. Chen</span> (2012): “The Semiparametric Efficiency Bound for Models of Sequential Moment Restrictions Containing Unknown Functions,” *Journal of Econometrics* 170, 442–457. <span style="font-variant:small-caps;">Andrews, D.W.K.</span> (1994): “Asymptotics for Semiparametric Models via Stochastic Equicontinuity,” *Econometrica* 62, 43-72. <span style="font-variant:small-caps;">Athey, S., G. Imbens, and S. Wager</span> (2017): “Efficient Inference of Average Treatment Effects in High Dimensions via Approximate Residual Balancing,” *Journal of the Royal Statistical Society, Series B,* forthcoming. <span style="font-variant:small-caps;">Bajari, P., V. Chernozhukov, H. Hong, and D. Nekipelov</span> (2009): “Nonparametric and Semiparametric Analysis of a Dynamic Discrete Game,” working paper, Stanford. <span style="font-variant:small-caps;">Bajari, P., H. Hong, J. Krainer, and D. Nekipelov</span> (2010): “Estimating Static Models of Strategic Interactions,” *Journal of Business and Economic Statistics* 28, 469-482. <span style="font-variant:small-caps;">Bang, H., and J.M.
Robins</span> (2005): “Doubly Robust Estimation in Missing Data and Causal Inference Models,” *Biometrics* 61, 962–972. <span style="font-variant:small-caps;">Belloni, A., D. Chen, V. Chernozhukov, and C. Hansen</span> (2012): “Sparse Models and Methods for Optimal Instruments with an Application to Eminent Domain,” *Econometrica* 80, 2369–2429. <span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, and Y. Wei</span> (2013): “Honest Confidence Regions for Logistic Regression with a Large Number of Controls,” arXiv preprint arXiv:1304.3969. <span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, and C. Hansen</span> (2014): “Inference on Treatment Effects after Selection among High-Dimensional Controls,” *The Review of Economic Studies* 81, 608–650. <span style="font-variant:small-caps;">Belloni, A., V. Chernozhukov, I. Fernandez-Val, and C. Hansen</span> (2016): “Program Evaluation and Causal Inference with High-Dimensional Data,” *Econometrica* 85, 233-298. <span style="font-variant:small-caps;">Bera, A.K., G. Montes-Rojas, and W. Sosa-Escudero</span> (2010): “General Specification Testing with Locally Misspecified Models,” *Econometric Theory* 26, 1838–1845. <span style="font-variant:small-caps;">Bickel, P.J.</span> (1982): “On Adaptive Estimation,” *Annals of Statistics* 10, 647-671. <span style="font-variant:small-caps;">Bickel, P.J. and Y. Ritov</span> (1988): “Estimating Integrated Squared Density Derivatives: Sharp Best Order of Convergence Estimates,” *Sankhyā: The Indian Journal of Statistics, Series A* 238, 381-393. <span style="font-variant:small-caps;">Bickel, P.J., C.A.J. Klaassen, Y. Ritov, [and]{} J.A. Wellner</span> (1993): *Efficient and Adaptive Estimation for Semiparametric Models*, Springer-Verlag, New York. <span style="font-variant:small-caps;">Bickel, P.J. and Y. Ritov</span> (2003): “Nonparametric Estimators Which Can Be ‘Plugged-in’,” *Annals of Statistics* 31, 1033-1053.
<span style="font-variant:small-caps;">Bonhomme, S., and M. Weidner</span> (2018): “Minimizing Sensitivity to Misspecification,” working paper. <span style="font-variant:small-caps;">Cattaneo, M.D., and M. Jansson</span> (2017): “Kernel-Based Semiparametric Estimators: Small Bandwidth Asymptotics and Bootstrap Consistency,” *Econometrica*, forthcoming. <span style="font-variant:small-caps;">Cattaneo, M.D., M. Jansson, and X. Ma</span> (2017): “Two-step Estimation and Inference with Possibly Many Included Covariates,” working paper. <span style="font-variant:small-caps;">Chamberlain, G.</span> (1987): “Asymptotic Efficiency in Estimation with Conditional Moment Restrictions,” *Journal of Econometrics* 34, 305–334. <span style="font-variant:small-caps;">Chamberlain, G.</span> (1992): “Efficiency Bounds for Semiparametric Regression,” *Econometrica* 60, 567–596. <span style="font-variant:small-caps;">Chen, X. and X. Shen</span> (1997): “Sieve Extremum Estimates for Weakly Dependent Data,” *Econometrica* 66, 289-314. <span style="font-variant:small-caps;">Chen, X., O.B. Linton, [and]{} I. [van Keilegom]{}</span> (2003): “Estimation of Semiparametric Models when the Criterion Function Is Not Smooth,” *Econometrica* 71, 1591-1608. <span style="font-variant:small-caps;">Chen, X., and Z. Liao</span> (2015): “Sieve Semiparametric Two-Step GMM Under Weak Dependence,” *Journal of Econometrics* 189, 163–186. <span style="font-variant:small-caps;">Chen, X., and A. Santos</span> (2015): “Overidentification in Regular Models,” working paper. <span style="font-variant:small-caps;">Chernozhukov, V., C. Hansen, and M. Spindler</span> (2015): “Valid Post-Selection and Post-Regularization Inference: An Elementary, General Approach,” *Annual Review of Economics* 7, 649–688. <span style="font-variant:small-caps;">Chernozhukov, V., G.W. Imbens and W.K. Newey</span> (2007): “Instrumental Variable Identification and Estimation of Nonseparable Models,” *Journal of Econometrics* 139, 4-14.
<span style="font-variant:small-caps;">Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey</span> (2017): “Double/Debiased/Neyman Machine Learning of Treatment Effects,” *American Economic Review Papers and Proceedings* 107, 261-65. <span style="font-variant:small-caps;">Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, J. Robins</span> (2018): “Debiased/Double Machine Learning for Treatment and Structural Parameters,” *Econometrics Journal* 21, C1-C68. <span style="font-variant:small-caps;">Chernozhukov, V., J.A. Hausman, and W.K. Newey</span> (2018): “Demand Analysis with Many Prices,” working paper, MIT. <span style="font-variant:small-caps;">Chernozhukov, V., W.K. Newey, J. Robins</span> (2018): “Double/De-Biased Machine Learning Using Regularized Riesz Representers,” arXiv. <span style="font-variant:small-caps;">Escanciano, J-C., D. Jacho-Chávez, and A. Lewbel</span> (2016): “Identification and Estimation of Semiparametric Two Step Models,” *Quantitative Economics* 7, 561-589. <span style="font-variant:small-caps;">Farrell, M.</span> (2015): “Robust Inference on Average Treatment Effects with Possibly More Covariates than Observations,” *Journal of Econometrics* 189, 1–23. <span style="font-variant:small-caps;">Firpo, S. and C. Rothe</span> (2017): “Semiparametric Two-Step Estimation Using Doubly Robust Moment Conditions,” working paper. <span style="font-variant:small-caps;">Graham, B.W.</span> (2011): “Efficiency Bounds for Missing Data Models with Semiparametric Restrictions,” *Econometrica* 79, 437–452. <span style="font-variant:small-caps;">Hahn, J.</span> (1998): “On the Role of the Propensity Score in Efficient Semiparametric Estimation of Average Treatment Effects,” *Econometrica* 66, 315-331. <span style="font-variant:small-caps;">Hahn, J. and G. Ridder</span> (2013): “Asymptotic Variance of Semiparametric Estimators With Generated Regressors,” *Econometrica* 81, 315-340.
<span style="font-variant:small-caps;">Hahn, J. and G. Ridder</span> (2016): “Three-stage Semi-Parametric Inference: Control Variables and Differentiability,” working paper. <span style="font-variant:small-caps;">Hahn, J., Z. Liao, and G. Ridder</span> (2016): “Nonparametric Two-Step Sieve M Estimation and Inference,” working paper, UCLA. <span style="font-variant:small-caps;">Hasminskii, R.Z. and I.A. Ibragimov</span> (1978): “On the Nonparametric Estimation of Functionals,” *Proceedings of the 2nd Prague Symposium on Asymptotic Statistics*, 41-51. <span style="font-variant:small-caps;">Hausman, J.A., and W.K. Newey</span> (2016): “Individual Heterogeneity and Average Welfare,” *Econometrica* 84, 1225-1248. <span style="font-variant:small-caps;">Hausman, J.A., and W.K. Newey</span> (2017): “Nonparametric Welfare Analysis,” *Annual Review of Economics* 9, 521–546. <span style="font-variant:small-caps;">Hirano, K., G. Imbens, and G. Ridder</span> (2003): “Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score,” *Econometrica* 71, 1161–1189. <span style="font-variant:small-caps;">Hotz, V.J. and R.A. Miller</span> (1993): “Conditional Choice Probabilities and the Estimation of Dynamic Models,” *Review of Economic Studies* 60, 497-529. <span style="font-variant:small-caps;">Huber, P.</span> (1981): *Robust Statistics,* New York: Wiley. <span style="font-variant:small-caps;">Ichimura, H.</span> (1993): “Estimation of Single Index Models,” *Journal of Econometrics* 58, 71-120. <span style="font-variant:small-caps;">Ichimura, H., [and]{} S. Lee</span> (2010): “Characterization of the Asymptotic Distribution of Semiparametric M-Estimators,” *Journal of Econometrics* 159, 252–266. <span style="font-variant:small-caps;">Ichimura, H. and W.K. Newey</span> (2017): “The Influence Function of Semiparametric Estimators,” CEMMAP Working Paper, CWP06/17. <span style="font-variant:small-caps;">Kandasamy, K., A. Krishnamurthy, B. Póczos, L.
Wasserman, J.M. Robins</span> (2015): “Influence Functions for Machine Learning: Nonparametric Estimators for Entropies, Divergences and Mutual Informations,” arXiv. <span style="font-variant:small-caps;">Lee, Lung-fei</span> (2005): “A $C(\alpha)$-type Gradient Test in the GMM Approach,” working paper. <span style="font-variant:small-caps;">Luenberger, D.G.</span> (1969): *Optimization by Vector Space Methods*, New York: Wiley. <span style="font-variant:small-caps;">Murphy, K.M. and R.H. Topel</span> (1985): “Estimation and Inference in Two-Step Econometric Models,” *Journal of Business and Economic Statistics* 3, 370-379. <span style="font-variant:small-caps;">Newey, W.K.</span> (1984): “A Method of Moments Interpretation of Sequential Estimators,” *Economics Letters* 14, 201-206. <span style="font-variant:small-caps;">Newey, W.K.</span> (1990): “Semiparametric Efficiency Bounds,” *Journal of Applied Econometrics* 5, 99-135. <span style="font-variant:small-caps;">Newey, W.K.</span> (1991): “Uniform Convergence in Probability and Stochastic Equicontinuity,” *Econometrica* 59, 1161-1167. <span style="font-variant:small-caps;">Newey, W.K.</span> (1994a): “The Asymptotic Variance of Semiparametric Estimators,” *Econometrica* 62, 1349-1382. <span style="font-variant:small-caps;">Newey, W.K.</span> (1994b): “Kernel Estimation of Partial Means and a General Variance Estimator,” *Econometric Theory* 10, 233-253. <span style="font-variant:small-caps;">Newey, W.K.</span> (1997): “Convergence Rates and Asymptotic Normality for Series Estimators,” *Journal of Econometrics* 79, 147-168. <span style="font-variant:small-caps;">Newey, W.K.</span> (1999): “Consistency of Two-Step Sample Selection Estimators Despite Misspecification of Distribution,” *Economics Letters* 63, 129-132. <span style="font-variant:small-caps;">Newey, W.K., [and]{} D. McFadden</span> (1994): “Large Sample Estimation and Hypothesis Testing,” in *Handbook of Econometrics*, Vol. 4, ed. by R. Engle, and D.
McFadden, pp. 2113-2241. North Holland. <span style="font-variant:small-caps;">Newey, W.K., [and]{} J.L. Powell</span> (1989): “Instrumental Variable Estimation of Nonparametric Models,” presented at Econometric Society winter meetings, 1988. <span style="font-variant:small-caps;">Newey, W.K., [and]{} J.L. Powell</span> (2003): “Instrumental Variable Estimation of Nonparametric Models,” *Econometrica* 71, 1565-1578. <span style="font-variant:small-caps;">Newey, W.K., F. Hsieh, [and]{} J.M. Robins</span> (1998): “Undersmoothing and Bias Corrected Functional Estimation,” MIT Dept. of Economics working paper. <span style="font-variant:small-caps;">Newey, W.K., F. Hsieh, [and]{} J.M. Robins</span> (2004): “Twicing Kernels and a Small Bias Property of Semiparametric Estimators,” *Econometrica* 72, 947-962. <span style="font-variant:small-caps;">Newey, W.K., and J. Robins</span> (2017): “Cross Fitting and Fast Remainder Rates for Semiparametric Estimation,” arXiv. <span style="font-variant:small-caps;">Neyman, J.</span> (1959): “Optimal Asymptotic Tests of Composite Statistical Hypotheses,” *Probability and Statistics, the Harald Cramer Volume*, ed. U. Grenander, New York, Wiley. <span style="font-variant:small-caps;">Pfanzagl, J., and W. Wefelmeyer</span> (1982): *Contributions to a General Asymptotic Statistical Theory*, Springer Lecture Notes in Statistics. <span style="font-variant:small-caps;">Pakes, A. and G.S. Olley</span> (1995): “A Limit Theorem for a Smooth Class of Semiparametric Estimators,” *Journal of Econometrics* 65, 295-332. <span style="font-variant:small-caps;">Powell, J.L., J.H. Stock, and T.M. Stoker</span> (1989): “Semiparametric Estimation of Index Coefficients,” *Econometrica* 57, 1403-1430. <span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and L.P. Zhao</span> (1994): “Estimation of Regression Coefficients When Some Regressors Are Not Always Observed,” *Journal of the American Statistical Association* 89, 846–866.
<span style="font-variant:small-caps;">Robins, J.M. and A. Rotnitzky</span> (1995): “Semiparametric Efficiency in Multivariate Regression Models with Missing Data,” *Journal of the American Statistical Association* 90, 122–129. <span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and L.P. Zhao</span> (1995): “Analysis of Semiparametric Regression Models for Repeated Outcomes in the Presence of Missing Data,” *Journal of the American Statistical Association* 90, 106–121. <span style="font-variant:small-caps;">Robins, J.M., and A. Rotnitzky</span> (2001): Comment on “Inference for Semiparametric Models: Some Questions and an Answer” by P.J. Bickel and J. Kwon, *Statistica Sinica* 11, 863-960. <span style="font-variant:small-caps;">Robins, J.M., A. Rotnitzky, and M. van der Laan</span> (2000): Comment on “On Profile Likelihood” by S.A. Murphy and A.W. van der Vaart, *Journal of the American Statistical Association* 95, 431-435. <span style="font-variant:small-caps;">Robins, J., M. Sued, Q. Lei-Gomez, and A. Rotnitzky</span> (2007): “Comment: Performance of Double-Robust Estimators When ‘Inverse Probability’ Weights Are Highly Variable,” *Statistical Science* 22, 544–559. <span style="font-variant:small-caps;">Robins, J.M., L. Li, E. Tchetgen, and A. van der Vaart</span> (2008): “Higher Order Influence Functions and Minimax Estimation of Nonlinear Functionals,” *IMS Collections Probability and Statistics: Essays in Honor of David A. Freedman, Vol 2,* 335-421. <span style="font-variant:small-caps;">Robins, J.M., L. Li, R. Mukherjee, E. Tchetgen, and A. van der Vaart</span> (2017): “Higher Order Estimating Equations for High-Dimensional Models,” *Annals of Statistics,* forthcoming. <span style="font-variant:small-caps;">Robinson, P.M.</span> (1988): “Root-N-consistent Semiparametric Regression,” *Econometrica* 56, 931-954.
<span style="font-variant:small-caps;">Rust, J.</span> (1987): “Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher,” *Econometrica* 55, 999-1033. <span style="font-variant:small-caps;">Santos, A.</span> (2011): “Instrumental Variable Methods for Recovering Continuous Linear Functionals,” *Journal of Econometrics* 161, 129-146. <span style="font-variant:small-caps;">Scharfstein, D.O., A. Rotnitzky, and J.M. Robins</span> (1999): Rejoinder to “Adjusting for Nonignorable Drop-out Using Semiparametric Non-response Models,” *Journal of the American Statistical Association* 94, 1135-1146. <span style="font-variant:small-caps;">Severini, T. and G. Tripathi</span> (2006): “Some Identification Issues in Nonparametric Linear Models with Endogenous Regressors,” *Econometric Theory* 22, 258-278. <span style="font-variant:small-caps;">Severini, T. and G. Tripathi</span> (2012): “Efficiency Bounds for Estimating Linear Functionals of Nonparametric Regression Models with Endogenous Regressors,” *Journal of Econometrics* 170, 491-498. <span style="font-variant:small-caps;">Schick, A.</span> (1986): “On Asymptotically Efficient Estimation in Semiparametric Models,” *Annals of Statistics* 14, 1139-1151. <span style="font-variant:small-caps;">Stoker, T.</span> (1986): “Consistent Estimation of Scaled Coefficients,” *Econometrica* 54, 1461-1482. <span style="font-variant:small-caps;">Tamer, E.</span> (2003): “Incomplete Simultaneous Discrete Response Model with Multiple Equilibria,” *Review of Economic Studies* 70, 147-165. <span style="font-variant:small-caps;">van der Laan, M. and D. Rubin</span> (2006): “Targeted Maximum Likelihood Learning,” U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 213. <span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (1991): “On Differentiable Functionals,” *The Annals of Statistics* 19, 178-204.
<span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (1998): *Asymptotic Statistics,* Cambridge University Press, Cambridge, England. <span style="font-variant:small-caps;">[van der Vaart]{}, A.W.</span> (2014): “Higher Order Tangent Spaces and Influence Functions,” *Statistical Science* 29, 679–686. <span style="font-variant:small-caps;">Wooldridge, J.M.</span> (1991): “On the Application of Robust, Regression-Based Diagnostics to Models of Conditional Means and Conditional Variances,” *Journal of Econometrics* 47, 5-46.
--- abstract: 'In state space models, smoothing refers to the task of estimating a latent stochastic process given noisy measurements related to the process. We propose an unbiased estimator of smoothing expectations. The lack-of-bias property has methodological benefits: independent estimators can be generated in parallel, and confidence intervals can be constructed from the central limit theorem to quantify the approximation error. To design unbiased estimators, we combine a generic debiasing technique for Markov chains with a Markov chain Monte Carlo algorithm for smoothing. The resulting procedure is widely applicable and we show in numerical experiments that the removal of the bias comes at a manageable increase in variance. We establish the validity of the proposed estimators under mild assumptions. Numerical experiments are provided on toy models, including a setting of highly-informative observations, and a realistic Lotka-Volterra model with an intractable transition density.' author: - | Pierre E. Jacob[^1]\ Department of Statistics, Harvard University\ Fredrik Lindsten and Thomas B. Schön\ Department of Information Technology, Uppsala University bibliography: - 'Biblio.bib' title: '**Smoothing with Couplings of Conditional Particle Filters**' --- [*Keywords:*]{} couplings, particle filtering, particle smoothing, debiasing techniques, parallel computation. Introduction\[sec:introduction\] ================================ Goal and content ---------------- In state space models, the observations are treated as noisy measurements related to an underlying latent stochastic process. The problem of smoothing refers to the estimation of trajectories of the underlying process given the observations [@cappe:ryden:2004]. For finite state spaces and linear Gaussian models, smoothing can be performed exactly. 
In general models, numerical approximations are required, and many state-of-the-art methods are based on particle methods [@douc:moulines:2014; @kantas2015particle]. Following this line of work, we propose a new method for smoothing in general state space models. Unlike existing methods, the proposed estimators are unbiased, which has direct benefits for parallelization and for the construction of confidence intervals. The proposed method combines recently proposed conditional particle filters [@andrieu:doucet:holenstein:2010] with debiasing techniques for Markov chains [@glynn2014exact]. Specifically, we show in Section \[sec:unbiasedsmoothing\] how to remove the bias of estimators constructed with conditional particle filters, in exchange for an increase of variance; this variance can then be controlled with tuning parameters, and arbitrarily reduced by averaging over independent replicates. The validity of the proposed approach relies on the finiteness of the computational cost and of the variance of the proposed estimators, which we establish under mild conditions in Section \[sec:newsmoother:theory\]. Methodological improvements are presented in Section \[sec:newsmoother:practical\], and comparisons with other smoothers in Section \[sec:comparison\]. Numerical experiments are provided in Section \[sec:numerics\], and Section \[sec:discussion\] concludes. Smoothing in state space models \[sec:intro:smoothing\] ------------------------------------------------------- The latent stochastic process $(x_{t})_{t\geq 0}$ takes values in $\mathbb{X}\subset \mathbb{R}^{d_x}$, and the observations $(y_t)_{t\geq 1}$ are in $\mathbb{Y}\subset \mathbb{R}^{d_y}$ for some $d_x,d_y \in\mathbb{N}$. A model specifies an initial distribution $m_0(dx_{0}|\theta)$ and a transition kernel $f(dx_{t}| x_{t-1},\theta)$ for the latent process. 
We will assume that we have access to deterministic functions $M$ and $F$, and random variables $U_t$ for $t\geq 0$, such that $M(U_0,\theta)$ follows $m_0(dx_0|\theta)$ and $F(x_{t-1},U_t,\theta)$ follows $f(dx_t|x_{t-1},\theta)$; we refer to these as random function representations of the process [see @diaconis1999iterated]. Conditionally upon the latent process, the observations are independent and their distribution is given by a measurement kernel $g(dy_{t}| x_{t},\theta)$. The model is parameterized by $\theta\in\Theta\subset \mathbb{R}^{d_\theta}$, for $d_\theta\in\mathbb{N}$. Filtering consists in approximating the distribution $p(dx_{t}| y_{1:t},\theta)$ for all times $t\geq 1$, whereas smoothing refers to the approximation of $p(dx_{0:T}|y_{1:T},\theta)$ for a fixed time horizon $T$, where for $s,t\in\mathbb{N}$, we write $s:t$ for the set $\{s,\ldots,t\}$, and $v_{s:t}$ for the vector $(v_s,\ldots,v_t)$. The parameter $\theta$ is hereafter fixed and removed from the notation, as is usually done in the smoothing literature [see Section 4 in @kantas2015particle]; we discuss unknown parameters in Section \[sec:discussion\]. Denote by $h$ a test function from $\mathbb{X}^{T+1}$ to $\mathbb{R}$, of which we want to compute the expectation with respect to the smoothing distribution $\pi(dx_{0:T})=p(dx_{0:T}|y_{1:T})$; we write $\pi(h)$ for $\int_{\mathbb{X}^{T+1}} h(x_{0:T}) \pi(dx_{0:T})$. For instance, with $h:x_{0:T}\mapsto x_t$ where $t\in 0:T$, $\pi(h)$ is the smoothing expectation $\mathbb{E}[x_t|y_{1:T}]$. Postponing a discussion on existing smoothing methods to Section \[sec:comparison\], we first describe the conditional particle filter [CPF, @andrieu:doucet:holenstein:2010], which is a variant of the particle filter [@doucet:defreitas:gordon:2001]. 
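To fix ideas, here is a minimal sketch of a random function representation for the scalar auto-regressive model used in Section \[sec:numerics:hiddenar\] ($x_0\sim\mathcal{N}(0,1)$, $x_t = \theta x_{t-1} + U_t$, $y_t = x_t + V_t$ with standard Normal noise); the function names are ours and not part of the model specification:

```python
import numpy as np

def M(u0, theta):
    # initial draw: x_0 = M(U_0, theta) with U_0 ~ N(0, 1)
    return u0

def F(x, u, theta):
    # transition: x_t = F(x_{t-1}, U_t, theta) with U_t ~ N(0, 1)
    return theta * x + u

def simulate(T, theta, rng):
    """Simulate (x_{0:T}, y_{1:T}) from the random function representation."""
    u = rng.standard_normal(T + 1)
    x = np.empty(T + 1)
    x[0] = M(u[0], theta)
    for t in range(1, T + 1):
        x[t] = F(x[t - 1], u[t], theta)
    y = x[1:] + rng.standard_normal(T)  # y_t ~ N(x_t, 1)
    return x, y
```

Fixing the variables $U_t$ and re-running `simulate` reproduces the same path, which is the property exploited later when propagating pairs of particles with common random numbers.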
Given a “reference” trajectory $X = x_{0:T}$, a CPF generates a new trajectory $X^\prime = x_{0:T}^\prime$ as described in Algorithm \[alg:conditional-particle-filter\], which defines a Markov kernel on the space of trajectories; we will write $x^\prime_{0:T} \sim \text{CPF}(x_{0:T},\cdot)$. This Markov kernel leaves $\pi$ invariant and ergodic averages of the resulting chains consistently estimate integrals with respect to $\pi$, under mild conditions [@andrieu:doucet:holenstein:2010; @ChopinS:2015; @LindstenDM:2015; @andrieuvihola2013uniform; @kuhlenschmidt2018stability; @Lee2018ccbpf]. We denote by $(X^{(n)})_{n\geq 0}$ a chain starting from a path $X^{(0)}$, and iterating through $X^{(n)}\sim\text{CPF}(X^{(n-1)},\cdot)$ for $n\geq 1$. In step 2.1. of Algorithm \[alg:conditional-particle-filter\], the resampling distribution $r(da^{1:N-1}|w^{1:N})$ refers to a distribution on $\{1,\ldots,N\}^{N-1}$ from which “ancestors” are drawn according to particle weights. The resampling distribution is an algorithmic choice; specific schemes for the conditional particle filter are described in @ChopinS:2015. Here we will use multinomial resampling throughout. In step 2.3., “normalize the weights” means dividing them by their sum. Instead of bootstrap particle filters [@gordon:salmon:smith:1993], where particles are propagated from the model transition, more sophisticated filters can readily be used in the CPF procedure. For instance, performance gains can be obtained with auxiliary particle filters [@pitt1999filtering; @johansen2008note], as illustrated in Section \[sec:numerics:hiddenar\]. In presenting algorithms we focus on bootstrap particle filters for simplicity. 
When the transition density is tractable, extensions of the CPF include backward sampling [@whiteleycommentonpmcmc; @LindstenS:2013] and ancestor sampling [@LindstenJS:2014], which is beneficial in the proposed approach as illustrated in Section \[sec:numerics:hiddenar\]. The complexity of a standard CPF update is of order $NT$, and the memory requirements are of order $T + N\log N$ [@jacob2015path]. The proposed method relies on CPF kernels but is different from Markov chain Monte Carlo (MCMC) estimators: it involves independent copies of unbiased estimators of $\pi(h)$. Thus it will be amenable to parallel computation and confidence intervals will be constructed in a different way than with standard MCMC output [e.g. Chapter 7 in @gelman2010handbook]; see Section \[sec:comparison\] for a comparison with existing smoothers. Debiasing Markov chains \[sec:debiasing\] ----------------------------------------- We briefly recall the debiasing technique of @glynn2014exact, see also @McLeish:2011 [@Rhee:Glynn:2012; @vihola2015unbiased] and references therein. Denote by $(X^{(n)})_{n\geq 0}$ and $({\tilde{X}}^{(n)})_{n\geq 0}$ two Markov chains with invariant distribution $\pi$, initialized from a distribution $\pi_0$. Assume that, for all $n\geq 0$, $X^{(n)}$ and ${\tilde{X}}^{(n)}$ have the same marginal distribution, and that $\lim_{n\to\infty} \mathbb{E}[h(X^{(n)})] = \pi(h)$. Writing the limit as a telescopic sum, and swapping infinite sum and expectation, which will be justified later on, we obtain $$\begin{aligned} \pi(h) &= \mathbb{E}[h(X^{(0)})] + \sum_{n=1}^\infty \mathbb{E}[h(X^{(n)}) - h(\tilde{X}^{(n-1)})] = \mathbb{E}[h(X^{(0)}) + \sum_{n=1}^\infty (h(X^{(n)}) - h(\tilde{X}^{(n-1)}))].\end{aligned}$$ Then, if it exists, the random variable $H_0 = h(X^{(0)}) + \sum_{n=1}^\infty (h(X^{(n)}) - h(\tilde{X}^{(n-1)}))$ is an unbiased estimator of $\pi(h)$. 
Furthermore, if the chains are coupled in such a way that there exists a time $\tau$, termed the *meeting time*, such that $X^{(n)}={\tilde{X}}^{(n-1)}$ almost surely for all $n\geq \tau$, then $H_0$ can be computed as $$H_0 = h(X^{(0)}) + \sum_{n=1}^{\tau - 1} (h(X^{(n)}) - h(\tilde{X}^{(n-1)})). \label{eq:RGestimator}$$ We refer to $H_0$ as a Rhee–Glynn estimator. Given that the cost of producing $H_0$ increases with $\tau$, it will be worth keeping in mind that we would prefer $\tau$ to take small values with large probability. The main contribution of the present article is to couple CPF chains and to use them in a Rhee–Glynn estimation procedure. Section \[sec:newsmoother:theory\] provides guarantees on the cost and the variance of $H_0$ under mild conditions, and Section \[sec:newsmoother:practical\] contains alternative estimators with reduced variance and practical considerations. Unbiased smoothing \[sec:unbiasedsmoothing\] ============================================ Coupled conditional particle filters \[sec:ccpf\] ------------------------------------------------- Our goal is to couple CPF chains $(X^{(n)})_{n\geq 0}$ and $({\tilde{X}}^{(n)})_{n\geq 0}$ such that the meeting time has finite expectation, in order to enable a Rhee–Glynn estimator for smoothing. A coupled conditional particle filter (CCPF) is a Markov kernel on the space of pairs of trajectories, such that $(X^\prime,{\tilde{X}}^\prime)\sim \text{CCPF}((X,{\tilde{X}}), \cdot)$ implies that $X^\prime\sim \text{CPF}(X, \cdot)$ and ${\tilde{X}}^\prime \sim \text{CPF}({\tilde{X}}, \cdot)$. Algorithm \[alg:coupled-conditional-particle-filter\] describes CCPF in pseudo-code, conditional upon $X = x_{0:T}$ and ${\tilde{X}}= {\tilde{x}}_{0:T}$. Two particle systems are initialized and propagated using common random numbers. The resampling steps and the selection of trajectories at the final step are performed jointly using couplings of discrete distributions. 
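The Rhee–Glynn construction just recalled can be illustrated outside the smoothing context. The sketch below (a toy of our making, not the CPF setting) computes $H_0$ of Eq. (\[eq:RGestimator\]) for a finite-state chain, coupling the two chains through a maximal coupling of their transition distributions so that they meet exactly:

```python
import numpy as np

def maximal_coupling(p, q, rng):
    """Draw (i, j) with marginals p and q, maximizing P(i == j)."""
    overlap = np.minimum(p, q)
    alpha = overlap.sum()
    if rng.random() < alpha:
        i = rng.choice(len(p), p=overlap / alpha)
        return i, i
    # otherwise draw independently from the normalized residuals
    i = rng.choice(len(p), p=(p - overlap) / (1 - alpha))
    j = rng.choice(len(q), p=(q - overlap) / (1 - alpha))
    return i, j

def rhee_glynn(P, h, pi0, rng):
    """Unbiased estimator H_0 of sum_i pi(i) h(i) for transition matrix P."""
    x = rng.choice(len(pi0), p=pi0)    # X^(0)
    xt = rng.choice(len(pi0), p=pi0)   # X~^(0)
    H = h[x]
    x = rng.choice(len(pi0), p=P[x])   # X^(1): the first chain is one step ahead
    while x != xt:                     # stop at the meeting time tau
        H += h[x] - h[xt]
        x, xt = maximal_coupling(P[x], P[xt], rng)
    return H
```

Averaging independent copies recovers the stationary expectation without any burn-in bias; the same mechanism is used below with coupled conditional particle filters in place of the maximal coupling.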
To complete the description of the CCPF procedure, we thus need to specify these couplings (for steps 2.1. and 3.1. in Algorithm \[alg:coupled-conditional-particle-filter\]). With the Rhee–Glynn estimation procedure in mind, we aim at achieving large meeting probabilities $\mathbb{P}(X^\prime = {\tilde{X}}^\prime | X,{\tilde{X}})$, so as to incur short meeting times on average. Coupled resampling \[sec:couplingparticlesystems\] -------------------------------------------------- The temporal index $t$ is momentarily removed from the notation: the task is that of sampling pairs $(a,{\tilde{a}})$ such that $\mathbb{P}(a=j)=w^{j}$ and $\mathbb{P}({\tilde{a}}=j)={\tilde{w}}^{j}$ for all $j\in 1:N$; this is a sufficient condition for CPF kernels to leave $\pi$ invariant [@andrieu:doucet:holenstein:2010]. A joint distribution on $\{1,\ldots,N\}^{2}$ is characterized by a matrix $P$ with non-negative entries $P^{ij}$, for $i,j\in\{ 1,\ldots,N\}$, that sum to one. The value $P^{ij}$ represents the probability of the event $(a,{\tilde{a}}) = (i,j)$. We consider the set $\mathcal{J}(w,{\tilde{w}})$ of matrices $P$ such that $P\mathds{1}=w$ and $P^{\mathsf{T}}\mathds{1}={\tilde{w}}$, where $\mathds{1}$ denotes a column vector of $N$ ones, $w = w^{1:N}$ and ${\tilde{w}}= {\tilde{w}}^{1:N}$. Matrices $P\in \mathcal{J}(w,{\tilde{w}})$ are such that $\mathbb{P}(a=j)=w^{j}$ and $\mathbb{P}({\tilde{a}}=j)={\tilde{w}}^{j}$ for $j\in 1:N$. Any choice of probability matrix $P\in\mathcal{J}(w,{\tilde{w}})$, and of a way of sampling $(a,{\tilde{a}})\sim P$, leads to a *coupled* resampling scheme. In order to keep the complexity of sampling $N$ pairs from $P$ linear in $N$, we focus on a particular choice. Other choices of coupled resampling schemes are given in @deligiannidis2015correlated [@jacob2016coupling; @sen2018coupling], following earlier works such as @pitt2002smooth [@lee2008towards]. 
We consider the *index-coupled* resampling scheme, used by @ChopinS:2015 in their theoretical analysis of the CPF, and by @jasra2015multilevel in a multilevel Monte Carlo context, see also Section 2.4 in @jacob2016coupling. The scheme amounts to a maximal coupling of discrete distributions on $\{1,\ldots,N\}$ with probabilities $w^{1:N}$ and ${\tilde{w}}^{1:N}$, respectively. This coupling maximizes the probability of the event $\{a = \tilde{a}\}$ under the marginal constraints. How to sample from a maximal coupling of discrete distributions is described e.g. in @lindvall2002lectures. The scheme is intuitive at the initial step of the CCPF, when $x_0^j = {\tilde{x}}_0^j$ for all $j=1,\ldots,N-1$: one would want pairs of ancestors $(a_0,{\tilde{a}}_0)$ to be such that $a_0 = {\tilde{a}}_0$, so that pairs of resampled particles remain identical. At later steps, the number of identical pairs across both particle systems might be small, or even null. In any case, at step 2.2. of Algorithm \[alg:coupled-conditional-particle-filter\], the same random number $U_{t}^j$ is used to compute $x^j_{t}$ and ${\tilde{x}}^j_{t}$ from their ancestors. If $a_{t-1}^j = {\tilde{a}}_{t-1}^j$, we select ancestor particles that were, themselves, computed with common random numbers at the previous step, and we give them common random numbers again. Thus this scheme maximizes the number of consecutive steps at which common random numbers are used to propagate each pair of particles. We now discuss why propagating pairs of particles with common random numbers might be desirable. Under assumptions on the random function representation of the latent process, using common random numbers to propagate pairs of particles results in the particles contracting. 
For instance, in an auto-regressive model where $F(x,U,\theta) = \theta x + U$, where $\theta \in (-1,1)$ and $U$ is the innovation term, we have $|F(x,U,\theta) - F({\tilde{x}},U,\theta)| = |\theta| |x-{\tilde{x}}|$, thus a pair of particles propagated with common variables $U$ contracts at a geometric rate. We can formulate assumptions directly on the function $x\mapsto \mathbb{E}_U[F(x,U,\theta)]$, such as Lipschitz conditions with respect to $x$, after having integrated $U$ out, for fixed $\theta$. Discussions on these assumptions can be found in @diaconis1999iterated, and an alternative method that would not require them is mentioned in Section \[sec:discussion\]. Rhee–Glynn smoothing estimator \[sec:rgsmoothing\] -------------------------------------------------- We now put together the Rhee–Glynn estimator of Section \[sec:debiasing\] with the CCPF algorithm of Section \[sec:ccpf\]. In passing we generalize the Rhee–Glynn estimator slightly by starting the telescopic sum at index $k\geq 0$ instead of zero, and denote it by $H_k$; $k$ becomes a tuning parameter, discussed in Section \[sec:newsmoother:practical\]. The procedure is fully described in Algorithm \[alg:rheeglynnsmoother\]; CPF and CCPF refer to Algorithms \[alg:conditional-particle-filter\] and \[alg:coupled-conditional-particle-filter\] respectively. By convention the sum from $k+1$ to $\tau-1$ in the definition of $H_k$ is set to zero whenever $k+1>\tau-1$. Thus the estimator $H_k$ is equal to $h(X^{(k)})$ on the event $\{k+1>\tau-1\}$. Recall that $h(X^{(k)})$ is in general a biased estimator of $\pi(h)$, since there is no guarantee that a CPF chain reaches stationarity within $k$ iterations. Thus the term $\sum_{n=k+1}^{\tau - 1}(h(X^{(n)}) - h({\tilde{X}}^{(n-1)}))$ acts as a bias correction. At step 1. of Algorithm \[alg:rheeglynnsmoother\], the paths $X^{(0)}$ and ${\tilde{X}}^{(0)}$ can be sampled independently or not from $\pi_0$. 
In the experiments we will initialize chains independently and $\pi_0$ will refer to the distribution of a path randomly chosen among the trajectories of a particle filter. Theoretical properties\[sec:newsmoother:theory\] ================================================ We give three sufficient conditions for the validity of Rhee–Glynn smoothing estimators. \[assumption:upperbound\] The measurement density of the model is bounded from above: there exists $\bar{g} < \infty$ such that, for all $y\in \mathbb{Y}$ and $x\in\mathbb{X}$, $g(y | x) \leq \bar{g}$. \[assumption:couplingmatrix\] The resampling probability matrix $P$, with rows summing to $w^{1:N}$ and columns summing to ${\tilde{w}}^{1:N}$, is such that, for all $i\in \{1,\ldots,N\}$, $P^{ii} \geq w^i {\tilde{w}}^i$. Furthermore, if $w^{1:N} = {\tilde{w}}^{1:N}$, then $P$ is a diagonal matrix with entries given by $w^{1:N}$. \[assumption:mixing\] Let $(X^{(n)})_{n \geq 0}$ be a Markov chain generated by the conditional particle filter and started from $\pi_0$, and $h$ a test function of interest. Then $\mathbb{E}\left[h(X^{(n)})\right] \xrightarrow[n\to \infty]{} \pi(h)$. Furthermore, there exists $\delta > 0$, $n_0 < \infty$ and $C<\infty$ such that, for all $n\geq n_0$, $\mathbb{E}\left[h(X^{(n)})^{2+\delta}\right]\leq C$. The first assumption is satisfied for wide classes of models where the measurements are assumed to be some transformation of the latent process with added noise. However, it would not be satisfied for instance in stochastic volatility models where it is often assumed that $Y|X=x\sim \mathcal{N}(0, \exp(x)^2)$ or variants thereof [e.g. @fulop2013efficient]. There, the measurement density would diverge when $y$ is exactly zero and $x\to -\infty$. A similar assumption is discussed in Section 3 of @whiteley2013stability. One can readily check that the second assumption always holds for the index-coupled resampling scheme. 
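As a concrete sketch, ancestor pairs under the index-coupled scheme can be drawn from the maximal coupling of $w^{1:N}$ and ${\tilde{w}}^{1:N}$ as follows (helper names are ours); the implied matrix has diagonal entries $\min(w^i,{\tilde{w}}^i)\geq w^i{\tilde{w}}^i$, and reduces to a diagonal matrix when the weight vectors coincide, as required by Assumption \[assumption:couplingmatrix\]:

```python
import numpy as np

def index_coupled_resampling(w, wt, N, rng):
    """Draw N ancestor pairs (a, at) with marginals w and wt, maximizing P(a == at)."""
    nu = np.minimum(w, wt)
    alpha = nu.sum()  # probability that a pair of indices is identical
    if alpha >= 1.0 - 1e-12:  # identical weight vectors: diagonal coupling
        common = rng.choice(len(w), size=N, p=nu / alpha)
        return common, common.copy()
    a = np.empty(N, dtype=int)
    at = np.empty(N, dtype=int)
    coupled = rng.random(N) < alpha
    n_c = int(coupled.sum())
    common = rng.choice(len(w), size=n_c, p=nu / alpha)
    a[coupled] = common
    at[coupled] = common
    # residual draws, with marginals proportional to w - nu and wt - nu
    a[~coupled] = rng.choice(len(w), size=N - n_c, p=(w - nu) / (1 - alpha))
    at[~coupled] = rng.choice(len(wt), size=N - n_c, p=(wt - nu) / (1 - alpha))
    return a, at
```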
The third assumption relates to the validity of MCMC estimators generated by the CPF algorithm, addressed under general assumptions in @ChopinS:2015 [@LindstenDM:2015; @andrieuvihola2013uniform]. Our main result states that the proposed estimator is unbiased, has a finite variance, and that the meeting time $\tau$ has tail probabilities bounded by those of a geometric variable, which implies in particular that the estimator has a finite expected cost. Under Assumptions \[assumption:upperbound\] and \[assumption:couplingmatrix\], for any initial distribution $\pi_0$, any number of particles $N\geq 2$ and time horizon $T\geq 1$, there exists $\varepsilon>0$, which might depend on $N$ and $T$, such that for all $n\geq 2$, $$\mathbb{P}(\tau > n) \leq (1-\varepsilon)^{n-1},$$ and therefore $\mathbb{E}[\tau]<\infty$. Under the additional Assumption \[assumption:mixing\], the Rhee–Glynn smoothing estimator $H_k$ of Algorithm \[alg:rheeglynnsmoother\] is such that, for any $k\geq 0$, $\mathbb{E}[H_k] = \pi(h)$ and $\mathbb{V}[H_k] < \infty$. \[thm:finitevariance\] The proof is in Appendices \[sec:proof:intermed\] and \[sec:proof:unbiased\]. Some aspects of the proof, not specific to the smoothing setting, are similar to the proofs of Theorem 1 in @rhee:phd, Theorem 2.1 in @McLeish:2011, Theorem 7 in @vihola2015unbiased, and results in @glynn2014exact. It is provided in univariate notation but the Rhee–Glynn smoother can estimate multivariate smoothing functionals, in which case the theorem applies component-wise. 
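Since the meeting time satisfies $\tau\geq 2$ by construction, so that $\mathbb{P}(\tau>0)=\mathbb{P}(\tau>1)=1$, the geometric tail bound yields the expected-cost claim directly (a short calculation, spelled out here for completeness):

```latex
\mathbb{E}[\tau] \;=\; \sum_{n\geq 0} \mathbb{P}(\tau > n)
\;\leq\; 2 + \sum_{n\geq 2} (1-\varepsilon)^{n-1}
\;=\; 2 + \frac{1-\varepsilon}{\varepsilon} \;<\; \infty.
```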
Improvements and tuning \[sec:newsmoother:practical\] ===================================================== Since $H_\ell$ is unbiased for all $\ell\geq 0$, we can compute $H_\ell$ for various values of $\ell$ between two integers $k\leq m$, and average these estimators to obtain $H_{k:m}$ defined as $$\begin{aligned} \label{eq:timeaverage} H_{k:m} & = \frac{1}{m-k+1}\sum_{n = k}^m \{h(X^{(n)}) + \sum_{\ell = n + 1}^{\tau - 1} (h(X^{(\ell)}) - h({\tilde{X}}^{(\ell-1)}))\} \nonumber \\ &= \frac{1}{m-k+1}\sum_{n = k}^m h(X^{(n)}) + \sum_{n =k + 1}^{\tau - 1} \frac{\min(m-k+1, n-k)}{m-k+1} (h(X^{(n)}) - h({\tilde{X}}^{(n-1)})).\end{aligned}$$ The term $(m-k+1)^{-1} \sum_{n = k}^m h(X^{(n)})$ is a standard ergodic average of a CPF chain, after $m$ iterations and discarding the first $k-1$ steps as burn-in. It is a biased estimator of $\pi(h)$ in general since $\pi_0$ is different from $\pi$. The other term acts as a bias correction. On the event $\tau - 1< k+1$ the correction term is equal to zero. As $k$ increases the bias of the term $(m-k+1)^{-1} \sum_{n = k}^m h(X^{(n)})$ decreases. The variance inflation of the Rhee–Glynn estimator decreases too, since the correction term is equal to zero with increasing probability. On the other hand, it can be wasteful to set $k$ to an overly large value, in the same way that it is wasteful to discard too many iterations as burn-in when computing MCMC estimators. In practice we propose to choose $k$ according to the distribution of $\tau$, which can be sampled from exactly by running Algorithm \[alg:rheeglynnsmoother\], as illustrated in the numerical experiments of Section \[sec:numerics\]. Conditional upon a choice of $k$, by analogy with MCMC estimators we can set $m$ to a multiple of $k$, such as $2k$ or $5k$. Indeed the proportion of discarded iterations is approximately $k/m$, and it appears desirable to keep this proportion low. 
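Eq. (\[eq:timeaverage\]) translates directly into code; the sketch below (function and variable names are ours) computes $H_{k:m}$ from the evaluations $h(X^{(n)})$, $h({\tilde{X}}^{(n)})$ and the meeting time, together with the confidence interval construction described next:

```python
from statistics import NormalDist, mean, stdev

def H_km(hx, hxt, tau, k, m):
    """Time-averaged estimator H_{k:m}; hx[n] and hxt[n] hold h(X^(n)) and
    h(X~^(n)), and tau is the meeting time."""
    avg = sum(hx[k:m + 1]) / (m - k + 1)
    correction = sum(
        min(m - k + 1, n - k) / (m - k + 1) * (hx[n] - hxt[n - 1])
        for n in range(k + 1, tau)
    )
    return avg + correction

def confidence_interval(estimates, alpha=0.05):
    """(1 - alpha) asymptotic interval from independent copies of H_{k:m}."""
    R = len(estimates)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half_width = z * stdev(estimates) / R ** 0.5
    center = mean(estimates)
    return center - half_width, center + half_width
```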
We stress that the proposed estimators are unbiased and have finite variance for any choice of $k$ and $m$; tuning $k$ and $m$ only impacts variance and cost. For a given choice of $k$ and $m$, the estimator $H_{k:m}$ can be sampled $R$ times independently in parallel. We denote the independent copies by $H_{k:m}^{(r)}$ for $r\in 1:R$. The smoothing expectation of interest $\pi(h)$ can then be approximated by $\bar{H}_{k:m}^R = R^{-1}\sum_{r=1}^R H_{k:m}^{(r)}$, with a variance that decreases linearly with $R$. From the central limit theorem the confidence interval $[\bar{H}_{k:m}^R + z_{\alpha/2} \hat{\sigma}^R/\sqrt{R}, \bar{H}_{k:m}^R + z_{1-\alpha/2} \hat{\sigma}^R/\sqrt{R}]$, where $\hat{\sigma}^R$ is the empirical standard deviation of $(H_{k:m}^{(r)})_{r=1}^R$ and $z_a$ is the $a$-th quantile of a standard Normal distribution, has $1-\alpha$ asymptotic coverage as $R\to \infty$. The central limit theorem is applicable as a consequence of Theorem \[thm:finitevariance\]. The variance of the proposed estimator can be further reduced by Rao–Blackwellization. In Eq. (\[eq:timeaverage\]), the random variable $h(X^{(n)})$ is obtained by applying the test function $h$ of interest to a trajectory drawn among $N$ trajectories, denoted by $x_{0:T}^j$ for $j=1,\ldots,N$, with probabilities $w_T^{1:N}$; see step 3 in Algorithms \[alg:conditional-particle-filter\] and \[alg:coupled-conditional-particle-filter\]. Thus the random variable $\sum_{j=1}^N w_T^{j}h(x_{0:T}^{j})$ is the conditional expectation of $h(X^{(n)})$ given the trajectories and $w_T^{1:N}$, which has the same expectation as $h(X^{(n)})$. Hence any term $h(X^{(n)})$ or $h({\tilde{X}}^{(n)})$ in $H_{k:m}$ can be replaced by similar conditional expectations. This enables the use of all the paths generated by the CPF and CCPF kernels, and not only the selected ones. As in other particle methods the choice of the number of particles $N$ is important. 
Here, the estimator $\bar{H}_{k:m}^R$ is consistent as $R\to \infty$ for any $N\geq 2$, but $N$ affects both the cost and the variance of each $H^{(r)}_{k:m}$. We can generate unbiased estimators for different values of $N$ and compare their costs and variances in preliminary runs. The scaling of $N$ with the time horizon $T$ is explored numerically in Section \[sec:numerics:hiddenar\]. If possible, one can also employ other algorithms than the bootstrap particle filter, as illustrated in Section \[sec:numerics:hiddenar\] with the auxiliary particle filter. Comparison with existing smoothers \[sec:comparison\] ===================================================== The proposed method combines elements from both particle smoothers and MCMC methods, but does not belong to either category. We summarize advantages and drawbacks below, after having discussed the cost of the proposed estimators. Each estimator $H_{k:m}$ requires two draws from $\pi_0$, here taken as the distribution of a trajectory selected from a particle filter with $N$ particles. Then, the estimator as described in Algorithm \[alg:rheeglynnsmoother\] requires a draw from the CPF kernel, $\tau-1$ draws from the CCPF kernel, and finally $m-\tau$ draws of the CPF kernel on the event $\{m>\tau\}$. The cost of a particle filter and of an iteration of CPF is usually dominated by the propagation of $N$ particles and the evaluation of their weights. The cost of an iteration of CCPF is approximately twice as large. Overall the cost of $H_{k:m}$ is thus of order $C(\tau,m,N) = N\times (3+2(\tau-1)+\max(0,m-\tau))$, for fixed $T$. The finiteness of the expected cost $\mathbb{E}[C(\tau,m,N)]$ is a consequence of Theorem \[thm:finitevariance\]. 
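This cost accounting can be kept track of with a small helper (a sketch; the unit is one sweep of $N$ particle propagations and weight evaluations):

```python
def cost(tau, m, N):
    """Cost of one H_{k:m} for fixed T: two draws from pi_0 and one CPF draw
    (three sweeps in total), tau - 1 CCPF draws (two sweeps each), and
    max(0, m - tau) further CPF draws."""
    return N * (3 + 2 * (tau - 1) + max(0, m - tau))
```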
The average $\bar{H}_{k:m}^R$ satisfies a central limit theorem parametrized by the number of estimators $R$, as discussed in Section \[sec:newsmoother:practical\]; however, since the cost of $H_{k:m}$ is random, it might be more relevant to consider central limit theorems parametrized by computational cost, as in @glynn1992asymptotic. The asymptotic inefficiency of the proposed estimators can be defined as $\mathbb{E}[C(\tau,m,N)]\times\mathbb{V}[H_{k:m}]$, which can be approximated with independent copies of $H_{k:m}$ and $\tau$, obtained by running Algorithm \[alg:rheeglynnsmoother\]. State-of-the-art particle smoothers include fixed-lag approximations [@kitagawa2001monte; @cappe:ryden:2004; @olsson2008sequential], forward filtering backward smoothers [@GodsillDW:2004; @del2010forward; @douc2011sequential; @taghavi2013adaptive], and smoothers based on the two-filter formula [@briers2010smoothing; @kantas2015particle]. These particle methods provide consistent approximations as $N\to\infty$, with associated mean squared error decreasing as $1/N$ [Section 4.4 of @kantas2015particle]; except for fixed-lag approximations for which some bias remains. The cost is typically of order $N$ with efficient implementations described in @fearnheadwyncolltawn2010 [@kantas2015particle; @olsson2017efficient], and is linear in $T$ for fixed $N$. Parallelization over the $N$ particles is mostly feasible, the main limitation coming from the resampling step [@murray2015parallel; @lee2015forest; @whiteley2016role; @paige2014asynchronous; @murray2016anytime]. The memory cost of particle filters is of order $N$, or $N\log N$ if trajectories are kept [@jacob2015path], see also @Koskela2018. Assessing the accuracy of particle approximations from a single run of these methods remains a major challenge; see @lee2015variance [@olsson2017numerically] for recent breakthroughs. 
Furthermore, we will see in Section \[sec:numerics:unlikely\] that the bias of particle smoothers cannot always be safely ignored. On the other hand, we will see in Section \[sec:numerics:pz\] that the variance of particle smoothers can be smaller than that of the proposed estimators, for a given computational cost. Thus, in terms of mean squared error per unit of computational cost, the proposed method is not expected to provide benefits. The main advantage of the proposed method over particle smoothers lies in the construction of confidence intervals, and the possibility of parallelizing over independent runs as opposed to interacting particles. Additionally, a user of particle smoothers who would want more precise results would increase the number of particles $N$, if enough memory is available, discarding previous runs. On the other hand, the proposed estimator $\bar{H}_{k:m}^R$ can be refined to arbitrary precision by drawing more independent copies of $H_{k:m}$, for a constant memory requirement. Other popular smoothers belong to the family of MCMC methods. Early examples include Gibbs samplers, updating components of the latent process conditionally on other components and on the observations [e.g. @carter1994gibbs]. The CPF kernel described in Section \[sec:intro:smoothing\] can be used in the standard MCMC way, averaging over as many iterations as possible [@andrieu:doucet:holenstein:2010]. The bias of MCMC estimators after a finite number of iterations is hard to assess, which makes the choice of burn-in period difficult. Asymptotically valid confidence intervals can be produced in various ways, for instance using the CODA package [@plummer2006coda]; see also @vats2018strong. On the other hand, parallelization over the iterations is intrinsically challenging with MCMC methods [@rosenthal2000parallel]. 
Therefore the proposed estimators have some advantages over existing methods, the main drawback being a potential increase in mean squared error for a given (serial) computational budget, as illustrated in the numerical experiments. Numerical experiments\[sec:numerics\] ===================================== We illustrate the tuning of the proposed estimators, their advantages and their drawbacks through numerical experiments. All estimators of this section employ the Rao–Blackwellization technique described in Section \[sec:newsmoother:practical\], and multinomial resampling is used within all filters. Hidden auto-regressive model\[sec:numerics:hiddenar\] ----------------------------------------------------- Our first example illustrates the proposed method, the impact of the number of particles $N$ and that of the time horizon $T$, and the benefits of auxiliary particle filters. We consider a linear Gaussian model, with $x_{0}\sim\mathcal{N}\left(0,1\right)$ and $x_{t}=\eta x_{t-1}+\mathcal{N}\left(0,1\right)$ for all $t \geq 1$, with $\eta=0.9$. We assume that $y_{t}\sim\mathcal{N}\left(x_{t},1\right)$ for all $t \geq 1$. We first generate $T = 100$ observations from the model, and consider the task of estimating all smoothing means, which corresponds to the test function $h: x_{0:T}\mapsto x_{0:T}$. With CPF kernels using bootstrap particle filters, with $N = 256$ particles and ancestor sampling [@LindstenJS:2014], we draw meeting times $\tau$ independently, and represent a histogram of them in Figure \[fig:ar1:meetings\]. Based on these meeting times, we can choose $k$ as a large quantile of the meeting times, for instance $k = 10$, and $m$ as a multiple of $k$, for instance $m = 2k = 20$. For this choice, we find the average compute cost of each estimator to approximately equal that of a particle filter with $28\times 256$ particles, with a memory usage equivalent to $2\times 256$ particles. 
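Since this model is linear and Gaussian, exact smoothing means are available; the following sketch (our notation) computes them with a scalar Kalman filter followed by a Rauch–Tung–Striebel backward pass:

```python
import numpy as np

def kalman_smoother_means(y, eta, sx2=1.0, sy2=1.0, m0=0.0, P0=1.0):
    """Smoothing means E[x_t | y_{1:T}] for x_0 ~ N(m0, P0),
    x_t = eta * x_{t-1} + N(0, sx2) and y_t = x_t + N(0, sy2)."""
    T = len(y)
    mf, Pf = np.empty(T + 1), np.empty(T + 1)  # filtering moments
    mp, Pp = np.empty(T + 1), np.empty(T + 1)  # one-step predictive moments
    mf[0], Pf[0] = m0, P0
    for t in range(1, T + 1):
        mp[t] = eta * mf[t - 1]
        Pp[t] = eta ** 2 * Pf[t - 1] + sx2
        K = Pp[t] / (Pp[t] + sy2)  # Kalman gain
        mf[t] = mp[t] + K * (y[t - 1] - mp[t])
        Pf[t] = (1.0 - K) * Pp[t]
    ms = np.empty(T + 1)
    ms[T] = mf[T]
    for t in range(T - 1, -1, -1):  # backward (RTS) recursion
        G = eta * Pf[t] / Pp[t + 1]
        ms[t] = mf[t] + G * (ms[t + 1] - mp[t + 1])
    return ms
```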
How many of these estimators can be produced in a given wall-clock time depends on available hardware. With $R=100$ independent estimators, we obtain $95\%$ confidence intervals indicated by black error bars in Figure \[fig:ar1:smoothingmeans\]. The true smoothing means, obtained by Kalman smoothing, are indicated by a line. The method is valid for all $N$, which prompts the question of the optimal choice of $N$. Intuitively, larger values of $N$ lead to smaller meeting times. However, the meeting time cannot be less than $2$ by definition, which leads to a trade-off. We verify this intuition by numerical simulations with $1,000$ independent runs. For $N=16$, $N=128$, $N=256$, $N=512$ and $N=1,024$, we find average meeting times of $97$, $15$, $7$, $4$ and $3$ respectively. After adjusting for the different numbers of particles, the expected cost of obtaining a meeting is approximately equivalent with $N=16$ and $N=512$, but more expensive for $N=1,024$. In practice, for specific integrals of interest, one can approximate the cost and the variance of the proposed estimators for various values of $N$, $k$ and $m$ using independent runs, and use the most favorable configuration in subsequent, larger experiments. Next we investigate the effect of the time horizon $T$. We expect the performance of the CPF kernel to decay as $T$ increases for a fixed $N$. We compensate by increasing $N$ linearly with $T$. Table \[table:effecthorizon\] reports the average meeting times obtained from $R=500$ independent runs. We see that the average meeting times are approximately constant or slightly decreasing over $T$, implying that the linear scaling of $N$ with $T$ is appropriate or even conservative, in agreement with the literature [e.g. @huggins2015sequential]. 
The table contains the average meeting times obtained with and without ancestor sampling [@LindstenJS:2014]; we observe significant reductions of average meeting times with ancestor sampling, but it requires tractable transition densities. Finally, for the present model we can employ an auxiliary particle filter, in which particles are propagated conditionally on the next observation. Table \[table:effecthorizon\] shows a significant reduction in expected meeting time. The combination of auxiliary particle filter and ancestor sampling naturally leads to the smallest expected meeting times. A hidden auto-regressive model with an unlikely observation {#sec:numerics:unlikely} ----------------------------------------------------------- We now illustrate the benefits of the proposed estimators in an example taken from @ruiz2016particle where particle filters exhibit a significant bias. The latent process is defined as $x_{0}\sim\mathcal{N}\left(0,0.1^{2}\right)$ and $x_{t}=\eta x_{t-1}+\mathcal{N}\left(0,0.1^{2}\right)$; we take $\eta=0.9$ and consider $T=10$ time steps. The process is observed only at time $T=10$, where $y_{T}=1$ and we assume $y_{T}\sim\mathcal{N}\left(x_{T},0.1^{2}\right)$. The observation $y_{T}$ is unlikely under the model. Therefore the filtering distributions and the smoothing distributions have little overlap, particularly for times $t$ close to $T$. This toy model is a stylized example of settings with highly-informative observations [@ruiz2016particle; @del2015sequential]. We consider the task of estimating the smoothing mean $\mathbb{E}[x_9|y_{10}]$. We run particle filters for different values of $N$, $10,000$ times independently, and plot kernel density estimators of the distributions of the estimators of $\mathbb{E}[x_9|y_{10}]$ in Figure \[fig:unlikely:pf\]. The dashed vertical line represents the estimand $\mathbb{E}[x_9|y_{10}]$, obtained analytically. 
We see that the bias diminishes when $N$ increases, but that it is still significant with $N=16,384$ particles. For any fixed $N$, if we were to ignore the bias and produce confidence intervals using the central limit theorem based on independent particle filter estimators, the associated coverage would go to zero as the number of independent runs increases. In contrast, confidence intervals obtained with the proposed unbiased estimators are shown in Figure \[fig:unlikely:rg\]. For each value of $N$, the average meeting time was estimated from $100$ independent runs (without ancestor sampling), and then $k$ was set to that estimate, and $m$ equal to $k$. Then, $R=10,000$ independent estimators were produced, and confidence intervals were computed as described in Section \[sec:newsmoother:practical\]. This leads to precise intervals for each choice of $N$. The average costs associated with $N=128$, $N=256$, $N=512$ and $N=1,024$ respectively matched the costs of particle filters with $3,814$, $4,952$, $9,152$ and $13,762$ particles. To conclude, if we match computational costs and compare mean squared errors, the proposed method is not necessarily advantageous. However, if the interest lies in confidence intervals with adequate coverage, the proposed approach comes with guarantees thanks to the lack of bias and the central limit theorem for i.i.d. variables. Prey-predator model \[sec:numerics:pz\] --------------------------------------- Our last example involves a model of phytoplankton–zooplankton dynamics taken from @jones2010bayesian, in which the transition density is intractable [@breto2009time; @jacob2015sequential]. The bootstrap particle filter is still implementable, and one can either keep the entire trajectories of the particle filter, or use fixed-lag approximations to perform smoothing. On the other hand, backward and ancestor sampling are not implementable.
The hidden state $x_t = (p_t, z_t)$ represents the population size of phytoplankton and zooplankton, and the transition from time $t$ to $t+1$ is given by a Lotka–Volterra equation, $$\frac{dp_t}{dt} = \alpha p_t - c p_t z_t , \quad \text{and}\quad \frac{dz_t}{dt} = e c p_t z_t -m_l z_t -m_q z_t^2,$$ where the stochastic daily growth rate $\alpha$ is drawn from $\mathcal{N}(\mu_\alpha,\sigma_\alpha^2)$ at every integer time $t$. The propagation of each particle involves solving the above equation numerically using a Runge-Kutta method in the `odeint` library [@ahnert2011odeint]. The initial distribution is given by $\log p_0 \sim \mathcal{N}(\log 2 , 1)$ and $\log z_0 \sim \mathcal{N}(\log 2, 1)$. The parameters $c$ and $e$ represent the clearance rate of the prey and the growth efficiency of the predator. Both $m_l$ and $m_q$ parameterize the mortality rate of the predator. The observations $y_t$ are noisy measurements of the phytoplankton $p_t$, $\log y_t \sim \mathcal{N}(\log p_t, 0.2^2)$; $z_t$ is not observed. We generate $T = 365$ observations using $\mu_\alpha = 0.7, \sigma_\alpha = 0.5$, $c = 0.25$, $e = 0.3$, $m_l = 0.1$, $m_q = 0.1$. We consider the problem of estimating the mean population of zooplankton at each time $t\in0:T$, denoted by $\mathbb{E}[z_t|y_{1:T}]$, given the data-generating parameter. The distribution of meeting times obtained with $N=4,096$ particles over $R=1,000$ experiments is shown in Figure \[fig:pz:meetings\]. Based on this graph, we choose $k=7$, $m=2k=14$, and produce $R=1,000$ independent estimators of the smoothing means $\mathbb{E}[z_t|y_{1:T}]$. We compute the smoothing means with a long CPF chain, taken as ground truth. We then compute the relative variance of our estimators, defined as their variance divided by the square of the smoothing means. We find the average cost of the proposed estimator to be equivalent to that of a particle filter with $78,377$ particles. 
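As an aside, the particle propagation described at the start of this subsection can be sketched as follows. This is a minimal sketch, not the implementation used for the experiments: we substitute a hand-rolled fourth-order Runge–Kutta step for the `odeint` C++ library, and the function name `propagate` is our own; the default parameter values mirror the data-generating values above.

```python
import numpy as np

def propagate(p, z, rng, mu_alpha=0.7, sigma_alpha=0.5,
              c=0.25, e=0.3, m_l=0.1, m_q=0.1, h=0.01):
    # Draw the stochastic daily growth rate, then integrate the
    # Lotka-Volterra dynamics over one unit of time with RK4 steps.
    alpha = rng.normal(mu_alpha, sigma_alpha)

    def f(y):
        p_t, z_t = y
        return np.array([alpha * p_t - c * p_t * z_t,
                         e * c * p_t * z_t - m_l * z_t - m_q * z_t ** 2])

    y = np.array([p, z], dtype=float)
    for _ in range(int(round(1.0 / h))):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y[0], y[1]
```

Sampling $\log p_0 \sim \mathcal{N}(\log 2, 1)$ and $\log z_0 \sim \mathcal{N}(\log 2, 1)$ and applying `propagate` repeatedly simulates the latent process; the observation density $\log y_t \sim \mathcal{N}(\log p_t, 0.2^2)$ then supplies the particle weights.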
To approximately match the cost, we thus run particle filters with $2^{16}=65,536$ particles, with and without fixed-lag smoothing with a lag of $10$. The resulting relative variances are shown in Figure \[fig:pz:relvar\]. We see that the proposed estimators yield a larger variance than particle filters, but that the difference is manageable. Fixed-lag smoothing provides significant variance reduction, particularly for earlier time indices. We can also verify that the bias of fixed-lag smoothing is negligible in the present example; this would however be hard to assess with fixed-lag smoothers alone. Discussion\[sec:discussion\] ============================ The performance of the proposed estimator is tied to the meeting time. As in @ChopinS:2015, the coupling inequality [@lindvall2002lectures] can be used to relate the meeting time with the mixing of the underlying conditional particle filter kernel. The proposed approach can be seen as a framework to parallelize CPF chains and to obtain reliable confidence intervals over independent replicates. Any improvement in the CPF directly translates into more efficient Rhee–Glynn estimators, as we have illustrated in Section \[sec:numerics:hiddenar\] with auxiliary particle filters and ancestor sampling. The methods proposed e.g. in @SinghLM:2017 [@del2015sequential; @guarniero2015iterated; @gerber2015sequential; @heng2017controlled] could also be used in Rhee–Glynn estimators, with the hope of obtaining shorter meeting times and smaller variance. We have considered the estimation of latent processes given known parameters. In the case of unknown parameters, joint inference of parameters and latent processes can be done with MCMC methods, and particle MCMC methods in particular [@andrieu:doucet:holenstein:2010]. Couplings of generic particle MCMC methods could be achieved by combining couplings proposed in the present article with those described in @jacob2017unbiased for Metropolis–Hastings chains. 
Furthermore, for fixed parameters, coupling the particle independent Metropolis–Hastings algorithm of @andrieu:doucet:holenstein:2010 would lead to unbiased estimators of smoothing expectations that would not require coupled resampling schemes (see Section \[sec:couplingparticlesystems\]). The appeal of the proposed smoother, namely parallelization over independent replicates and confidence intervals, would be shared by perfect samplers. These algorithms aim at the more ambitious task of sampling exactly from the smoothing distribution [@leedoucetperfectsimulation]. It remains unknown whether the proposed approach could play a role in the design of perfect samplers. We have established the validity of the Rhee–Glynn estimator under mild conditions, but its theoretical study as a function of the time horizon and the number of particles deserves further analysis [see @Lee2018ccbpf for a path forward]. Finally, together with Fisher’s identity [@douc:moulines:2014], the proposed smoother provides unbiased estimators of the score for models where the transition density is tractable. This could help maximize the likelihood via stochastic gradient ascent. **Acknowledgements.** The authors thank Marco Cuturi, Mathieu Gerber, Jeremy Heng and Anthony Lee for helpful discussions. This work was initiated during the workshop on *Advanced Monte Carlo methods for complex inference problems* at the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK held in April 2014. We would like to thank the organizers for a great event which led to this work. Intermediate result on the meeting probability \[sec:proof:intermed\] ===================================================================== Before proving Theorem \[thm:finitevariance\], we introduce an intermediate result on the probability of the chains meeting at the next step, irrespective of their current states.
The result provides a lower-bound on the probability of meeting in one step, for coupled chains generated by the coupled conditional particle filter (CCPF) kernel. Let $N\geq 2$ and $T\geq 1$ be fixed. Under Assumptions \[assumption:upperbound\] and \[assumption:couplingmatrix\], there exists $\varepsilon>0$, depending on $N$ and $T$, such that $$\forall X \in \mathbb{X}^{T+1}, \quad \forall {\tilde{X}}\in \mathbb{X}^{T+1}, \quad \mathbb{P}(X' = {\tilde{X}}' | X, {\tilde{X}}) \geq \varepsilon,$$ where $(X',{\tilde{X}}') \sim \text{CCPF}((X,{\tilde{X}}), \cdot)$. Furthermore, if $X = {\tilde{X}}$, then $X' = {\tilde{X}}'$ almost surely. \[lemma:meetingprobability\] The constant $\varepsilon$ depends on $N$ and $T$, and on the coupled resampling scheme being used. Lemma \[lemma:meetingprobability\] can be used, together with the coupling inequality [@lindvall2002lectures], to prove the ergodicity of the conditional particle filter kernel, which is akin to the approach of @ChopinS:2015. The coupling inequality states that the total variation distance between $X^{(n)}$ and ${\tilde{X}}^{(n-1)}$ is less than $2\mathbb{P}(\tau > n)$, where $\tau$ is the meeting time. By assuming ${\tilde{X}}^{(0)}\sim\pi$, ${\tilde{X}}^{(n)}$ follows $\pi$ at each step $n$, and we obtain a bound for the total variation distance between $X^{(n)}$ and $\pi$. Using Lemma \[lemma:meetingprobability\], we can bound the probability $\mathbb{P}(\tau > n)$ from above by $(1-\varepsilon)^n$, as in the proof of Theorem \[thm:finitevariance\] below. This implies that the computational cost of the proposed estimator has a finite expectation for all $N\geq 2$ and $T\geq 1$. *Proof of Lemma \[lemma:meetingprobability\]*. 
We write ${{\mathbb{P}}_{x_{0:t},\tilde x_{0:t}}}$ and ${{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}$ for the conditional probability and expectation, respectively, with respect to the law of the particles generated by the CCPF procedure conditionally on the reference trajectories up to time $t$, $(x_{0:t}, \tilde x_{0:t})$. Furthermore, let $\mathcal{F}_t$ denote the filtration generated by the CCPF at time $t$. We denote by $x_{0:t}^k$, for $k\in1:N$, the surviving trajectories at time $t$. Let $I_t \subseteq 1:N-1$ be the set of common particles at time $t$ defined by $I_t = \{j \in 1:N-1 : x_{0:t}^j = \tilde x_{0:t}^j \}$. The meeting probability can then be bounded by: $$\begin{gathered} {{\mathbb{P}}_{x_{0:T},\tilde x_{0:T}}}(x_{0:T}^\prime = \tilde x_{0:T}^\prime) = {{\mathbb{E}}_{x_{0:T},\tilde x_{0:T}}}\left[{\mathds{1}}\!\left(x_{0:T}^{b_T} = \tilde x_{0:T}^{\tilde{b}_T} \right)\right] \geq \sum_{k=1}^{N-1} {{\mathbb{E}}_{x_{0:T},\tilde x_{0:T}}}[{\mathds{1}}\!\left(k \in I_T\right) P_T^{kk}] \\ = (N-1){{\mathbb{E}}_{x_{0:T},\tilde x_{0:T}}}[{\mathds{1}}\!\left(1\in I_T \right) P_T^{11}] \geq \frac{N-1}{ (N\bar{g})^2} {{\mathbb{E}}_{x_{0:T},\tilde x_{0:T}}}[{\mathds{1}}\!\left(1\in I_T \right) g_T(x_T^1) g_T(\tilde x_T^1)],\end{gathered}$$ where we have used Assumptions \[assumption:upperbound\] and \[assumption:couplingmatrix\]. Now, let $\psi_t : {\mathbb{X}}^t \mapsto {\mathbb{R}}_+$ and consider $$\begin{aligned} \label{eq:crude:h} {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left( 1\in I_t \right) \psi_t(x_{0:t}^1) \psi_t(\tilde x_{0:t}^1)] = {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left( 1\in I_t \right) \psi_t(x_{0:t}^1)^2],\end{aligned}$$ since the two trajectories agree on $\{1\in I_t\}$.
We have $$\begin{aligned} {\mathds{1}}\!\left( 1\in I_t \right) \geq \sum_{k=1}^{N-1} {\mathds{1}}\!\left(k\in I_{t-1} \right) {\mathds{1}}\!\left(a_{t-1}^1 = \tilde a_{t-1}^1 = k \right),\end{aligned}$$ and thus $$\begin{gathered} \label{eq:crude:h2} {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left( 1\in I_t \right) \psi_t(x_{0:t}^1)^2] \\ \geq {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[\sum_{k=1}^{N-1} {\mathds{1}}\!\left(k\in I_{t-1} \right) {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[ {\mathds{1}}\!\left(a_{t-1}^1 = \tilde a_{t-1}^1 = k \right) \psi_t(x_{0:t}^1)^2 \mid \mathcal{F}_{t-1} ]] \\ = (N-1){{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left(1\in I_{t-1} \right) {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[ {\mathds{1}}\!\left(a_{t-1}^1 = \tilde a_{t-1}^1 = 1 \right) \psi_t(x_{0:t}^1)^2 \mid \mathcal{F}_{t-1} ]].\end{gathered}$$ The inner conditional expectation can be computed as $$\begin{gathered} \label{eq:cruce:h2-inner} {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[ {\mathds{1}}\!\left(a_{t-1}^1 = \tilde a_{t-1}^1 = 1 \right) \psi_t(x_{0:t}^1)^2 \mid \mathcal{F}_{t-1} ] \\ =\sum_{k,\ell=1}^N P_{t-1}^{k\ell} {\mathds{1}}\!\left(k=\ell=1\right) \int \psi_t((x_{0:t-1}^k, x_t ))^2 f(dx_t|x_{t-1}^k) \\ = P_{t-1}^{11} \int \psi_t((x_{0:t-1}^1, x_t))^2 f(dx_t|x_{t-1}^1) \\ \geq \frac{g_{t-1}(x_{t-1}^1) g_{t-1}(\tilde x_{t-1}^1) }{(N\bar{g})^2} \left( \int \psi_t((x_{0:t-1}^1, x_t )) f(dx_t|x_{t-1}^1) \right)^2,\end{gathered}$$ where we have again used Assumptions \[assumption:upperbound\] and \[assumption:couplingmatrix\]. Note that this expression is independent of the final states of the reference trajectories, $(x_t, \tilde x_t)$, which can thus be dropped from the conditioning. Furthermore, on $\{1\in I_{t-1}\}$ it holds that $x_{0:t-1}^1 = \tilde x_{0:t-1}^1$ and therefore, combining Eqs. 
\[eq:crude:h\]–\[eq:cruce:h2-inner\] we get $$\begin{gathered} {{\mathbb{E}}_{x_{0:t},\tilde x_{0:t}}}[{\mathds{1}}\!\left( 1\in I_t \right) \psi_t(x_{0:t}^1) \psi_t(\tilde x_{0:t}^1)] \\ \geq \frac{(N-1)}{(N\bar{g})^2}{{\mathbb{E}}_{x_{0:t-1},\tilde x_{0:t-1}}}\Big[{\mathds{1}}\!\left(1\in I_{t-1} \right) g_{t-1}(x_{t-1}^1) \int \psi_t((x_{0:t-1}^1, x_t )) f(dx_t|x_{t-1}^1) \\ \times g_{t-1}(\tilde x_{t-1}^1) \int \psi_t((\tilde x_{0:t-1}^1, x_t )) f(dx_t|\tilde x_{t-1}^1) \Big].\end{gathered}$$ Thus, if we define for $t=1,\ldots,T-1$, $\psi_t(x_{0:t}) = g_t(x_t) \int \psi_{t+1}(x_{0:t+1}) f(dx_{t+1}|x_t)$, and $\psi_T(x_{0:T}) = g_T(x_T)$, it follows that $$\begin{aligned} {{\mathbb{P}}_{x_{0:T},\tilde x_{0:T}}}(x_{0:T}^\prime= \tilde x_{0:T}^\prime) &\geq \frac{(N-1)^{T}}{(N\bar{g})^{2T}} {{\mathbb{E}}_{x_{0},\tilde x_{0}}}[{\mathds{1}}\!\left(1\in I_1 \right) \psi_1(x_1^1)\psi_1(\tilde x_1^1)] \\ &= \frac{(N-1)^{T}}{(N\bar{g})^{2T}} {{\mathbb{E}}_{x_{0},\tilde x_{0}}}[\psi_1(x_1^1)^2] \geq \frac{(N-1)^{T}}{(N\bar{g})^{2T}} Z^2 > 0,\end{aligned}$$ where $Z > 0$ is the normalizing constant of the model, $Z=\int m_0(dx_0) \prod_{t=1}^{T}g_t(x_t) f(dx_t|x_{t-1})$. This concludes the proof of Lemma \[lemma:meetingprobability\]. For any fixed $T$, the bound goes to zero when $N\to \infty$. The proof fails to capture accurately the behaviour of $\varepsilon$ in Lemma \[lemma:meetingprobability\] as a function of $N$ and $T$. Indeed, we observe in the numerical experiments of Section \[sec:numerics\] that meeting times decrease when $N$ increases. Proof of Theorem \[thm:finitevariance\] \[sec:proof:unbiased\] ============================================================== The proof is similar to those presented in @rhee:phd, @McLeish:2011, @vihola2015unbiased, and @glynn2014exact. We can first upper-bound $\mathbb{P}\left(\tau>n\right)$, for all $n\geq2$, using Lemma \[lemma:meetingprobability\] [e.g. @williams1991probability exercise E.10.5].
We obtain for all $n\geq2$, $$\mathbb{P}\left(\tau>n\right)\leq\left(1-\varepsilon\right)^{n-1}.\label{eq:meetingtime:survival2}$$ This ensures that $\mathbb{E}[\tau]$ is finite, and that $\tau$ is almost surely finite. We then introduce the random variables $Z_{m}=\sum_{n=0}^{m} \Delta^{(n)}$ for all $m\geq 1$. Since $\tau$ is almost surely finite, and since $\Delta^{(n)} = 0$ for all $n \geq \tau$, then $Z_m\to Z_\tau = H_0$ almost surely when $m\to\infty$. We prove that $(Z_m)_{m\geq 1}$ is a Cauchy sequence in $L_2$, i.e. $\sup_{m'\geq m} \mathbb{E}\left[ (Z_{m'} - Z_m)^2 \right]$ goes to $0$ as $m\to\infty$. We write $$\begin{aligned} \label{eq:zcauchy} \mathbb{E}[(Z_{m'} - Z_m)^2] &= \sum_{n = m + 1}^{m'}\sum_{\ell = m + 1}^{m'} \mathbb{E}[\Delta^{(n)}\Delta^{(\ell)}].\end{aligned}$$ We use the Cauchy–Schwarz inequality to write $(\mathbb{E}[\Delta^{(n)}\Delta^{(\ell)}])^2 \leq \mathbb{E}[(\Delta^{(n)})^2]\mathbb{E}[(\Delta^{(\ell)})^2]$, and we note that $(\Delta^{(n)})^2= (\Delta^{(n)})^2\,\mathds{1}(\tau>n)$. Together with Hölder’s inequality with $p=1+\delta/2$ and $q=(2+\delta)/\delta$, where $\delta$ is as in Assumption \[assumption:mixing\], we can write $$\begin{aligned} \mathbb{E}\left[(\Delta^{(n)})^{2}\right] & \leq\mathbb{E}\left[(\Delta^{(n)})^{2+\delta}\right]^{1/(1+\delta/2)}\left(\left(1-\varepsilon\right)^{\delta/(2+\delta)}\right)^{n-1}.\end{aligned}$$ Furthermore, using Assumption \[assumption:mixing\] and Minkowski’s inequality, we obtain the bound $$\begin{aligned} \forall n\geq n_0, \qquad & \mathbb{E}\left[(\Delta^{(n)})^{2+\delta}\right]^{1/(1+\delta/2)}\leq C_{1},\end{aligned}$$ where $C_1$ is independent of $n$. The above inequalities lead to the terms $\mathbb{E}[\Delta^{(n)}\Delta^{(\ell)}]$ being upper bounded by an expression of the form $C_1 \eta^n \eta^\ell$, where $\eta \in (0,1)$. Thus we can compute a bound on Eq. \[eq:zcauchy\], by computing geometric series, and finally conclude that $(Z_m)_{m \geq 1}$ is a Cauchy sequence in $L_2$.
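To spell out the final geometric-series step (our own bookkeeping, with, for instance, $\eta = (1-\varepsilon)^{\delta/(2(2+\delta))} \in (0,1)$ and remaining constants absorbed into $C_1$):

```latex
\mathbb{E}\left[(Z_{m'} - Z_m)^2\right]
\leq C_1 \sum_{n=m+1}^{m'} \sum_{\ell=m+1}^{m'} \eta^{n}\, \eta^{\ell}
= C_1 \left( \sum_{n=m+1}^{m'} \eta^{n} \right)^{2}
\leq \frac{C_1\, \eta^{2(m+1)}}{(1-\eta)^{2}}
\xrightarrow[m\to\infty]{} 0,
```

uniformly in $m' \geq m$, which is the claimed Cauchy property.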
By uniqueness of the limit, since $(Z_m)_{m \geq 1}$ goes almost surely to $H_0$, $(Z_m)_{m \geq 1}$ goes to $H_0$ in $L_2$. This shows that $H_0$ has finite first two moments. We can retrieve the expectation of $H_0$ by $$\mathbb{E}Z_{m}=\sum_{n=0}^{m}\mathbb{E}[\Delta^{(n)}]=\mathbb{E}\left[h(X^{(m)})\right] \xrightarrow[m\to \infty]{} \pi(h),$$ according to Assumption \[assumption:mixing\]. This concludes the proof of Theorem \[thm:finitevariance\] for $H_k$ with $k=0$, and a similar reasoning applies for any $k\geq 0$. [^1]: The authors gratefully acknowledge the Swedish Foundation for Strategic Research (SSF) via the projects *Probabilistic Modeling and Inference for Machine Learning* (contract number: ICA16-0015) and ASSEMBLE (contract number: RIT15-0012), the Swedish Research Council (VR) via the projects *Learning of Large-Scale Probabilistic Dynamical Models* (contract number: 2016-04278) and *NewLEADS - New Directions in Learning Dynamical Systems* (contract number: 621-2016-06079), and the National Science Foundation through grant DMS-1712872.
--- abstract: 'We analytically derive the upper bound on the overall efficiency of single-photon generation based on cavity quantum electrodynamics (QED), where cavity internal loss is treated explicitly. The internal loss leads to a tradeoff relation between the internal generation efficiency and the escape efficiency, which results in a fundamental limit on the overall efficiency. The corresponding lower bound on the failure probability is expressed only with an “internal cooperativity," introduced here as the cooperativity parameter with respect to the cavity internal loss rate. The lower bound is obtained by optimizing the cavity external loss rate, which can be experimentally controlled by designing or tuning the transmissivity of the output coupler. The model used here is general enough to treat various cavity-QED effects, such as the Purcell effect, on-resonant or off-resonant cavity-enhanced Raman scattering, and vacuum-stimulated Raman adiabatic passage. A repumping process, where the atom is reused after its decay to the initial ground state, is also discussed.' author: - 'Hayato Goto,$^1$ Shota Mizukami,$^2$ Yuuki Tokunaga,$^3$ and Takao Aoki$^2$' title: 'Fundamental Limit on the Efficiency of Single-Photon Generation Based on Cavity Quantum Electrodynamics' --- *Introduction*. Single-photon sources are a key component for photonic quantum information processing and quantum networking [@Kimble2008a]. Single-photon sources based on cavity quantum electrodynamics (QED) [@Eisaman2011a; @Rempe2015a; @Kuhn2010a; @Law1997a; @Vasilev2010a; @Maurer2004a; @Barros2009a; @Kuhn1999a; @Duan2003a] are particularly promising, because they enable deterministic emission into a single mode, which is desirable for low-loss and scalable implementations. 
Many single-photon generation schemes have been proposed and studied using various cavity-QED effects, such as the Purcell effect [@Eisaman2011a; @Rempe2015a; @Kuhn2010a], on-resonant [@Kuhn2010a; @Law1997a; @Vasilev2010a] or off-resonant [@Maurer2004a; @Barros2009a] cavity-enhanced Raman scattering, and vacuum-stimulated Raman adiabatic passage (vSTIRAP) [@Eisaman2011a; @Rempe2015a; @Kuhn2010a; @Vasilev2010a; @Kuhn1999a; @Duan2003a; @Maurer2004a]. The overall efficiency of single-photon generation based on cavity QED is composed of two factors: the internal generation efficiency $\eta_{\mathrm{in}}$ (probability that a photon is generated inside the cavity) and the escape efficiency $\eta_{\mathrm{esc}}$ (probability that a generated photon is extracted to a desired external mode). The upper bounds on $\eta_{\mathrm{in}}$, based on the cooperativity parameter $C$ [@Rempe2015a], have been derived for some of the above schemes [@Rempe2015a; @Kuhn2010a; @Law1997a; @Vasilev2010a]. $C$ is inversely proportional to the total cavity loss rate, $\kappa=\kappa_{\mathrm{ex}}+\kappa_{\mathrm{in}}$, where $\kappa_{\mathrm{ex}}$ and $\kappa_{\mathrm{in}}$ are the external and internal loss rates, respectively [@comment-loss]. Note that $\kappa_{\mathrm{ex}}$ can be experimentally controlled by designing or tuning the transmissivity of the output coupler. Thus, $\eta_{\mathrm{in}}$ is maximized by setting $\kappa_{\mathrm{ex}}$ to a small value so that $\kappa \approx \kappa_{\mathrm{in}}$. However, a low $\kappa_{\mathrm{ex}}$ results in a low escape efficiency $\eta_{\mathrm{esc}}=\kappa_{\mathrm{ex}}/\kappa$, which limits the channelling of the generated photons into the desired mode. There is therefore a *tradeoff* relation between $\eta_{\mathrm{in}}$ and $\eta_{\mathrm{esc}}$ with respect to $\kappa_{\mathrm{ex}}$, and $\kappa_{\mathrm{ex}}$ should be optimized to maximize the overall efficiency. 
This tradeoff relation has not been examined in previous studies, where the internal loss rate $\kappa_{\mathrm{in}}$ has not been treated explicitly. Additionally, previous studies on the photon-generation efficiency have not taken account of a repumping process, where the atom decays to the initial ground state via spontaneous emission and is “reused" for cavity-photon generation [@Barros2009a]. In this paper, we analytically derive the upper bound on the overall efficiency of single-photon generation based on cavity QED, by taking into account both the cavity internal loss and the repumping process. We use the model shown in Fig. \[fig-system\], which is able to describe most of the previously proposed generation schemes, with or without the repumping process, in a unified and generalized manner. In particular, we show that the lower bound on the failure probability for single-photon generation, $P_F$, for the case of no repumping, is given by [@comment-on-Goto; @Goto2008a; @Goto2010a] $$\begin{aligned} P_F \ge \frac{2}{\displaystyle 1+\sqrt{1+2C_{\mathrm{in}}}} \approx \sqrt{\frac{2}{C_{\mathrm{in}}}}, \label{eq-PF}\end{aligned}$$ where we have introduced the “internal cooperativity," $$\begin{aligned} C_{\mathrm{in}}= \frac{g^2}{2\kappa_{\mathrm{in}} \gamma }, \label{eq-Cin}\end{aligned}$$ as the cooperativity parameter with respect to $\kappa_{\mathrm{in}}$ instead of $\kappa$ for the standard definition, $C=g^2/(2\kappa \gamma )$ [@Rempe2015a]. The approximation in Eq. (\[eq-PF\]) holds when $C_{\mathrm{in}} \gg 1$. The lower bound on $P_F$ in Eq. (\[eq-PF\]) is obtained when $\kappa_{\mathrm{ex}}$ is set to its optimal value, $$\begin{aligned} \kappa_{\mathrm{ex}}^{\mathrm{opt}} \equiv \kappa_{\mathrm{in}} \sqrt{1+2C_{\mathrm{in}}}, \label{eq-optimal-kex}\end{aligned}$$ and is simply expressed as $2\kappa_{\mathrm{in}}/\kappa^{\mathrm{opt}}$, where $\kappa^{\mathrm{opt}} \equiv \kappa_{\mathrm{in}} + \kappa_{\mathrm{ex}}^{\mathrm{opt}}$ [@comment-kex]. 
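As a numerical sanity check of Eqs. (\[eq-PF\])–(\[eq-optimal-kex\]), one can scan the failure bound over $\kappa_{\mathrm{ex}}$; this sketch uses illustrative parameter values (not from any experiment) and the no-repumping, slowly-varying-limit bound $P_F = 1 - (\kappa_{\mathrm{ex}}/\kappa)\, g^2/(\kappa\gamma + g^2)$ that follows from the analysis later in the paper.

```python
import numpy as np

# Illustrative cavity parameters (placeholders, not from any experiment).
g, kappa_in, gamma = 10.0, 0.1, 1.0
C_in = g**2 / (2.0 * kappa_in * gamma)  # Eq. (2): here C_in = 500

# Failure bound as a function of the tunable external loss rate kappa_ex.
kappa_ex = np.linspace(0.01, 100.0, 200001)
kappa = kappa_in + kappa_ex
P_F = 1.0 - (kappa_ex / kappa) * g**2 / (kappa * gamma + g**2)

# The grid minimum should coincide with the analytic expressions.
kappa_ex_opt = kappa_in * np.sqrt(1.0 + 2.0 * C_in)  # Eq. (3)
P_F_min = 2.0 / (1.0 + np.sqrt(1.0 + 2.0 * C_in))    # Eq. (1)
```

The minimum of the scanned curve lands at $\kappa_{\mathrm{ex}}^{\mathrm{opt}}$ with value matching Eq. (\[eq-PF\]), illustrating the tradeoff: the bound rises both for too-small and too-large $\kappa_{\mathrm{ex}}$.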
Note that the experimental values of $(g,\gamma,\kappa_{\mathrm{in}})$ determine which regime the system should be in: the Purcell regime ($\kappa \gg g \gg \gamma$), the strong-coupling regime ($g \gg (\kappa, \gamma)$), or the intermediate regime ($\kappa \approx g \gg \gamma$). The remainder of this paper is organized as follows. First, we show that the present model is applicable to various cavity-QED single-photon generation schemes. Next, we provide the basic equations for the present analysis. Using these equations, we analytically derive an upper bound on the success probability, $P_S=1-P_F$, of single-photon generation. From here, we optimize $\kappa_{\mathrm{ex}}$ and derive Ineq. (\[eq-PF\]). We then briefly discuss the condition for typical optical cavity-QED systems. Finally, the conclusion and outlook are presented. *Model*. As shown in Fig. \[fig-system\], we consider a cavity-QED system with a $\Lambda$-type three-level atom in a one-sided cavity. The atom is initially prepared in $|u\rangle$. The $|u\rangle$-$|e\rangle$ transition is driven with an external classical field, while the $|g\rangle$-$|e\rangle$ transition is coupled to the cavity. This system is general enough to describe most of the cavity-QED single-photon generation schemes. For instance, the atom can first be excited to $|e\rangle$ with a resonant $\pi$ pulse (with time-dependent $\Omega$) or by fast adiabatic passage (with time-dependent $\Delta_u$), and then decays to $|g\rangle$ at a rate enhanced by the Purcell effect [@Purcell1946a], generating a single photon. Here, the Purcell regime is assumed [@Rempe2015a; @Kuhn2010a; @Eisaman2011a]. Another example is where the atom is weakly excited with small $\Omega$ and a cavity photon is generated by cavity-enhanced Raman scattering.
Here, $\kappa \gg g$ is assumed in the on-resonant case ($\Delta_e=\Delta_u=0$) [@Kuhn2010a; @Law1997a; @Vasilev2010a], while $\Delta_e \gg g$ is assumed in the off-resonant case ($\Delta_u=0$) [@Barros2009a; @Maurer2004a]. A third example is based on vSTIRAP [@Rempe2015a; @Eisaman2011a; @Kuhn2010a; @Vasilev2010a; @Kuhn1999a; @Duan2003a; @Maurer2004a], where $\Omega$ is gradually increased, and where the strong-coupling regime \[$g \gg (\kappa, \gamma)$\] and small detunings ($|\Delta_e|, |\Delta_u| \ll g$) are assumed. *Basic equations*. The starting point of our study is the following master equation describing the cavity-QED system: $$\begin{aligned} \dot{\rho} =& \mathcal{L} \rho, ~ \mathcal{L} =\mathcal{L}_{\mathcal{H}} + \mathcal{J}_u + \mathcal{J}_g + \mathcal{J}_o + \mathcal{J}_{\mathrm{ex}} + \mathcal{J}_{\mathrm{in}}, \label{eq-master} \\ \mathcal{L}_{\mathcal{H}} \rho =& -\frac{i}{\hbar} \left( \mathcal{H} \rho - \rho \mathcal{H}^{\dagger} \right),~ \mathcal{H}=H -i\hbar \left( \gamma \sigma_{e,e} + \kappa a^{\dagger} a \right), \nonumber \\ H =& \hbar \Delta_e \sigma_{e,e} + \hbar \Delta_u \sigma_{u,u} \nonumber \\ &+ i\hbar \Omega (\sigma_{e,u} - \sigma_{u,e} ) + i\hbar g (a \sigma_{e,g} - a^{\dagger} \sigma_{g,e} ), \label{eq-Hamiltonian} \\ \mathcal{J}_{u} \rho =& 2 \gamma r_u \sigma_{u,e} \rho \sigma_{e,u},~ \mathcal{J}_{g} \rho = 2 \gamma r_g \sigma_{g,e} \rho \sigma_{e,g}, \nonumber \\ \mathcal{J}_{o} \rho =& 2 \gamma r_o \sigma_{o,e} \rho \sigma_{e,o},~ \mathcal{J}_{\mathrm{ex}} \rho = 2 \kappa_{\mathrm{ex}} a \rho a^{\dagger},~ \mathcal{J}_{\mathrm{in}} \rho = 2 \kappa_{\mathrm{in}} a \rho a^{\dagger}, \nonumber\end{aligned}$$ where $\rho$ is the density operator describing the state of the system; the dot denotes differentiation with respect to time; $H$ is the Hamiltonian for the cavity-QED system; $a$ and $a^{\dagger}$ are respectively the annihilation and creation operators for cavity photons; $|o\rangle$ is, if it exists, a ground state 
other than $|u\rangle$ and $|g\rangle$; $r_u$, $r_g$, and ${r_o=1-r_u-r_g}$ are respectively the branching ratios for spontaneous emission from $|e\rangle$ to $|u\rangle$, $|g\rangle$, and $|o\rangle$; and $\sigma_{j,l}=|j\rangle \langle l|$ ($j,l=u, g, e, o$) are atomic operators. In the present work, we assume no pure dephasing [@comment-dephasing]. The transitions corresponding to the terms in Eqs. (\[eq-master\]) and (\[eq-Hamiltonian\]) are depicted in Fig. \[fig-transition\], where the second ket vectors denote cavity photon number states. Once the state of the system becomes $|g\rangle |0\rangle$ or $|o\rangle |0\rangle$ by quantum jumps, the time evolution stops. Among the quantum jumps, $\mathcal{J}_{\mathrm{ex}}$ corresponds to the success case where a cavity photon is emitted into the external mode, and the others result in failure of emission. Taking this fact into account, we obtain the following formal solution of the master equation [@Carmichael]: $$\begin{aligned} \rho_c (t) =& \mathcal{V}_{\mathcal{H}}(t,0) \rho_0 + \int_0^t \! dt' \mathcal{J}_{\mathrm{ex}} \mathcal{V}_{\mathcal{H}} (t',0) \rho_0 \nonumber \\ &+ \int_0^t \! dt' \mathcal{V}_c (t,t') \mathcal{J}_u \mathcal{V}_{\mathcal{H}} (t',0) \rho_0, \label{eq-rho-c}\end{aligned}$$ where $\rho_c$ denotes the density operator conditioned on no quantum jumps of $\mathcal{J}_g$, $\mathcal{J}_o$, and $\mathcal{J}_{\mathrm{in}}$, $\rho_0 = |u\rangle |0\rangle \langle u| \langle 0|$ is the initial density operator, and $\mathcal{V}_{\mathcal{H}}$ and $\mathcal{V}_c$ are the quantum dynamical semigroups defined as follows: $$\begin{aligned} \frac{d}{dt} \mathcal{V}_{\mathcal{H}}(t,t') = \mathcal{L}_{\mathcal{H}}(t) \mathcal{V}_{\mathcal{H}}(t,t'),~ \frac{d}{dt} \mathcal{V}_c(t,t') = \mathcal{L}_c(t) \mathcal{V}_c(t,t'), \nonumber\end{aligned}$$ where $\mathcal{L}_c =\mathcal{L}_{\mathcal{H}} + \mathcal{J}_u + \mathcal{J}_{\mathrm{ex}}$ is the Liouville operator for the conditioned time evolution. 
Note that $\rho_c(t) = \mathcal{V}_c(t,0) \rho_0$. The trace of $\rho_c$ decreases from unity for ${t>0}$. This decrease corresponds to the failure probability due to $\mathcal{J}_g$, $\mathcal{J}_o$, and $\mathcal{J}_{\mathrm{in}}$ [@Carmichael; @Plenio1998a]. Note that $\rho_{\mathcal{H}}(t)=\mathcal{V}_{\mathcal{H}}(t,0) \rho_0$ can be expressed with a state vector as follows: $$\begin{aligned} \rho_{\mathcal{H}}(t)= |\psi (t) \rangle \langle \psi (t)|,~ i\hbar |\dot{\psi} \rangle = \mathcal{H} |\psi \rangle,~ |\psi (0) \rangle = |u \rangle |0 \rangle. \nonumber\end{aligned}$$ Setting $|\psi \rangle = \alpha_u |u \rangle |0 \rangle + \alpha_e |e \rangle |0 \rangle + \alpha_g |g \rangle |1 \rangle$, the non-Hermitian Schrödinger equation is given by $$\begin{aligned} & \dot{\alpha_u}= -i\Delta_u \alpha_u -\Omega \alpha_e, \label{eq-alpha-u} \\ & \dot{\alpha_e}= -(\gamma + i \Delta_e) \alpha_e + \Omega \alpha_u + g \alpha_g, \label{eq-alpha-e} \\ & \dot{\alpha_g}= -\kappa \alpha_g - g \alpha_e. \label{eq-alpha-g}\end{aligned}$$ Using the state vector and the amplitudes, Eq. (\[eq-rho-c\]) becomes $$\begin{aligned} \rho_c (t) =& |\psi (t) \rangle \langle \psi (t)| + 2 \kappa_{\mathrm{ex}} \int_0^t \! dt' |\alpha_g (t')|^2 |g \rangle |0 \rangle \langle g| \langle 0| \nonumber \\ &+ 2 \gamma r_u \int_0^t \! dt' |\alpha_e (t')|^2 \mathcal{V}_c (t,t') \rho_0. \label{eq-rho-c-2}\end{aligned}$$ *Upper bound on success probability*. A successful photon generation and extraction event is defined by the condition that the final atom-cavity state is $|g\rangle|0\rangle$, and that the quantum jump $\mathcal{J}_{\mathrm{ex}}$ has occurred. The success probability, $P_S$, of the single-photon generation is therefore formulated by $P_S = \langle g| \langle 0| \rho_c(T) |g \rangle |0 \rangle$ for a sufficiently long time $T$. Using Eq. (\[eq-rho-c-2\]), we obtain $$\begin{aligned} P_S =& 2 \kappa_{\mathrm{ex}} \int_0^T \! 
dt |\alpha_g (t)|^2 \nonumber \\ &+ 2 \gamma r_u \int_0^T \! dt |\alpha_e (t)|^2 \langle g| \langle 0| \mathcal{V}_c (T,t) \rho_0 |g \rangle |0 \rangle. \label{eq-PS-formula}\end{aligned}$$ Here we assume the following inequality: $$\begin{aligned} \langle g| \langle 0| \mathcal{V}_c (T,t) \rho_0 |g \rangle |0 \rangle \le \langle g| \langle 0| \mathcal{V}_c (T,0) \rho_0 |g \rangle |0 \rangle = P_S. \label{eq-Vc}\end{aligned}$$ This assumption is natural because $\mathcal{V}_c (t,t')$ should be designed to maximize $P_S$ [@comment-Vc]. Thus we obtain $$\begin{aligned} P_S \le \frac{2 \kappa_{\mathrm{ex}} I_g} {\displaystyle 1-2 \gamma r_u I_e}, \label{eq-PS-inequality}\end{aligned}$$ where ${I_g = \int_0^T \! dt |\alpha_g (t)|^2}$ and ${I_e = \int_0^T \! dt |\alpha_e (t)|^2}$. The two integrals, $I_g$ and $I_e$, can be evaluated as follows. First, we have $$\begin{aligned} \frac{d}{dt} \langle \psi |\psi \rangle = -2\gamma |\alpha_e|^2 - 2\kappa |\alpha_g|^2 ~\Rightarrow~ 2\gamma I_e + 2\kappa I_g \approx 1, \label{eq-norm}\end{aligned}$$ where ${\langle \psi (0)|\psi (0) \rangle =1}$ and ${\langle \psi (T)|\psi (T) \rangle \approx 0}$ have been used assuming a sufficiently long time $T$. Next, using Eq. (\[eq-alpha-g\]), we obtain $$\begin{aligned} & I_e = \int_0^T \! dt \frac{|\dot{\alpha_g}(t) + \kappa \alpha_g (t)|^2}{g^2} \nonumber \\ &= \int_0^T \! dt \frac{|\dot{\alpha_g}(t)|^2 + \kappa^2 |\alpha_g (t)|^2}{g^2} + \frac{\kappa}{g^2} \left[ |\alpha_g(T)|^2 - |\alpha_g(0)|^2 \right] \nonumber \\ &\approx \frac{I'_g}{g^2} +\frac{\kappa^2}{g^2} I_g, \label{eq-Ie}\end{aligned}$$ where we have used ${|\alpha_g(0)|^2=0}$ and ${|\alpha_g(T)|^2\approx 0}$ and have set ${I'_g = \int_0^T \! dt |\dot{\alpha}_g (t)|^2}$. Using Eqs. 
(\[eq-norm\]) and (\[eq-Ie\]), we obtain $$\begin{aligned} I_g &= \frac{C}{\kappa (1+2C)} \left( 1- \frac{I'_g}{\kappa C} \right), \label{eq-Ig-result} \\ I_e &= \frac{1}{2\gamma} \left[ 1- \frac{2C}{1+2C} \left( 1- \frac{I'_g}{\kappa C} \right) \right]. \label{eq-Ie-result}\end{aligned}$$ Substituting Eqs. (\[eq-Ig-result\]) and (\[eq-Ie-result\]) into Ineq. (\[eq-PS-inequality\]), the upper bound on $P_S$ is finally obtained as follows: $$\begin{aligned} P_S &\le \frac{\kappa_{\mathrm{ex}}}{\kappa} \frac{2C}{1+2C} \frac{\displaystyle 1-\frac{I'_g}{\kappa C}} {\displaystyle 1-r_u + r_u \frac{2C}{1+2C} \left( 1-\frac{I'_g}{\kappa C} \right)} \nonumber \\ &\le \left( 1- \frac{\kappa_{\mathrm{in}}}{\kappa} \right) \left( 1- \frac{1}{1+2C} \right) \sum_{n=0}^{\infty} \left( \frac{r_u}{1+2C} \right)^n, \label{eq-PS}\end{aligned}$$ where we have used $0\le 1 - I'_g/(\kappa C) \le 1$ [@comment-Ig]. The equality approximately holds when the system varies slowly and the following condition holds: $$\begin{aligned} \frac{1}{\kappa} \int_0^T \! dt |\dot{\alpha_g}(t)|^2 \ll C.\end{aligned}$$ The upper bound on the success probability given by Ineq. (\[eq-PS\]) is a unified and generalized version of previous results [@Kuhn2010a; @Law1997a; @Vasilev2010a; @comment-storage; @Gorshkov2007a; @Dilley2012a], which did not treat explicitly internal loss, detunings, or repumping. The upper bound has a simple physical meaning. The first factor is the escape efficiency $\eta_{\mathrm{esc}}$. The product of the second and third factors is the internal generation efficiency $\eta_{\mathrm{in}}$. Each term of the third factor represents the probability that the decay from $|e \rangle$ to $|u \rangle$ occurs $n$ times. Note that $\eta_{\mathrm{in}}$ is increased by the repumping process. So far, the photons generated by repumping after decay to $|u\rangle$ are counted, as in some experiments [@Barros2009a]. 
However, such photons may have time delays or different pulse shapes from photons generated without repumping, and are therefore not useful for some applications, such as photonic qubits. If the photons generated by repumping are not counted, we should consider the state conditioned further on no quantum jump of $\mathcal{J}_u$. In this case, the upper bound on the success probability is obtained by modifying Ineq. (\[eq-PS\]) with $r_u=0$. The contribution of the repumping to $P_S$, denoted by $P_{\mathrm{rep}}$, is given by the second term in the right-hand side of Eq. (\[eq-PS-formula\]). Using Eqs. (\[eq-Ie-result\]) and (\[eq-PS\]), we can derive an upper bound on $P_{\mathrm{rep}}$ as follows: $$\begin{aligned} P_{\mathrm{rep}} \le 2\gamma r_u I_e P_S &\le \frac{\kappa_{\mathrm{ex}}}{\kappa} \frac{2C}{1+2C} \sum_{n=1}^{\infty} \left( \frac{r_u}{1+2C} \right)^n \nonumber \\ &= \frac{\kappa_{\mathrm{ex}}}{\kappa} \frac{2C}{1+2C} \frac{r_u}{1+2C-r_u}. \label{eq-Prepump}\end{aligned}$$ Thus, the contribution of the repumping is negligible when $C \gg 1$ or when $r_u \ll 1$. *Fundamental limit on single-photon generation based on cavity QED*. The reciprocal of the upper bound on $P_S$ is simplified as $$\begin{aligned} \left( 1 + \frac{\kappa_{\mathrm{in}}}{\kappa_{\mathrm{ex}}} \right) \left[ 1 + \frac{1-r_u}{2C_{\mathrm{in}}} \left( 1 + \frac{\kappa_{\mathrm{ex}}}{\kappa_{\mathrm{in}}} \right) \right].\end{aligned}$$ This can be easily minimized with respect to $\kappa_{\mathrm{ex}}$, which results in the following lower bound on $P_F$: $$\begin{aligned} P_F \ge \frac{2}{\displaystyle 1+\sqrt{1+2C_{\mathrm{in}}/(1-r_u)}}, \label{eq-PF-ru}\end{aligned}$$ where the lower bound is obtained when $\kappa_{\mathrm{ex}}$ is set to $$\begin{aligned} \kappa_{\mathrm{ex}}^{\mathrm{opt}} \equiv \kappa_{\mathrm{in}} \sqrt{1+2C_{\mathrm{in}}/(1-r_u)}. \label{eq-optimal-kex-ru}\end{aligned}$$ In the case of no repumping, Eqs. 
(\[eq-PF-ru\]) and (\[eq-optimal-kex-ru\]) are modified by $r_u=0$. This leads to Ineq. (\[eq-PF\]) and Eq. (\[eq-optimal-kex\]). The approximate lower bound in Ineq. (\[eq-PF\]) can be derived more directly from Ineq. (\[eq-PS\]) (${r_u=0}$) using the arithmetic-geometric mean inequality as follows: $$\begin{aligned} P_F \ge \frac{\kappa_{\mathrm{in}}}{\kappa} + \frac{1}{2C+1} -\frac{\kappa_{\mathrm{in}}}{\kappa} \frac{1}{2C+1} \approx \frac{\kappa_{\mathrm{in}}}{\kappa} + \frac{\kappa \gamma}{g^2} \ge \sqrt{\frac{2}{C_{\mathrm{in}}}}, \nonumber\end{aligned}$$ where ${\kappa_{\mathrm{in}} \ll \kappa}$ and ${C \gg 1}$ have been assumed. Note that $\kappa$ is cancelled out by multiplying the two terms [@comment-arithmetic-geometric]. *Typical optical cavity-QED systems*. In optical cavity-QED systems where a single atom or ion is coupled to a single cavity mode [@Law1997a; @Vasilev2010a; @Barros2009a; @Maurer2004a; @Kuhn1999a; @Duan2003a], the cavity-QED parameters are expressed as follows [@Rempe2015a]: $$\begin{aligned} g &= \sqrt{\frac{\mu_{g,e}^2 \omega_{g,e}}{2\epsilon_0 \hbar A_{\mathrm{eff}} L}}, \label{eq-g} \\ \kappa_{\mathrm{in}} &= \frac{c}{2L} \alpha_{\mathrm{loss}}, \label{eq-kappa-in} \\ r_g \gamma &= \frac{\mu_{g,e}^2 \omega_{g,e}^3}{6 \pi \epsilon_0 \hbar c^3}, \label{eq-gamma}\end{aligned}$$ where $\epsilon_0$ is the permittivity of vacuum, $c$ is the speed of light in vacuum, $\mu_{g,e}$ and $\omega_{g,e}$ are the dipole moment and frequency of the $|g\rangle$-$|e\rangle$ transition, respectively, $L$ is the cavity length, $A_{\mathrm{eff}}$ is the effective cross-section area of the cavity mode at the atomic position, and $\alpha_{\mathrm{loss}}$ is the one-round-trip cavity internal loss. Substituting Eqs. 
(\[eq-g\])–(\[eq-gamma\]) into the definition of $C_{\mathrm{in}}$, we obtain $$\begin{aligned} \frac{2C_{\mathrm{in}}}{1-r_u} &= \frac{1}{\alpha_{\mathrm{loss}}} \frac{1}{r_A} \frac{r_g}{1-r_u} \le \frac{1}{\alpha_{\mathrm{loss}}} \frac{1}{r_A}, \label{eq-Cin-formula}\end{aligned}$$ where $\lambda = 2\pi c/\omega_{g,e}$ is the wavelength corresponding to $\omega_{g,e}$, $r_A=A_{\mathrm{eff}}/\sigma$ is the ratio of the cavity-mode area to the atomic absorption cross section ${\sigma = 3\lambda^2/(2\pi)}$, and the inequality comes from $r_g/(1-r_u) \le 1$. (The equality holds when ${r_o=0}$.) Note that the cavity length $L$ and the dipole moment $\mu_{g,e}$ are cancelled out. From Ineq. (\[eq-PF-ru\]), it turns out that the single-photon generation efficiency is limited only by the one-round-trip internal loss, ${\alpha_{\mathrm{loss}}}$, and the area ratio, $r_A$, even when counting photons generated by repumping. *Conclusion and outlook*. By analytically solving the master equation for a general cavity-QED model, we have derived an upper bound on the success probability of single-photon generation based on cavity QED in a unified way. We have taken cavity internal loss into account, which results in a tradeoff relation between the internal generation efficiency and the escape efficiency with respect to the cavity external loss rate $\kappa_{\mathrm{ex}}$. By optimizing $\kappa_{\mathrm{ex}}$, we have derived a lower bound on the failure probability. The lower bound is inversely proportional to the square root of the internal cooperativity $C_{\mathrm{in}}$. This gives the fundamental limit on single-photon generation efficiency based on cavity QED. The optimal value of $\kappa_{\mathrm{ex}}$ has also been given explicitly. The repumping process, where the atom decays to the initial ground state via spontaneous emission and is reused for cavity-photon generation, has also been taken into account.
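As a rough numerical illustration of these bounds, one can combine Eq. (\[eq-Cin-formula\]) with Ineq. (\[eq-PF-ru\]); all parameter values in the sketch below (round-trip loss, area ratio) are our own assumptions, not values from the text.

```java
// Sketch: failure-probability floor from Ineq. (eq-PF-ru), using
// 2*C_in/(1-r_u) = 1/(alpha_loss * r_A) from Eq. (eq-Cin-formula).
// The parameter values in main() are illustrative assumptions only.
public class CavityBounds {
    // P_F >= 2 / (1 + sqrt(1 + 2*C_in/(1-r_u)))
    static double pfLowerBound(double alphaLoss, double rA) {
        double x = 1.0 / (alphaLoss * rA); // = 2*C_in/(1-r_u) (r_o = 0 case)
        return 2.0 / (1.0 + Math.sqrt(1.0 + x));
    }
    public static void main(String[] args) {
        // assumed: 10 ppm one-round-trip loss, area ratio r_A = 100
        System.out.printf("P_F >= %.3f%n", pfLowerBound(1.0e-5, 100.0));
    }
}
```

For these assumed values the floor is about 6%; lowering the round-trip loss tightens it, as the square-root scaling dictates.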
For typical optical cavity-QED systems, the lower bound is determined only by the one-round-trip internal loss and the ratio between the cavity-mode area and the atomic absorption cross section. This result holds even when the photons generated by repumping are counted. The lower bound is achieved in the limit that the variation of the system is sufficiently slow. When a short generation time is desirable, optimization of the control parameters will be necessary. This problem is left for future work. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank Kazuki Koshino, Donald White and Samuel Ruddell for their useful comments. This work was supported by JST CREST Grant Number JPMJCR1771, Japan. [19]{} H. J. Kimble, Nature **453**, 1023 (2008). M. D. Eisaman, J. Fan, A. Migdall, and S. V. Polyakov, Rev. Sci. Instrum. **82**, 071101 (2011), and references therein. A. Reiserer and G. Rempe, Rev. Mod. Phys. **87**, 1379 (2015). A. Kuhn and D. Ljunggren, Contemp. Phys. **51**, 289 (2010). C. K. Law and H. J. Kimble, J. Mod. Opt. **44**, 2067 (1997). G. S. Vasilev, D. Ljunggren, and A. Kuhn, New J. Phys. **12**, 063024 (2010). H. G. Barros, A. Stute, T. E. Northup, C. Russo, P. O. Schmidt, and R. Blatt, New J. Phys. **11**, 103004 (2009). C. Maurer, C. Becher, C. Russo, J. Eschner, and R. Blatt, New J. Phys. **6**, 94 (2004). A. Kuhn, M. Hennrich, T. Bondo, and G. Rempe, Appl. Phys. B **69**, 373 (1999). L.-M. Duan, A. Kuzmich, and H. J. Kimble, Phys. Rev. A **67**, 032305 (2003). The external loss is due to the extraction of cavity photons to the desired external mode via transmission of the mirror, while the internal loss is due to undesirable scattering and absorption inside the cavity. It is notable that similar lower bounds on failure probabilities, inversely proportional to $\sqrt{C_{\mathrm{in}}}$, have been derived for quantum gate operations based on cavity QED [@Goto2008a; @Goto2010a].
This fact implies that $C_{\mathrm{in}}$ should be regarded as a figure of merit of cavity-QED systems for quantum applications. In Refs. [@Goto2008a; @Goto2010a], the critical atom number [@Rempe2015a], which is the inverse of the cooperativity, was used instead of the internal cooperativity. Note that in Ref. [@Goto2008a], $\kappa$ should be interpreted as $\kappa_{\mathrm{in}}$ because in this case the external field is unnecessary and we can set $\kappa_{\mathrm{ex}}=0$. H. Goto and K. Ichimura, Phys. Rev. A **77**, 013816 (2008). H. Goto and K. Ichimura, Phys. Rev. A **82**, 032311 (2010). Interestingly, this optimal value of $\kappa_{\mathrm{ex}}$ is exactly the same as that for a quantum gate operation in Ref. [@Goto2010a]. E. M. Purcell, Phys. Rev. **69**, 681 (1946). Pure dephasing can only degrade the single-photon efficiency, and therefore does not affect the upper bound on the efficiency. In typical optical cavity-QED systems where a single atom or ion is coupled to a single cavity mode [@Law1997a; @Vasilev2010a; @Barros2009a; @Maurer2004a; @Kuhn1999a; @Duan2003a], pure dephasing is actually negligible. H. J. Carmichael, in *An Open Systems Approach to Quantum Optics*, edited by W. Beiglböck, Lecture Notes in Physics Vol. m18, (Springer-Verlag, Berlin, 1993). M. B. Plenio and P. L. Knight, Rev. Mod. Phys. **70**, 101 (1998). If $\langle g| \langle 0| \mathcal{V}_c (T,t) \rho_0 |g \rangle |0 \rangle > \langle g| \langle 0| \mathcal{V}_c (T,0) \rho_0 |g \rangle |0 \rangle$, then we should use $\mathcal{V}_c (T,t)$ for the single-photon generation, instead of $\mathcal{V}_c (T,0)$. Note that $I'_g \ge 0$ by definition and $1 - I'_g/(\kappa C) \ge 0$ because $I_g \ge 0$, by definition, in Eq. (\[eq-Ig-result\]). Interestingly, it is known that photon storage with cavity-QED systems without internal loss also has a similar upper bound, $2C/(2C+1)$, on the success probability [@Gorshkov2007a; @Dilley2012a].
This, together with the results for quantum gate operations [@Goto2008a; @Goto2010a], implies the universality of the upper bound. A. V. Gorshkov, A. André, M. D. Lukin, and A. S. Sørensen, Phys. Rev. A **76**, 033804 (2007). J. Dilley, P. Nisbet-Jones, B. W. Shore, and A. Kuhn, Phys. Rev. A **85**, 023834 (2012). A similar technique has been applied to the derivation of an upper bound on the success probability of a quantum gate operation based on cavity QED [@Goto2008a].
--- abstract: 'We present a study of the peculiar nebula MF16 associated with the Ultraluminous X-ray Source NGC6946 ULX-1. We use integral-field and long-slit spectral data obtained with the 6-m telescope (Russia). The nebula was long considered to be powered by strong shocks enhancing both high-excitation and low-excitation lines. However, kinematical properties point to rather moderate expansion rates ($V_S \sim 100\div 200$[$km\,s^{-1}\,$]{}). The total power of the emission-line source exceeds by one to two orders of magnitude the power the observed expansion rate can provide, which points towards the existence of an additional source of excitation and ionization. Using the CLOUDY96.01 photoionization code we derive the properties of the photoionizing source. Its total UV/EUV luminosity must be about $10^{40}$ erg/s.' date: '??? and in revised form ???' --- Introduction {#sec:intro} ============ Quite a large number of Ultraluminous X-ray Sources (ULXs) are associated with emission-line nebulae (ULX Nebulae, ULXNe), mostly large-scale bubbles powered by shock waves [@pamir (Pakull & Mirioni, 2003)]. However, several exceptions are known, like the nebula associated with HoII X-1 [@lehmann (Lehmann , 2005)], which is clearly a photoionized [H [ii]{}]{} region. Another well-known example is the nebula MF16, coincident with the ULX NGC6946 ULX1. The attention to MF16 was first drawn by [@BF_94], who identified the object as a Supernova Remnant (SNR), according to its emission-line spectrum with bright collisionally-excited lines. It was long considered an unusually luminous SNR, because of its huge optical emission-line luminosity ($L_{H\alpha} = 1.9\times10^{39}erg\,s^{-1}$, according to [@BF_94], for the tangential size $20\times34pc$) and X-ray luminosity ($L_X = 2.5\times10^{39}erg\,s^{-1}$ in the $0.5-8$keV range, according to the luminosities given by [@RoCo]).
However, it was shown by [@RoCo] that the spectral, spatial and timing properties of the X-ray source do not agree with the suggestion of a bright SNR, but rather suggest a point source with a typical “ULX-like” X-ray spectrum: a cool Multicolor Disk (MCD) and a Power Law (PL) component. So, apart from the physical nature of the object, MF16 should be considered a [*ULX nebula*]{}, one of a new class of objects described by [@pamir]. Optical Spectroscopy {#sec:obs} ==================== All the data were obtained on the SAO 6m telescope, Russia. Two spectrographs were used: the panoramic MultiPupil Fiber Spectrograph MPFS [@MPFSdesc (Afanasiev , 2001)] and the SCORPIO focal reducer [@scorpio (Afanasiev & Moiseev, 2005)] in long-slit mode. The details of the data reduction processes and analysis technique will be presented in [@mf16_main]. Panoramic spectroscopy has the advantage of providing unbiased flux estimates. However, the SCORPIO results have a much higher signal-to-noise ratio and reveal a rich emission-line spectrum of [\[Fe [iii]{}\]]{}. We also confirm the estimates of the total nebula emission-line luminosities by [@bfs]. The $H\beta$ line luminosity obtained from our MPFS data is $L(H\beta) = (7.2\pm0.2)\times10^{37}erg\,s^{-1}$. Using line ratios for the integral spectrum we estimate the mean parameters of the emitting gas as: $n_e \simeq 500\pm 100 \,cm^{-3}$, $T_e \simeq (1.9\pm0.2) \times 10^4 K$. Interstellar absorption is estimated as $A_V \sim 1{^{\rm m}\!\!\!.\,}3$, close to the Galactic value ($A_V^{Gal} = 1{^{\rm m}\!\!\!.\,}14$, according to [@schlegel_abs]). We confirm the estimate of the expansion rate obtained by [@dunne], coming to the conclusion that the expansion velocity is $V_S \lesssim 200$[$km\,s^{-1}\,$]{}.
In this case the total emission-line luminosity can be estimated using, for example, the equations by [@DoSutI]: $$\begin{array}{l} F_{H\beta} = 7.44 \times 10^{-6} \left( \frac{V_s}{100 km\, s^{-1}} \right)^{2.41} \times \left( \frac{n_2}{cm^{-3}}\right) + \\ \qquad{} 9.86 \times 10^{-6} \left( \frac{V_s}{100 km \,s^{-1}} \right)^{2.28} \times \left( \frac{n_1}{cm^{-3}}\right) \, erg\, cm^{-2} s^{-1} \end{array}$$ Here $V_S$ is the shock velocity and $n_1$ the pre-shock hydrogen density. If the surface area is known, one can obtain the total luminosity in $H\beta$ from here. For $V_S = 200km/s$ and $n_1 = 1cm^{-3}$ it appears to be $L(H\beta) \simeq 1.6 \times 10^{36}$[ergs s$^{-1}$]{}, which is far too low compared to the observed value. So we suggest an additional source of power providing most of the energy of the optical nebula. Photoionization Modelling ========================= We have computed a grid of CLOUDY96.01 [@cloudy98 (Ferland , 1998)] photoionization models in order to fit the MF16 spectrum without invoking shock waves. We have fixed the X-ray spectrum known from [*Chandra*]{} observations [@RoCo (Roberts & Colbert, 2003)], assuming all the plasma is situated at 10pc from the central point source, and introduced a blackbody source with the temperature changing from $10^3$ to $10^6$K and integral flux densities from 0.01 to 100 $erg\,cm^{-2}\,s^{-1}$. The best fit parameters are $\lg T(K) = 5.15\pm 0.05, F = 0.6\pm 0.1 erg\,cm^{-2}\,s^{-1}$, which suggests quite a luminous ultraviolet source: $L_{UV} = (7.5\pm0.5) \times 10^{39} erg\,s^{-1}$. The UV source is more than 100 times brighter than what can be predicted by extrapolating the thermal component of the best-fit model for the X-ray data [@RoCo (Roberts & Colbert, 2003)]. Ultraluminous UV sources? ========================= At least for one source we have indications that its X-ray spectrum extends into the EUV region.
It is interesting to analyse the implications in the framework of the two most popular hypotheses explaining the ULX phenomenon. For the standard disk of [@ss73] the inner temperature scales as: $$T_{in} \simeq 1~keV\, \left(\frac{M}{M_\odot}\right)^{-1/4} \left(\frac{\dot{M}}{\dot{M}_{cr}}\right)^{1/4}$$ In Fig. \[fig:seds\] we present the reconstructed Spectral Energy Distribution of NGC6946 ULX-1, including the optical identification by [@bfs] and the best-fit blackbody from our model. For comparison, a set of MCD SEDs for IMBHs accreting at 1% of the critical rate is shown. To explain the high EUV luminosity and roughly flat SED in the EUV region, a rather high IMBH mass is needed, $M \gtrsim 10^4 $[$M_\odot$]{}. For a supercritical disk this relation breaks down [@poutanen (Poutanen , 2006)], and the outcoming radiation becomes much softer, except for the X-rays escaping along the disk axis [@superkarpov (Fabrika , 2007)]. Most of the luminosity is supposed to be reprocessed into EUV and UV quanta, creating the nearly-flat SED of NGC6946 ULX1. In the optical/UV range the contribution of the donor star may become significant. ![NGC6946 ULX1 SED reconstruction. The optical source $d$ [@bfs (Blair , 2000)] is shown by an asterisk, and the upward arrow above indicates the unabsorbed optical luminosity: it is a lower estimate because only Galactic absorption was taken into account, $A_V = 1{^{\rm m}\!\!\!.\,}14$ according to [@schlegel_abs]. The dashed line represents the best-fit blackbody from our CLOUDY fitting. Thin solid lines are MCD models for accreting IMBHs with infinite outer disk radii. The mass accretion rate was set everywhere to $0.01 \dot{M}_{cr}$. []{data-label="fig:seds"}](abolmasov_fig1.eps){width="\textwidth"} In [@mf16_main] we make estimates for the detectability of ULXs with GALEX, coming to the conclusion that at least some of them (the sources with lower Galactic absorption) may be bright enough targets even for low-resolution spectroscopy.
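The standard-disk scaling above can be checked numerically; the minimal sketch below (the mass and accretion-rate values are our own illustrative choices) shows why a mass $M \gtrsim 10^4\,M_\odot$ pushes the disk peak into the EUV.

```java
// Sketch: inner disk temperature from the standard-disk scaling in the text,
// T_in ~ 1 keV * (M/M_sun)^(-1/4) * (Mdot/Mdot_cr)^(1/4).
// The parameter values in main() are illustrative assumptions.
public class DiskTemp {
    static double tInKeV(double massSolar, double mdotRatio) {
        return Math.pow(massSolar, -0.25) * Math.pow(mdotRatio, 0.25);
    }
    public static void main(String[] args) {
        // IMBH of 1e4 solar masses accreting at 1% of the critical rate
        double tKeV = tInKeV(1.0e4, 0.01);
        System.out.printf("T_in ~ %.1f eV%n", tKeV * 1000.0); // ~32 eV: EUV range
    }
}
```

For a stellar-mass black hole at the same accretion rate the same formula gives an inner temperature several times higher, peaking in the soft X-rays instead.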
Conclusions =========== We conclude that MF16 is most likely a dense shell illuminated from inside. This can be a certain stage of the evolution of a ULXN, when the central source is bright and the shell itself is still rather compact. We suggest that ULXs must be luminous EUV sources in some cases, and may also be luminous UV sources. This work was supported by the RFBR grants NN 05-02-19710, 04-02-16349, 06-02-16855. Abolmasov, P., Fabrika, S., Sholukhova, O. & Afanasiev, V. 2005 in [*Science Perspectives for 3D Spectroscopy*]{}, ed. M. Kissler-Patig, M. M. Roth & J. R. Walsh (Springer Berlin / Heidelberg) Abolmasov, P., Fabrika, S., Sholukhova, O. 2007 [*in preparation*]{} Afanasiev V.L., Dodonov S.N., Moiseev A.V., 2001, in [*Stellar dynamics: from classic to modern*]{}, eds. Osipkov L.P., Nikiforov I.I., Saint Petersburg, 103 Afanasiev, V., Moiseev, A., 2005 Astronomy Letters, 31, 194 Begelman, M. C. 2002, [*ApJ*]{}, 568, L97 Blair, W. P., Fesen, R. A. 1994 [*ApJ*]{}, 424, L103 Blair, W. P., Fesen, R. A., Schlegel, E. M. 2001 [*The Astronomical Journal*]{}, 121, 1497 Colbert, E. J. M., Miller, E. C. 2005 in [*The Tenth Marcel Grossmann Meeting*]{}. Eds.: Mário Novello, Santiago Perez Bergliaffa, Remo Ruffini. Singapore: World Scientific Publishing. Part A, p. 530 Dopita, M. A., Sutherland, R. S. 1996 [*ApJSS*]{}, 102, 161 Dunne, B. C., Gruendl, R. A., Chu, Y.-H. 2000, [*AJ*]{}, 119, 1172 van Dyk, S. D., Sramek, R. A., Weiler, K. W. 1994, [*ApJ*]{}, 425, 77 Fabian, A. C., Terlevich, R. 1996 [*MNRAS*]{}, 280, L5 Fabrika, S., Mescheryakov, A., 2001, in [*Galaxies and their Constituents at the Highest Angular Resolutions*]{}, Proceedings of IAU Symposium N205, R. T. Schilizzi (Ed.), p. 268, astro-ph/0103070 Fabrika, S. [*Supercritical disk and jets of SS433*]{} 2004, [*ARAA*]{}, vol. 12 Fabrika, S., Abolmasov, P., Sholukhova, O. 2005, in [*Science Perspectives for 3D Spectroscopy*]{}, eds. Kissler-Patig, M., Roth, M. M. & Walsh, J. R.
Fabrika, S., Karpov, S., Abolmasov, P. 2007 [*in preparation*]{} Ferland, G. J., Korista, K. T., Verner, D. A., Ferguson, J. W., Kingdon, J. B., Verner, E. M. 1998, [*PASP*]{}, 110, 761 King, A. R., Davies, M. B., Ward, M. J., Fabbiano, G., Elvis, M. 2001, [*A&A*]{}, 552, 109 Lehmann, I., Becker, T., Fabrika, S., Roth, M., Miyaji, T., Afanasiev, V., Sholukhova, O., Sánchez, S., Greiner, J., Hasinger, G., Constantini, E., Surkov, A., Burenkov, A. 2005 [*A&A*]{}, 431, 847 Liu, J.-F., Bregman, N. 2005 [*ApJSS*]{}, 157, 59L Matonick, D. M., Fesen, R. A., 1997 [*ApJSS*]{}, 112, 49 Osterbrock, D. E. “Astrophysics of Gaseous Nebulae” 1974, San Francisco, eds. W. H. Freeman and Company Pakull, M. W., Mirioni, L. 2003 RevMexAA (Serie de Conferencias), 15, 197 Poutanen, J., Fabrika, S., Butkevich, A., Abolmasov, P. 2006 [*in press*]{} Roberts, T. P., Colbert, E. J. M. 2003 [*MNRAS*]{}, 341, 49 Schlegel, D. J., Finkbeiner, P. F., Davis, M. 1998, [*ApJ*]{}, 500, 525 Shakura, N. I., Sunyaev, R. A. 1973, [*A&A*]{}, 24, 337 Swartz, A. D., Ghosh, K. K., Tennant, A. F., Wu, K., 2004 [*ApJSS*]{}, 154, 519
--- abstract: | A flexible and performant Persistency Service is a necessary component of any HEP Software Framework. Building a modular, non-intrusive and performant persistency component has been shown to be a very difficult task. In the past, it was very often necessary to sacrifice modularity to achieve acceptable performance. This resulted in a strong dependency of the overall Frameworks on their Persistency subsystems. Recent developments in software technology have made it possible to build a Persistency Service which can be transparently used from other Frameworks. Such a Service doesn’t force strong architectural constraints on the overall Framework Architecture, while still satisfying high performance requirements. The Java Data Objects (JDO) standard has already been implemented for almost all major databases. It provides truly transparent persistency for any Java object (both internal and external). Objects in other languages can be handled via transparent proxies. Being only a thin layer on top of the underlying database, JDO doesn’t introduce any significant performance degradation. Aspect-Oriented Programming (AOP) also makes it possible to treat persistency as an orthogonal Aspect of the Application Framework, without polluting it with persistence-specific concepts. All these techniques have been developed primarily (or only) for the Java environment. It is, however, possible to interface them transparently to Frameworks built in other languages, such as C++. Fully functional prototypes of flexible and non-intrusive persistency modules have been built for several other packages, for example FreeHEP AIDA and the LCG Pool AttributeSet (package Indicium).
author: - Julius Hřivnáč title: Transparent Persistence with Java Data Objects --- JDO === Requirements on Transparent Persistence --------------------------------------- The Java Data Objects (JDO) [@JDO1],[@JDO2],[@Standard],[@Portal] standard has been created to satisfy several requirements on object persistence in Java: - [**Object Model independence on persistency**]{}: - Java types are automatically mapped to native storage types. - 3rd party objects can be persistified (even when their source is not available). - The source of the persistent class is the same as the source of the transient class. No additional code is needed to make a class persistent. - All classes can be made persistent (if it makes sense). - [**Illusion of in-memory access to data**]{}: - Dirty instances (i.e. objects which have been changed after they have been read) are implicitly updated in the database. - Caching, synchronization, retrieval and lazy loading are done automatically. - All objects, referenced from a persistent object, are automatically persistent ([*Persistence by reachability*]{}). - [**Portability across technologies**]{}: - A wide range of storage technologies (relational databases, object-oriented databases, files,…) can be transparently used. - All JDO implementations are exchangeable. - [**Portability across platforms**]{} is automatically available in Java. - [**No need for a different language**]{} (DDL, SQL,…) to handle persistency (incl. queries). - [**Interoperability with Application Servers**]{} (EJB [@EJB],…). Architecture of Java Data Objects --------------------------------- The Java Data Objects standard (Java Community Process Open Standard JSR-12) [@Standard] has been created to satisfy the requirements listed in the previous paragraph.
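As an illustration of this transparency, a persistence-capable class can be an ordinary Java class. The sketch below is our own example (the class and the XML descriptor are assumptions written in the spirit of the JDO 1.0 metadata format, not code from this paper):

```java
// Sketch: a JDO persistence-capable class contains no persistence-specific
// code. The class and the descriptor in the comment are OUR OWN illustration.
// A byte-code Enhancer, driven by a descriptor such as
//
//   <?xml version="1.0"?>
//   <jdo>
//     <package name="hep.example">
//       <class name="Event"/>
//     </package>
//   </jdo>
//
// adds the PersistenceCapable plumbing after compilation.
public class Event {
    private int run;          // ordinary fields; mapped to native storage
    private double energy;    // types automatically by the JDO implementation
    public Event(int run, double energy) { this.run = run; this.energy = energy; }
    public int getRun() { return run; }
    public double getEnergy() { return energy; }
    public static void main(String[] args) {
        Event e = new Event(42, 13.6);
        System.out.println("run=" + e.getRun() + " energy=" + e.getEnergy());
    }
}
```

The same source compiles and runs unchanged whether or not the enhancement step is applied, which is exactly the "no additional code" requirement above.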
![image](Enhancement.eps){width="135mm"} The persistence capability is added to a class by the Enhancer (as shown in Figure \[Enhancement\]): - The Enhancer makes a transient class PersistenceCapable by adding to it all the data and methods needed to provide the persistence functionality. After enhancement, the class implements the PersistenceCapable interface (as shown in Figure \[PersistenceCapable\]). - The Enhancer is generally applied to a class-file, but it can also be part of a compiler or a loader. - Enhancing effects can be modified via a Persistence Descriptor (XML file). - All enhancers are compatible. Classes enhanced with one JDO implementation will work automatically with all other implementations. ![Enhancer makes any class PersistenceCapable.[]{data-label="PersistenceCapable"}](PersistenceCapable.eps){width="80mm"} The main object a user interacts with is the PersistenceManager. It mediates all interactions with the database, manages the instance lifecycle and serves as a factory for Transactions, Queries and Extents (as described in Figure \[PersistenceManager\]). ![All interactions with JDO are mediated by PersistenceManager.[]{data-label="PersistenceManager"}](PersistenceManager.eps){width="80mm"} Available Implementations ------------------------- Within about a year of the JDO standardization, many implementations have become available, supporting all existing storage technologies. JDO Implementations ------------------- ### Commercial JDO Implementations The following commercial implementations of the JDO standard exist: enJin (Versant), FastObjects (Poet), FrontierSuit (ObjectFrontier), IntelliBO (Signsoft), JDOGenie (Hemisphere), JRelay (Object Industries), KODO (SolarMetric), LiDO (LIBeLIS), OpenFusion (Prism), Orient (Orient), PE:J (HYWY), … These implementations often have a free community license available.
### Open JDO Implementations There are already several open JDO implementations available: - [**JDORI**]{} [@JDORI] (Sun) is the reference and standard implementation. It currently works only with FOStore files. Support for relational databases via a JDBC implementation is under development. It is the most standard, but not the most performant, implementation. - [**TJDO**]{} [@TJDO] (SourceForge) is a high quality implementation originally written by the TreeActive company and later put under the GPL license. It supports all important relational databases. It supports automatic creation of the database schema. It implements the full JDO standard. - [**XORM**]{} [@XORM] (SourceForge) does not yet support the full JDO standard. It does not automatically generate a database schema; on the other hand, it allows the reuse of existing schemas. - [**JORM**]{} [@JORM] (JOnAS/ObjectWeb) has a fully functional object-relational mapping; the full JDO implementation is under development. - [**OJB**]{} [@OJB] (Apache) has a mature object-relational engine. A full JDO interface is not yet provided. Supported Databases ------------------- All widely used databases are already supported either by their provider or by a third party: - [**RDBS and ODBS**]{}: Oracle, MS SQL Server, DB2, PointBase, Cloudscape, MS Access, JDBC/ODBC Bridge, Sybase, Interbase, InstantDB, Informix, SAPDB, Postgress, MySQL, Hypersonic SQL, Versant,… - [**Files**]{}: XML, FOSTORE, flat, C-ISAM,… The performance of JDO implementations is determined by the native performance of the underlying database. JDO itself introduces only a very small overhead. HEP Applications using JDO ========================== Trivial Application ------------------- A simple application using JDO to write and read data is shown in Listing \[Trivial\].
```java
// Initialization
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(properties);
PersistenceManager pm = pmf.getPersistenceManager();
Transaction tx = pm.currentTransaction();

// Writing
tx.begin();
...
Event event = ...;
pm.makePersistent(event);
...
tx.commit();

// Searching using a Java-like query language, translated internally to the DB
// native query language (SQL is available too for RDBS)
tx.begin();
Extent extent = pm.getExtent(Track.class, true);
String filter = "pt > 20.0";
Query query = pm.newQuery(extent, filter);
Collection results = query.execute();
...
tx.commit();
```

Indicium -------- Indicium [@Indicium] has been created to satisfy the LCG [@LCG] Pool [@Pool] requirements on Metadata management: “To define, accumulate, search, filter and manage Attributes (Metadata) external/additional to existing (Event) data.” These metadata are a generalization of the traditional Paw ntuple concept. They are used in the first phase of the analysis process to make a pre-selection of Events for further processing. They should be efficient. They are apparently closely related to Collections (of Events). The Indicium package provides an implementation of the AttributeSet (Event Metadata, Tags) for the LCG/Pool project in Java and C++ (with the same API). The core of Indicium is implemented in Java. All the expressed requirements can only be well satisfied by a system which allows, in principle, any object to act as an AttributeSet. Such a system can be easily built when we realize that the mentioned requirements are satisfied by JDO: - [**AttributeSet**]{} is simply any Object with a reference to another (Event) Object. - [**Explicit Collection**]{} is just any standard Java Collection. - [**Implicit Collection**]{} (i.e. all objects of some type T within a Database) is directly the JDO Extent. Indicium works with any JDO/DB implementation.
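A minimal sketch of the first point, an AttributeSet as "any Object with a reference to another (Event) Object", in plain Java. The class and field names are our own illustration, not the actual Indicium API; under JDO, persisting such a tag would also persist the referenced Event by reachability, and the selection below could be expressed as the JDO query filter string `"pt > 20.0"`.

```java
import java.util.ArrayList;
import java.util.List;

// Our own illustrative AttributeSet-style tag: ordinary attributes plus a
// reference to the full Event object (NOT the actual Indicium API).
public class EventTag {
    private double pt;        // pre-selection attribute ("ntuple column")
    private Object event;     // reference to the full Event object
    public EventTag(double pt, Object event) { this.pt = pt; this.event = event; }
    public double getPt() { return pt; }
    public Object getEvent() { return event; }
    public static void main(String[] args) {
        List<EventTag> tags = new ArrayList<>();
        tags.add(new EventTag(35.0, "event-1"));
        tags.add(new EventTag(12.0, "event-2"));
        // in-memory analogue of executing a JDO query with filter "pt > 20.0"
        long selected = tags.stream().filter(t -> t.getPt() > 20.0).count();
        System.out.println("selected: " + selected);
    }
}
```

Only the tags need to be scanned during pre-selection; the referenced Events are loaded lazily when actually accessed.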
As all the requirements are directly satisfied by JDO itself, Indicium only implements a simple wrapper and code for database management (database creation, opening, …). That is in fact the only database-specific code. It is easy to switch between various JDO/DB implementations via a simple properties file. The default Indicium implementation contains configurations for JDORI with the FOStore file format and for TJDO with Cloudscape or MySQL databases; others are simple to add. The data stored by Indicium are also accessible via native database protocols (like JDBC or SQL) and the tools using them. As already mentioned, Indicium provides just a simple convenience layer on top of JDO, trying to capture standard AttributeSet usage patterns. There are four ways an AttributeSet can be defined: - [**Assembled**]{} AttributeSet is fully constructed at run-time, in a way similar to classical Paw ntuples. - [**Generated**]{} AttributeSet class is generated from a simple XML specification. - [**Implementing**]{} AttributeSet can be written by hand to implement the standard AttributeSet Interface. - [**FreeStyle**]{} AttributeSet can be just about any class. It can be managed by the Indicium infrastructure; only some convenience functionality may be lost. To satisfy the requirements of C++ users as well, the C++ interface of Indicium has been created in the form of JACE [@JACE] proxies. This way, C++ users can use Indicium Java classes directly from a C++ program. The CIndicium architecture is shown in Figure \[AttributeSet\]; an example of its use is shown in Listing \[CIndicium\].
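The properties file mentioned above uses the standard `javax.jdo` property keys. The fragment below is only an illustration of what a JDORI/FOStore configuration might contain; the factory class name, file name, and database path are assumptions, not taken from the Indicium distribution:

```
# Hypothetical MyDB.properties for JDORI with the FOStore file format.
javax.jdo.PersistenceManagerFactoryClass=com.sun.jdori.fostore.FOStorePMF
javax.jdo.option.ConnectionURL=fostore:/data/MyDB
javax.jdo.option.ConnectionUserName=user
javax.jdo.option.ConnectionPassword=secret
```

Switching to another implementation (e.g. TJDO with MySQL) then amounts to editing these few lines rather than changing any application code.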
![image](AttributeList.eps){width="135mm"}

    // Construct Signature
    Signature signature("AssembledClass");
    signature.add("j", "int", "Some Integer Number");
    signature.add("y", "double", "Some Double Number");
    signature.add("s", "String", "Some String");

    // Obtain Accessor to database
    Accessor accessor = AccessorFactory::createAccessor("MyDB.properties");

    // Create Collection
    accessor.createCollection("MyCollection", signature, true);

    // Write AttributeSets into database
    AssembledAttributeSet* as;
    for (int i = 0; i < 100; i++) {
      as = new AssembledAttributeSet(signature);
      as->set("j", ...);
      as->set("y", ...);
      as->set("s", ...);
      accessor.write(*as);
    }

    // Search database
    std::string filter = "y > 0.5";
    Query query = accessor.newQuery(filter);
    Collection collection = query.execute();
    std::cout << "First: " << collection.toArray()[0].toString() << std::endl;

AIDA Persistence
----------------

JDO has been used to provide a basic persistency service for the FreeHEP [@FreeHEP] reference implementation of AIDA [@AIDA]. Three kinds of extensions to the existing implementation have been required: - Implementation of the IStore interface as AidaJDOStore. - Creation of an XML description for each AIDA class (for an example, see Listing \[AIDA\]). - Several small changes to existing classes, like the creation of wrappers around arrays of primitive types, etc.
    <jdo>
      <package name="hep.aida.ref.histogram">
        <class name="Histogram2D"
               persistence-capable-superclass="hep.aida.ref.histogram.Histogram">
        </class>
      </package>
    </jdo>

It has become clear that the AIDA persistence API is not sufficient and has to be made richer, to allow more control over persistent objects, better searching capabilities, etc.

Minerva
-------

Minerva [@Minerva] is a lightweight Java Framework which implements the main architectural principles of the ATLAS C++ Framework Athena [@Athena]:

- [**Algorithm - Data Separation**]{}: Algorithmic code is separated from the data on which it operates. Algorithms can be called explicitly and don't have a persistent state (except for parameters). Data are potentially persistent and are processed by Algorithms.
- [**Persistent - Transient Separation**]{}: The persistency mechanism is implemented by specified components and has no impact on the definition of the transient Interfaces. Low-level persistence technologies can be replaced without changing the other Framework components (except for possible configuration). A specific definition of the Transient-Persistent mapping is possible, but is not required.
- [**Implementation Independence**]{}: There are no implementation-specific constructs in the definition of the interfaces. In particular, all Interfaces are defined in an implementation-independent way. Also, all public objects (i.e. all objects which are exchanged between components and which subsequently appear in the Interfaces' definitions) are identifiable by implementation-independent Identifiers.
- [**Modularity**]{}: All components are explicitly designed with interchangeability in mind.
This implies that the main deliverables are simple and precisely defined general interfaces; the existing implementations of the various modules serve mainly as a reference implementation. Minerva scheduling is based on the InfoBus [@InfoBus] architecture:

- Algorithms are [*Data Producers*]{} or [*Data Consumers*]{} (or both).
- Algorithms declare their supported I/O types.
- Scheduling is done implicitly: an Algorithm runs when it has all its inputs ready.
- Both Algorithms and Services run as (static or dynamic) Servers.
- The environment is naturally multi-threaded.

An overview of the Minerva architecture is shown in Figure \[InfoBus\].

![image](InfoBus.eps){width="135mm"}

It is very easy to configure and run Minerva. For example, one can create a Minerva run with 5 parallel Servers: two of them read Events from two independent databases, one processes each Event, and the last two write the newly processed Events to two new databases, depending on the Event characteristics. (See Figure \[Minerva\] for a schema of such a run and Listing \[MinervaScript\] for its steering script.)

![Example of a Minerva run.[]{data-label="Minerva"}](Minerva.eps){width="80mm"}

    new Algorithm(<Algorithm properties>);
    new ObjectOutput(<db3>, <Event properties1>);
    new ObjectOutput(<db4>, <Event properties2>);
    new ObjectInput(<db1>);
    new ObjectInput(<db2>);

  : Example of steering script for a Minerva run.[]{data-label="MinervaScript"}

Minerva also has a simple but powerful modular Graphical User Interface which allows other components to be plugged in easily, such as the BeanShell [@BeanShell] command-line interface, the JAS [@JAS] histogramming, the ObjectBrowser [@ObjectBrowser], etc. Figure \[GUI\] and Figure \[ObjectBrowser\] show examples of running Minerva with various interactive plugins loaded.
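The implicit, data-driven scheduling described above can be illustrated with a self-contained toy example. This is not Minerva's actual API (all names below are made up); it only demonstrates the principle that an algorithm fires as soon as all its declared input types have been produced:

```java
import java.util.*;

// Toy data-driven scheduler: each "algorithm" declares its input and
// output types and is run as soon as all declared inputs are available.
public class ToyScheduler {
    public interface Algorithm {
        Set<String> inputs();   // declared input types
        String output();        // declared output type
        void run();
    }

    private final List<Algorithm> pending = new ArrayList<>();
    private final Set<String> available = new HashSet<>();
    public final List<String> fired = new ArrayList<>(); // execution order

    public void register(Algorithm a) { pending.add(a); tick(); }

    public void publish(String type) { available.add(type); tick(); }

    // Run every pending algorithm whose declared inputs are all available;
    // repeat until no further algorithm can fire.
    private void tick() {
        boolean progress = true;
        while (progress) {
            progress = false;
            for (Iterator<Algorithm> it = pending.iterator(); it.hasNext(); ) {
                Algorithm a = it.next();
                if (available.containsAll(a.inputs())) {
                    it.remove();
                    a.run();
                    fired.add(a.output());
                    available.add(a.output());
                    progress = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        ToyScheduler s = new ToyScheduler();
        // A "writer" needing Tracks, registered before the "tracker"
        // that produces them: order of registration does not matter.
        s.register(new Algorithm() {
            public Set<String> inputs() { return Set.of("Tracks"); }
            public String output() { return "Histograms"; }
            public void run() { System.out.println("writer ran"); }
        });
        s.register(new Algorithm() {
            public Set<String> inputs() { return Set.of("RawEvent"); }
            public String output() { return "Tracks"; }
            public void run() { System.out.println("tracker ran"); }
        });
        s.publish("RawEvent"); // triggers the tracker, then the writer
        System.out.println(s.fired);
    }
}
```

Registering algorithms in any order and then publishing a `RawEvent` drives the whole chain, mirroring the "runs when it has all its inputs ready" rule.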
![image](GUI.eps){width="135mm"}

![image](ObjectBrowser.eps){width="135mm"}

Prototypes using JDO
====================

Object Evolution
----------------

It is often necessary to change an object's shape while keeping its content and identity. This functionality is especially needed in the persistency domain to satisfy [*Schema Evolution*]{} (Versioning) or [*Object Mapping*]{} (DB Projection), i.e. retrieving an Object of type A dressed as an Object of another type B. This functionality is not addressed by JDO. In practice, it is handled either on the lower level (in a database) or on the higher level (in the overall framework, for example EJB). It is, however, possible to implement Object Evolution for JDO with the help of Dynamic Proxies and Aspects. Let's suppose that a user wants to read an Object of type A (with an Interface IA) dressed as an Object of another Interface IB. To enable that, four components should co-operate (as shown in Fig \[Evolution\]):

- JDO Enhancer enhances class A so that it is PersistenceCapable and is managed by the JDO PersistenceManager.
- AspectJ [@AspectJ] adds a read-callback with the mapping A $\rightarrow$ IB. This is called automatically when JDO reads an object A.
- A simple database of mappers provides a suitable mapping between A and IB.
- DynamicProxy delivers the content of the Object A with the interface IB:

        IB b = (IB) DynamicProxy.newInstance(A, IB);

All those manipulations are of course hidden from the End User.

![Support for Object Evolution.[]{data-label="Evolution"}](Evolution.eps){width="80mm"}

Foreign References
------------------

HEP data are often stored in sets of independent databases, each one managed independently. These architectures do not directly support references between objects from different databases (while references inside one database are managed directly by the JDO support for Persistence by Reachability).
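The DynamicProxy component above can be realized with the standard `java.lang.reflect.Proxy` API. The sketch below (class and interface names are made up, not the actual prototype code) dresses an object of class A as an interface IB by forwarding each IB call to a same-named method of A:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class DynamicProxyDemo {
    // Hypothetical target interface IB.
    public interface IB {
        String identity();
    }

    // Hypothetical persistent class A; note it does not implement IB.
    public static class A {
        public String identity() { return "object A"; }
    }

    // Dress 'target' as interface 'iface' by reflective forwarding:
    // each call on the proxy is routed to the method of the same name
    // and signature on the target object.
    @SuppressWarnings("unchecked")
    public static <T> T newInstance(final Object target, Class<T> iface) {
        InvocationHandler h = new InvocationHandler() {
            public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                Method impl = target.getClass().getMethod(m.getName(), m.getParameterTypes());
                return impl.invoke(target, args);
            }
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                                          new Class<?>[]{iface}, h);
    }

    public static void main(String[] args) {
        // Mirrors the "IB b = (IB) DynamicProxy.newInstance(A, IB);" call above.
        IB b = newInstance(new A(), IB.class);
        System.out.println(b.identity());
    }
}
```

A real mapper database would sit inside the invocation handler, translating between the schemas of A and IB instead of relying on identical method names.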
As in the case of Object Evolution, foreign references are usually resolved either on the lower level (i.e. all databases are managed by one storage manager and JDO operates on top) or on the higher level (for example by the EJB framework). Another possibility is to use an architecture similar to the Object Evolution one, with a DynamicProxy delivering foreign Objects. Let's suppose that a User reads an object A which contains a reference to another object B, actually stored in a different database (and thus managed by a different PersistenceManager). In this case the database with the object A does not in fact contain an object B, but a DynamicProxy object. The object B can be transparently retrieved using three co-operating components (as shown in Fig \[References\]):

- When the reference from object A to object B is requested, JDO delivers the DynamicProxy instead.
- The DynamicProxy asks the PersistenceManagerFactory for a PersistenceManager which handles the object B. It then uses that PersistenceManager to get the object B and casts itself into it.
- The PersistenceManagerFactory obtains this information by interrogating the DBcatalog (possibly a Grid Service).

All those manipulations are of course hidden from the End User.

![Support for Foreign References.[]{data-label="References"}](References.eps){width="80mm"}

Summary
=======

It has been shown that the JDO standard provides a suitable foundation for a persistence service for HEP applications. Two major characteristics of persistence solutions based on JDO are:

- Non-intrusiveness.
- A wide range of available JDO implementations, both commercial and free, giving access to all major databases.

JDO profits from the native database functionality and performance (SQL queries, …), but presents it to users in a native Java API.
[99]{} A more detailed talk about JDO:\ [http://hrivnac.home.cern.ch/hrivnac/Activities/2002/June/JDO]{} A more detailed talk about Indicium:\ [http://hrivnac.home.cern.ch/hrivnac/Activities/2002/November/Indicium]{} Java Data Objects Standard:\ [http://java.sun.com/products/jdo]{} Java Data Objects Portal:\ [http://www.jdocentral.com]{} JDO Reference Implementation (JDORI):\ [http://access1.sun.com/jdo]{} TJDO:\ [http://tjdo.sourceforge.net]{} XORM:\ [http://xorm.sourceforge.net]{} JORM:\ [http://jorm.objectweb.org]{} OJB:\ [http://db.apache.org/ojb/]{} Indicium:\ [http://hrivnac.home.cern.ch/hrivnac/Activities/Packages/Indicium]{} AIDA:\ [http://aida.freehep.org]{} FreeHEP Library:\ [http://java.freehep.org]{} Minerva:\ [http://hrivnac.home.cern.ch/hrivnac/Activities/Packages/Minerva]{} JACE:\ [http://reyelts.dyndns.org:8080/jace/release/docs/index.html]{} Lightweight Scripting for Java (BeanShell):\ [http://www.beanshell.org]{} InfoBus:\ [http://java.sun.com/products/javabeans/infobus/]{} Java Analysis Studio (JAS):\ [http://jas.freehep.org]{} Object Browser:\ [http://hrivnac.home.cern.ch/hrivnac/Activities/Packages/ObjectBrowser/]{} AspectJ:\ [http://www.eclipse.org/aspectj/]{} Enterprise Java Beans (EJB):\ [http://java.sun.com/products/ejb]{} ATLAS C++ Framework (Athena):\ [http://atlas.web.cern.ch/ATLAS/GROUPS/SOFTWARE/OO/architecture/General/index.html]{} LCG Computing Grid Project (LCG):\ [http://wenaus.home.cern.ch/wenaus/peb-app]{} LCG Persistency Framework (Pool):\ [http://lcgapp.cern.ch/project/persist]{}
Preprint hep-ph/0006089 [Improved Conformal Mapping of the Borel Plane]{} U. D. Jentschura and G. Soff [*Institut für Theoretische Physik, TU Dresden, 01062 Dresden, Germany*]{}\ [**Email:**]{} jentschura@physik.tu-dresden.de, soff@physik.tu-dresden.de The conformal mapping of the Borel plane can be utilized for the analytic continuation of the Borel transform to the entire positive real semi-axis and is thus helpful in the resummation of divergent perturbation series in quantum field theory. We observe that the convergence can be accelerated by the application of Padé approximants to the Borel transform expressed as a function of the conformal variable, i.e. by a combination of the analytic continuation via conformal mapping and a subsequent numerical approximation by rational approximants. The method is primarily useful in those cases where the leading (but not sub-leading) large-order asymptotics of the perturbative coefficients are known. 11.15.Bt, 11.10.Jj General properties of perturbation theory;\ Asymptotic problems and properties The problem of the resummation of quantum field theoretic series is of obvious importance in view of the divergent, asymptotic character of the perturbative expansions [@LGZJ1990; @ZJ1996; @Fi1997]. The convergence can be accelerated when additional information is available about large-order asymptotics of the perturbative coefficients [@JeWeSo2000]. In the example cases discussed in [@JeWeSo2000], the location of several poles in the Borel plane, known from the leading and next-to-leading large-order asymptotics of the perturbative coefficients, is utilized in order to construct specialized resummation prescriptions. Here, we consider a particular perturbation series, investigated in [@BrKr1999], where only the [*leading*]{} large-order asymptotics of the perturbative coefficients are known to sufficient accuracy, and the subleading asymptotics have – not yet – been determined. 
Therefore, the location of only a single pole – the one closest to the origin – in the Borel plane is available. In this case, as discussed in [@CaFi1999; @CaFi2000], the (asymptotically optimal) conformal mapping of the Borel plane is an attractive method for the analytic continuation of the Borel transform beyond its circle of convergence and, to a certain extent, for accelerating the convergence of the Borel transforms. Here, we argue that the convergence of the transformation can be accelerated further when the Borel transforms, expressed as a function of the conformal variable which mediates the analytic continuation, are additionally convergence-accelerated by the application of Padé approximants. First we discuss, in general terms, the construction of the improved conformal mapping of the Borel plane which is used for the resummation of the perturbation series defined in Eqs. (\[gammaPhi4\]) and (\[gammaYukawa\]) below. The method uses as input data the numerical values of a finite number of perturbative coefficients and the leading large-order asymptotics of the perturbative coefficients, which can, under appropriate circumstances, be derived from an empirical investigation of a finite number of coefficients, as it has been done in [@BrKr1999]. We start from an asymptotic, divergent perturbative expansion of a physical observable $f(g)$ in powers of a coupling parameter $g$, $$\label{power} f(g) \sim \sum_{n=0}^{\infty} c_n\,g^n\,,$$ and we consider the generalized Borel transform of the $(1,\lambda)$-type (see Eq. 
(4) in [@JeWeSo2000]), $$\label{BorelTrans} f^{(\lambda)}_{\rm B}(u) \; \equiv \; f^{(1,\lambda)}_{\rm B}(u) \; = \; \sum_{n=0}^{\infty} \frac{c_n}{\Gamma(n+\lambda)}\,u^n\,.$$ The full physical solution can be reconstructed from the divergent series (\[power\]) by evaluating the Laplace-Borel integral, which is defined as $$\label{BorelIntegral} f(g) = \frac{1}{g^\lambda} \, \int_0^\infty {\rm d}u \,u^{\lambda - 1} \, \exp\bigl(-u/g\bigr)\, f^{(\lambda)}_{\rm B}(u)\,.$$ The integration variable $u$ is referred to as the Borel variable. The integration is carried out either along the real axis or infinitesimally above or below it (if Padé approximants are used for the analytic continuation, modified integration contours have been proposed [@Je2000]). The most prominent issue in the theory of the Borel resummation is the construction of an analytic continuation for the Borel transform (\[BorelTrans\]) from a finite-order partial sum of the perturbation series (\[power\]), which we denote by $$\label{PartialSum} f^{(\lambda),m}_{\rm B}(u) = \sum_{n=0}^{m} \frac{c_n}{\Gamma(n+\lambda)}\,u^n\,.$$ The analytic continuation can be accomplished using the direct application of Padé approximants to the partial sums of the Borel transform $f^{(\lambda),m}_{\rm B}(u)$ [@BrKr1999; @Je2000; @Raczka1991; @Pi1999] or by a conformal mapping [@SeZJ1979; @LGZJ1983; @GuKoSu1995; @CaFi1999; @CaFi2000]. We now assume that the [*leading*]{} large-order asymptotics of the perturbative coefficients $c_n$ defined in Eq. (\[power\]) is factorial, and that the coefficients display an alternating sign pattern. This indicates the existence of a singularity (branch point) along the negative real axis corresponding to the leading large-order growth of the perturbative coefficients, which we assume to be at $u=-1$. 
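As a concrete illustration of this situation, the archetypal divergent series with $c_n = (-1)^n\,n!$ and $\lambda=1$ has Borel transform $\sum_n (-u)^n = 1/(1+u)$, with precisely such a branch-point singularity at $u=-1$. The following sketch (plain Java, for illustration only; it is not part of any published resummation code) evaluates the partial sums (\[PartialSum\]) for this model case:

```java
public class BorelToy {
    // Partial sum of the Borel transform for the model series
    // c_n = (-1)^n n! with lambda = 1: the terms c_n / Gamma(n+1)
    // reduce to (-1)^n, so the Borel transform is geometric in u.
    public static double partialSum(double u, int m) {
        double sum = 0.0, term = 1.0;   // term holds (-u)^n
        for (int n = 0; n <= m; n++) {
            sum += term;
            term *= -u;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Inside the circle of convergence |u| < 1 the partial sums
        // approach 1/(1+u); at the singularity u = -1 they diverge.
        double u = 0.5;
        System.out.println(partialSum(u, 40) + " vs " + 1.0 / (1.0 + u));
    }
}
```

For $|u|<1$ the partial sums converge to $1/(1+u)$; continuing the transform beyond this circle, to the whole positive semi-axis needed in (\[BorelIntegral\]), is exactly the task of the conformal mapping.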
For Borel transforms with only a single cut in the complex plane, extending from $u=-1$ to $u=-\infty$, the following conformal mapping has been recommended as optimal [@CaFi1999], $$\label{DefZ} z = z(u) = \frac{\sqrt{1+u}-1}{\sqrt{1+u}+1}\,.$$ Here, $z$ is referred to as the conformal variable. The cut Borel plane is mapped onto the unit circle by the conformal mapping (\[DefZ\]). We briefly mention that a large variety of similar conformal mappings have been discussed in the literature. It is worth noting that conformal mappings adapted to doubly-cut Borel planes have been discussed in [@CaFi1999; @CaFi2000]. We do not claim here that it would be impossible to construct conformal mappings which reflect the position of more than two renormalon poles or branch points in the complex plane. However, we stress that such a conformal mapping is likely to have a more complicated mathematical structure than, for example, the mapping defined in Eq. (27) in [@CaFi1999]. Using the alternative methods described in [@JeWeSo2000], poles (branch points) in the Borel plane corresponding to the subleading asymptotics can be incorporated easily, provided their position in the Borel plane is known. In a concrete example (see Table 1 in [@JeWeSo2000]), 14 poles in the Borel plane have been fixed in the denominator of the Padé approximant constructed according to Eqs. (53)–(55) in [@JeWeSo2000], and accelerated convergence of the transforms is observed. In contrast to the investigation [@JeWeSo2000], we assume here that only the [*leading*]{} large-order factorial asymptotics of the perturbative coefficients are known. We continue with the discussion of the conformal mapping (\[DefZ\]). It should be noted that for series whose leading singularity in the Borel plane is at $u = -u_0$ with $u_0 > 0$, an appropriate rescaling of the Borel variable $u \to |u_0|\, u$ is necessary on the right-hand side of Eq. (\[BorelIntegral\]).
Then, $f^{(\lambda)}_{\rm B}(|u_0|\,u)$ as a function of $u$ has its leading singularity at $u = -1$ (see also Eq. (41.57) in [@ZJ1996]). The Borel integration variable $u$ can be expressed as a function of $z$ as follows, $$\label{UasFuncOfZ} u(z) = \frac{4 \, z}{(z-1)^2}\,.$$ The $m$th partial sum of the Borel transform (\[PartialSum\]) can be rewritten, upon expansion of $u$ in powers of $z$, as $$\label{PartialSumConformal} f^{(\lambda),m}_{\rm B}(u) = f^{(\lambda),m}_{\rm B}\bigl(u(z)\bigr) = \sum_{n=0}^{m} C_n\,z^n + {\cal O}(z^{m+1})\,,$$ where the coefficients $C_n$, as functions of the $c_n$, are uniquely determined (see, e.g., Eqs. (36) and (37) of [@CaFi1999]). We define the partial sum of the Borel transform, expressed as a function of the conformal variable $z$, as $$f'^{(\lambda),m}_{\rm B}(z) = \sum_{n=0}^{m} C_n\,z^n\,.$$ In a previous investigation [@CaFi1999], Caprini and Fischer evaluate the following transforms, $$\label{CaFiTrans} {\cal T}'_m f(g) = \frac{1}{g^\lambda}\, \int_0^\infty {\rm d}u \,u^{\lambda - 1} \,\exp\bigl(-u/g\bigr)\, f'^{(\lambda),m}_{\rm B}(z(u))\,.$$ Caprini and Fischer [@CaFi1999] observe apparent numerical convergence with increasing $m$. The limit as $m\to\infty$, provided it exists, is then assumed to represent the complete, physically relevant solution, $$f(g) = \lim_{m\to\infty} {\cal T}'_m f(g)\,.$$ We do not consider the question of the existence of this limit here (for an outline of questions related to these issues we refer to [@CaFi2000]). In the absence of further information on the analyticity domain of the Borel transform (\[BorelTrans\]), we cannot necessarily conclude that $f^{(\lambda)}_{\rm B}{\mathbf (}u(z){\mathbf )}$ as a function of $z$ is analytic inside the unit circle of the complex $z$-plane, or that, for example, the conditions of Theorem 5.2.1 of [@BaGr1996] are fulfilled. Therefore, we propose a modification of the transforms (\[CaFiTrans\]).
In particular, we advocate the evaluation of (lower-diagonal) Padé approximants [@BaGr1996; @BeOr1978] to the function $f'^{(\lambda),m}_{\rm B}(z)$, expressed as a function of $z$, $$\label{ConformalPade} f''^{(\lambda),m}_{\rm B}(z) = \bigg[ [\mkern - 2.5 mu [m/2] \mkern - 2.5 mu ] \bigg/ [\mkern - 2.5 mu [(m+1)/2] \mkern - 2.5 mu ] \bigg]_{f'^{(\lambda),m}_{\rm B}}\!\!\!\left(z\right)\,.$$ We define the following transforms, $$\label{AccelTrans} {\cal T}''_m f(g) = \frac{1}{g^\lambda}\, \int_{C_j} {\rm d}u \,u^{\lambda - 1} \,\exp\bigl(-u/g\bigr)\, f''^{(\lambda),m}_{\rm B}\bigl(z(u)\bigr)$$ where the integration contours $C_j$ ($j=-1,0,1$) have been defined in [@Je2000]. These integration contours have been shown to provide the physically correct analytic continuation of resummed perturbation series for those cases where the evaluation of the standard Laplace-Borel integral (\[BorelIntegral\]) is impossible due to an insufficient analyticity domain of the integrand (possibly due to multiple branch cuts) or due to spurious singularities in view of the finite order of the Padé approximations defined in (\[ConformalPade\]). We should mention potential complications due to multi-instanton contributions, as discussed for example in Ch. 43 of [@ZJ1996] (these are not encountered in the current investigation). In this letter, we use exclusively the contour $C_0$, which is defined as the half sum of the contours $C_{-1}$ and $C_{+1}$ displayed in Fig. 1 in [@Je2000].
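A short numerical spot check (a sketch, not part of any published resummation code) confirms the two properties of the mapping (\[DefZ\]) that the method relies on: Eq. (\[UasFuncOfZ\]) inverts it, and points of the positive semi-axis $u>0$, the path of the Laplace-Borel integral, are mapped inside the unit circle:

```java
public class ConformalMapCheck {
    // Conformal mapping of the Borel plane, z(u) = (sqrt(1+u)-1)/(sqrt(1+u)+1).
    public static double z(double u) {
        double s = Math.sqrt(1.0 + u);
        return (s - 1.0) / (s + 1.0);
    }

    // Inverse mapping, u(z) = 4 z / (z-1)^2.
    public static double u(double z) {
        return 4.0 * z / ((z - 1.0) * (z - 1.0));
    }

    public static void main(String[] args) {
        // Spot check on the positive real semi-axis: u(z(u)) = u,
        // and z(u) stays in (0,1), approaching 1 only as u -> infinity.
        for (double uu = 0.5; uu < 100.0; uu *= 2.0) {
            System.out.printf("u = %8.3f   z(u) = %.6f   u(z(u)) = %8.3f%n",
                              uu, z(uu), u(z(uu)));
        }
    }
}
```

Setting $s=\sqrt{1+u}$ gives $z=(s-1)/(s+1)$, hence $s=(1+z)/(1-z)$ and $u=s^2-1=4z/(1-z)^2$, which is the inverse used above.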
At increasing $m$, the limit as $m\to\infty$, provided it exists, is then again assumed to represent the complete, physically relevant solution, $$f(g) = \lim_{m\to\infty} {\cal T}''_m f(g)\,.$$ Because we take advantage of the special integration contours $C_j$, analyticity of the Borel transform $f^{(\lambda)}_{\rm B}{\mathbf (}u(z){\mathbf )}$ inside the unit circle of the complex $z$-plane is not required, and additional acceleration of the convergence is mediated by employing Padé approximants in the conformal variable $z$.

  $m$   $g=5.0$            $g=5.5$            $g=6.0$            $g=10.0$
  ----- ------------------ ------------------ ------------------ ------------------
  28    $-0.501~565~232$   $-0.538~352~234$   $-0.573~969~740$   $-0.827~506~173$
  29    $-0.501~565~232$   $-0.538~352~233$   $-0.573~969~738$   $-0.827~506~143$
  30    $-0.501~565~231$   $-0.538~352~233$   $-0.573~969~738$   $-0.827~506~136$

  : Numerical values of the transforms ${\cal T}''_m \gamma_{\rm hopf}(g)$ evaluated according to Eq. (\[AccelTrans\]); the column headings follow the coupling parameters discussed in the text.[]{data-label="table1"}

  $m$   $g=5.0$            $g=5.5$            $g=6.0$            $g=30^2/(4\,\pi)^2$
  ----- ------------------ ------------------ ------------------ ------------------
  28    $-1.669~071~213$   $-1.800~550~588$   $-1.928~740~624$   $-1.852~027~809$
  29    $-1.669~071~214$   $-1.800~550~589$   $-1.928~740~626$   $-1.852~027~810$
  30    $-1.669~071~214$   $-1.800~550~589$   $-1.928~740~625$   $-1.852~027~810$

  : Numerical values of the transforms ${\cal T}''_m {\tilde \gamma}_{\rm hopf}(g)$ evaluated according to Eq. (\[AccelTrans\]); the column headings follow the coupling parameters discussed in the text.[]{data-label="table2"}

We consider the resummation of two particular perturbation series discussed in [@BrKr1999] for the anomalous dimension $\gamma$ function of the $\phi^3$ theory in 6 dimensions and the Yukawa coupling in
4 dimensions. The perturbation series for the $\phi^3$ theory is given in Eq. (16) in [@BrKr1999], $$\label{gammaPhi4} \gamma_{\rm hopf}(g) \sim \sum_{n=1}^{\infty} (-1)^n \, \frac{G_n}{6^{2 n - 1}} \, g^n\,,$$ where the coefficients $G_n$ are given in Table 1 in [@BrKr1999] for $n=1,\dots,30$ (the $G_n$ are real and positive). We denote the coupling parameter $a$ used in [@BrKr1999] as $g$; this is done in order to ensure compatibility with the general power series given in Eq. (\[power\]). Empirically, Broadhurst and Kreimer derive the large-order asymptotics $$G_n \sim {\rm const.} \; \times \; 12^{n-1} \, \Gamma(n+2)\,, \qquad n\to\infty\,,$$ by investigating the explicit numerical values of the coefficients $G_1,\dots,G_{30}$. The leading asymptotics of the perturbative coefficients $c_n$ are therefore (up to a constant prefactor) $$\label{LeadingPhi4} c_n \sim (-1)^n \frac{\Gamma(n+2)}{3^n}\,, \qquad n\to\infty\,.$$ This implies that the $\lambda$-parameter in the Borel transform (\[BorelTrans\]) should be set to $\lambda=2$ (see also the notion of an asymptotically optimized Borel transform discussed in [@JeWeSo2000]). In view of Eq. (\[LeadingPhi4\]), the pole closest to the origin of the Borel transform (\[BorelTrans\]) is expected at $$u = u^{\rm hopf}_0 = -3\,,$$ and a rescaling of the Borel variable $u \to 3\,u$ in Eq. (\[BorelIntegral\]) then leads to an expression to which the method defined in Eqs. (\[power\])–(\[AccelTrans\]) can be applied directly. For the Yukawa coupling, the $\gamma$-function reads $$\label{gammaYukawa} {\tilde \gamma}_{\rm hopf}(g) \sim \sum_{n=1}^{\infty} (-1)^n \, \frac{{\tilde G}_n}{2^{2 n - 1}} \, g^n\,,$$ where the ${\tilde G}_n$ are given in Table 2 in [@BrKr1999] for $n=1,\dots,30$. Empirically, i.e. 
from an investigation of the numerical values of ${\tilde G}_1,\dots,{\tilde G}_{30}$, the following factorial growth in large order is derived [@BrKr1999], $${\tilde G}_n \sim {\rm const.'} \; \times \; 2^{n-1} \, \Gamma(n+1/2)\,, \qquad n\to\infty\,.$$ This leads to the following asymptotics for the perturbative coefficients (up to a constant prefactor), $$c_n \sim (-1)^n \frac{\Gamma(n+1/2)}{2^n} \,, \qquad n\to\infty\,.$$ This implies that an asymptotically optimal choice [@JeWeSo2000] for the $\lambda$-parameter in (\[BorelTrans\]) is $\lambda=1/2$. The first pole of the Borel transform (\[BorelTrans\]) is therefore expected at $$u = {\tilde u}^{\rm hopf}_0 = -2\,.$$ A rescaling of the Borel variable according to $u \to 2\,u$ in (\[BorelIntegral\]) enables the application of the resummation method defined in Eqs. (\[power\])–(\[AccelTrans\]). In Table \[table1\], numerical values for the transforms ${\cal T}''_m \gamma_{\rm hopf}(g)$ are given, which have been evaluated according to Eq. (\[AccelTrans\]). The transformation order is in the range $m=28,~29,~30$, and we consider coupling parameters $g=5.0,~5.5,~6.0$ and $g=10.0$. The numerical values of the transforms display apparent convergence to about 9 significant figures for $g \leq 6.0$ and to about 7 figures for $g=10.0$. In Table \[table2\], numerical values for the transforms ${\cal T}''_m {\tilde \gamma}_{\rm hopf}(g)$ calculated according to Eq. (\[AccelTrans\]) are shown in the range $m=28,~29,~30$ for (large) coupling strengths $g=5.0,~5.5,~6.0$. Additionally, the value $g = 30^2/(4\,\pi)^2 = 5.69932\dots$ is considered as a special case (as it has been done in [@BrKr1999]). Again, the numerical values of the transforms display apparent convergence to about 9 significant figures. At large coupling $g = 12.0$, the apparent convergence of the transforms suggests the following values: $\gamma_{\rm hopf}(12.0) = -0.939\,114\,3(2)$ and ${\tilde \gamma}_{\rm hopf}(12.0) = -3.287\,176\,9(2)$.
The numerical results for the Yukawa case, i.e. for the function ${\tilde \gamma}_{\rm hopf}$, have recently been confirmed by an improved analytic, nonperturbative investigation [@BrKr2000prep] which extends the perturbative calculation [@BrKr1999]. We note that the transforms ${\cal T}'_m \gamma_{\rm hopf}(g)$ and ${\cal T}'_m {\tilde \gamma}_{\rm hopf}(g)$ calculated according to Eq. (\[CaFiTrans\]), i.e. by the unmodified conformal mapping, typically exhibit apparent convergence to 5–6 significant figures in the transformation order $m=28,~29,~30$ and at large coupling $g \geq 5$. Specifically, the numerical values for $g=5.0$ are $$\begin{aligned} {\cal T}'_{28} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~567~294\,, \nonumber\\[2ex] {\cal T}'_{29} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~564~509\,, \nonumber\\[2ex] {\cal T}'_{30} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~563~626\,. \nonumber\end{aligned}$$ These results, when compared to the data in Table \[table1\], exemplify the acceleration of the convergence by the additional Padé approximation of the Borel transform [*expressed as a function of the conformal variable*]{} \[see Eq. (\[ConformalPade\])\]. It is not claimed here that the resummation method defined in Eqs. (\[power\])–(\[AccelTrans\]) necessarily provides the fastest possible rate of convergence for the perturbation series defined in Eq. (\[gammaPhi4\]) and (\[gammaYukawa\]). Further improvements should be feasible, especially if particular properties of the input series are known and exploited (see in part the methods described in [@JeWeSo2000]). We also note possible improvements based on a large-coupling expansion [@We1996d], in particular for excessively large values of the coupling parameter $g$, or methods based on order-dependent mappings (see [@SeZJ1979; @LGZJ1983] or the discussion following Eq. (41.67) in [@ZJ1996]). 
The conformal mapping [@CaFi1999; @CaFi2000] is capable of accomplishing the analytic continuation of the Borel transform (\[BorelTrans\]) beyond the circle of convergence. Padé approximants, applied directly to the partial sums of the Borel transform (\[PartialSum\]), provide an alternative to this method [@Raczka1991; @Pi1999; @BrKr1999; @Je2000; @JeWeSo2000]. Improved rates of convergence can be achieved when the convergence of the transforms obtained by conformal mapping in Eq. (\[PartialSumConformal\]) is accelerated by evaluating Padé approximants as in Eq. (\[ConformalPade\]), and conditions on analyticity domains can be relaxed in a favorable way when these methods are combined with the integration contours from Ref. [@Je2000]. Numerical results for the resummed values of the perturbation series (\[gammaPhi4\]) and (\[gammaYukawa\]) are provided in Tables \[table1\] and \[table2\]. By the improved conformal mapping and other optimized resummation techniques (see, e.g., the methods introduced in Ref. [@JeWeSo2000]), the applicability of perturbative (small-coupling) expansions can be extended to the regime of large coupling while still leading to results of relatively high accuracy.\ U.J. acknowledges helpful conversations with E. J. Weniger, I. Nándori, S. Roether and P. J. Mohr. G.S. acknowledges continued support from BMBF, DFG and GSI. [10]{} J. C. LeGuillou and J. Zinn-Justin, [*Large-Order Behaviour of Perturbation Theory*]{} (North-Holland, Amsterdam, 1990). J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{}, 3rd ed. (Clarendon Press, Oxford, 1996). J. Fischer, Int. J. Mod. Phys. A [**12**]{}, 3625 (1997). U. D. Jentschura, E. Weniger, and G. Soff, Asymptotic Improvement of Resummation and Perturbative Predictions, Los Alamos preprint hep-ph/0005198, submitted. D. Broadhurst and D. Kreimer, Phys. Lett. B [**475**]{}, 63 (2000). I. Caprini and J. Fischer, Phys. Rev. D [**60**]{}, 054014 (1999). I. Caprini and J.
Fischer, Convergence of the expansion of the Laplace-Borel integral in perturbative QCD improved by conformal mapping, Los Alamos preprint hep-ph/0002016. U. D. Jentschura, Resummation of Nonalternating Divergent Perturbative Expansions, Los Alamos preprint hep-ph/0001135, Phys. Rev. D (in press). P. A. Raczka, Phys. Rev. D [**43**]{}, R9 (1991). M. Pindor, Padé Approximants and Borel Summation for QCD Perturbation Series, Los Alamos preprint hep-th/9903151. R. Seznec and J. Zinn-Justin, J. Math. Phys. [**20**]{}, 1398 (1979). J. C. Le Guillou and J. Zinn-Justin, Ann. Phys. (N. Y.) [**147**]{}, 57 (1983). R. Guida, K. Konishi, and H. Suzuki, Ann. Phys. (N. Y.) [**241**]{}, 152 (1995). D. J. Broadhurst, P. A. Baikov, V. A. Ilyin, J. Fleischer, O. V. Tarasov, and V. A. Smirnov, Phys. Lett. B [**329**]{}, 103 (1994). G. Altarelli, P. Nason, and G. Ridolfi, Z. Phys. C [**68**]{}, 257 (1995). D. E. Soper and L. R. Surguladze, Phys. Rev. D [**54**]{}, 4566 (1996). K. G. Chetyrkin, J. H. Kühn, and M. Steinhauser, Phys. Lett. B [**371**]{}, 93 (1996). K. G. Chetyrkin, J. H. Kühn, and M. Steinhauser, Nucl. Phys. B [**482**]{}, 213 (1996). K. G. Chetyrkin, R. Harlander, and M. Steinhauser, Phys. Rev. D [**58**]{}, 014012 (1998). G. A. Baker and P. Graves-Morris, [*Padé approximants*]{}, 2nd ed. (Cambridge University Press, Cambridge, 1996). C. M. Bender and S. A. Orszag, [*Advanced Mathematical Methods for Scientists and Engineers*]{} (McGraw-Hill, New York, NY, 1978). D. Broadhurst and D. Kreimer, in preparation (2000). E. J. Weniger, Phys. Rev. Lett. [**77**]{}, 2859 (1996).
**A subdiffusive behaviour of recurrent random walk** **in random environment on a regular tree**

by Yueyun Hu $\;$and$\;$ Zhan Shi

*Université Paris XIII & Université Paris VI*

This version: March 11, 2006

[***Summary.***]{} We are interested in the random walk in random environment on an infinite tree. Lyons and Pemantle [@lyons-pemantle] give a precise recurrence/transience criterion. Our paper focuses on the almost sure asymptotic behaviours of a recurrent random walk $(X_n)$ in random environment on a regular tree, which is closely related to Mandelbrot’s multiplicative cascade [@mandelbrot]. We prove, under some general assumptions on the distribution of the environment, the existence of a new exponent $\nu\in (0, {1\over 2}]$ such that $\max_{0\le i \le n} |X_i|$ behaves asymptotically like $n^{\nu}$. The value of $\nu$ is explicitly formulated in terms of the distribution of the environment.

[***Keywords.***]{} Random walk, random environment, tree, Mandelbrot’s multiplicative cascade.

[***2000 Mathematics Subject Classification.***]{} 60K37, 60G50.

Introduction {#s:intro}
============

Random walk in random environment (RWRE) is a fundamental object in the study of random phenomena in random media. RWRE on $\z$ exhibits rich regimes in the transient case (Kesten, Kozlov and Spitzer [@kesten-kozlov-spitzer]), as well as a slow logarithmic movement in the recurrent case (Sinai [@sinai]). On $\z^d$ (for $d\ge 2$), the study of RWRE remains a big challenge to mathematicians (Sznitman [@sznitman], Zeitouni [@zeitouni]). The present paper focuses on RWRE on a regular rooted tree, which can be viewed as an infinite-dimensional RWRE.
Our main result reveals a rich regime à la Kesten–Kozlov–Spitzer, but this time even in the recurrent case; it also strongly suggests the existence of a slow logarithmic regime à la Sinai. Let $\T$ be a $\deg$-ary tree ($\deg\ge 2$) rooted at $e$. For any vertex $x\in \T \backslash \{ e\}$, let ${\buildrel \leftarrow \over x}$ denote the first vertex on the shortest path from $x$ to the root $e$, and $|x|$ the number of edges on this path (notation: $|e|:= 0$). Thus, each vertex $x\in \T \backslash \{ e\}$ has one parent ${\buildrel \leftarrow \over x}$ and $\deg$ children, whereas the root $e$ has $\deg$ children but no parent. We also write ${\buildrel \Leftarrow \over x}$ for the parent of ${\buildrel \leftarrow \over x}$ (for $x\in \T$ such that $|x|\ge 2$). Let $\omega:= (\omega(x,y), \, x,y\in \T)$ be a family of non-negative random variables such that $\sum_{y\in \T} \omega(x,y)=1$ for any $x\in \T$. Given a realization of $\omega$, we define a Markov chain $X:= (X_n, \, n\ge 0)$ on $\T$ by $X_0 =e$, and whose transition probabilities are $$P_\omega(X_{n+1}= y \, | \, X_n =x) = \omega(x, y) .$$ Let $\P$ denote the distribution of $\omega$, and let $\p (\cdot) := \int P_\omega (\cdot) \P(\! \d \omega)$. The process $X$ is a $\T$-valued RWRE. (By informally taking $\deg=1$, $X$ would become a usual RWRE on the half-line $\z_+$.) For general properties of tree-valued processes, we refer to Peres [@peres] and Lyons and Peres [@lyons-peres]. See also Duquesne and Le Gall [@duquesne-le-gall] and Le Gall [@le-gall] for continuous random trees. For a list of motivations to study RWRE on a tree, see Pemantle and Peres [@pemantle-peres1], p. 106. We define $$A(x) := {\omega({\buildrel \leftarrow \over x}, x) \over \omega({\buildrel \leftarrow \over x}, {\buildrel \Leftarrow \over x})} , \qquad x\in \T, \; |x|\ge 2. 
\label{A}$$ Following Lyons and Pemantle [@lyons-pemantle], we assume throughout the paper that $(\omega(x,\bullet))_{x\in \T\backslash \{ e\} }$ is a family of i.i.d. [*non-degenerate*]{} random vectors and that $(A(x), \; x\in \T, \; |x|\ge 2)$ are identically distributed. We also assume the existence of $\varepsilon_0>0$ such that $\omega(x,y) \ge \varepsilon_0$ if either $x= {\buildrel \leftarrow \over y}$ or $y= {\buildrel \leftarrow \over x}$, and $\omega(x,y) =0$ otherwise; in words, $(X_n)$ is a nearest-neighbour walk, satisfying an ellipticity condition. Let $A$ denote a generic random variable having the common distribution of $A(x)$ (for $|x| \ge 2$). Define $$p := \inf_{t\in [0,1]} \E (A^t) . \label{p}$$ We recall a recurrence/transience criterion from Lyons and Pemantle ([@lyons-pemantle], Theorem 1 and Proposition 2). [**Theorem A (Lyons and Pemantle [@lyons-pemantle])**]{} [*With $\p$-probability one, the walk $(X_n)$ is recurrent or transient, according to whether $p\le {1\over \deg}$ or $p>{1\over \deg}$. It is, moreover, positive recurrent if $p<{1\over \deg}$.*]{} We study the recurrent case $p\le {1\over \deg}$ in this paper. Our first result, which is not deep, concerns the positive recurrent case $p< {1\over \deg}$. \[t:posrec\] If $p<{1\over \deg}$, then $$\lim_{n\to \infty} \, {1\over \log n} \, \max_{0\le i\le n} |X_i| = {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $\p$-a.s.}, \label{posrec}$$ where the constant $q$ is defined in $(\ref{q})$, and lies in $(0, {1\over \deg})$ when $p<{1\over \deg}$. Despite the warning of Pemantle [@pemantle] (“there are many papers proving results on trees as a somewhat unmotivated alternative …to Euclidean space"), it seems to be of particular interest to study the more delicate situation $p={1\over \deg}$ that turns out to possess rich regimes. 
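To make the criterion concrete, the following sketch evaluates $p = \inf_{t\in[0,1]} \E(A^t)$ numerically for two hypothetical two-point environment laws on a binary tree ($\deg = 2$); both the laws and the grid-search approach are our own illustrative choices, not taken from the paper.

```python
DEG = 2  # binary tree -- an illustrative choice

def p_value(a1, a2, grid=100_001):
    """p = inf_{t in [0,1]} E(A^t) for A uniform on {a1, a2}, via grid search."""
    return min(0.5 * (a1 ** (i / (grid - 1)) + a2 ** (i / (grid - 1)))
               for i in range(grid))

# A uniform on {1/8, 1/2}: E(A^t) decreases on [0,1], so p = E(A) = 0.3125 < 1/2,
# and the walk is (positive) recurrent by Theorem A.
# A uniform on {1/2, 2}: E(A^t) = cosh(t log 2) >= 1, so p = 1 > 1/2: transient.
for a1, a2 in [(0.125, 0.5), (0.5, 2.0)]:
    p = p_value(a1, a2)
    verdict = "recurrent" if p <= 1.0 / DEG else "transient"
    print(f"A uniform on {{{a1}, {a2}}}: p = {p:.4f} -> {verdict}")
```

Since $t\mapsto\E(A^t)$ is convex with value $1$ at $t=0$, a fine grid on $[0,1]$ is enough for such smooth two-point laws.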
We prove that, similarly to the Kesten–Kozlov–Spitzer theorem for [*transient*]{} RWRE on the line, $(X_n)$ enjoys, even in the recurrent case, an interesting subdiffusive behaviour. To state our main result, we define $$\begin{aligned} \kappa &:=& \inf\left\{ t>1: \; \E(A^t) = {1\over \deg} \right\} \in (1, \infty], \qquad (\inf \emptyset=\infty) \label{kappa} \\ \psi(t) &:=& \log \E \left( A^t \right) , \qquad t\ge 0. \label{psi}\end{aligned}$$ We use the notation $a_n \approx b_n$ to denote $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$. \[t:nullrec\] If $p={1\over \deg}$ and if $\psi'(1)<0$, then $$\max_{0\le i\le n} |X_i| \; \approx\; n^\nu, \qquad \hbox{\rm $\p$-a.s.}, \label{nullrec}$$ where $\nu=\nu(\kappa)$ is defined by $$\nu := 1- {1\over \min\{ \kappa, 2\} } = \left\{ \begin{array}{ll} (\kappa-1)/\kappa, & \mbox{if $\;\kappa \in (1,2]$}, \\ \\ 1/2 & \mbox{if $\;\kappa \in (2, \infty].$} \end{array} \right. \label{theta}$$ [**Remark.**]{} (i) It is known (Menshikov and Petritis [@menshikov-petritis]) that if $p={1\over \deg}$ and $\psi'(1)<0$, then for $\P$-almost all environment $\omega$, $(X_n)$ is null recurrent. \(ii) For the value of $\kappa$, see Figure 1. Under the assumptions $p={1\over \deg}$ and $\psi'(1)<0$, the value of $\kappa$ lies in $(2, \infty]$ if and only if $\E (A^2) < {1\over \deg}$; and $\kappa=\infty$ if moreover $\hbox{ess sup}(A) \le 1$. \(iii) Since the walk is recurrent, $\max_{0\le i\le n} |X_i|$ cannot be replaced by $|X_n|$ in (\[posrec\]) and (\[nullrec\]). \(iv) Theorem \[t:nullrec\], which could be considered as a (weaker) analogue of the Kesten–Kozlov–Spitzer theorem, shows that tree-valued RWRE has even richer regimes than RWRE on $\z$. In fact, recurrent RWRE on $\z$ is of order of magnitude $(\log n)^2$, and has no $n^a$ (for $0<a<1$) regime. \(v) The case $\psi'(1)\ge 0$ leads to a phenomenon similar to Sinai’s slow movement, and is studied in a forthcoming paper. The rest of the paper is organized as follows. 
Section \[s:posrec\] is devoted to the proof of Theorem \[t:posrec\]. In Section \[s:proba\], we collect some elementary inequalities, which will be of frequent use later on. Theorem \[t:nullrec\] is proved in Section \[s:nullrec\], by means of a result (Proposition \[p:beta-gamma\]) concerning the solution of a recurrence equation which is closely related to Mandelbrot’s multiplicative cascade. We prove Proposition \[p:beta-gamma\] in Section \[s:beta-gamma\]. Throughout the paper, $c$ (possibly with a subscript) denotes a finite and positive constant; we write $c(\omega)$ instead of $c$ when the value of $c$ depends on the environment $\omega$. Proof of Theorem \[t:posrec\] {#s:posrec} ============================= We first introduce the constant $q$ in the statement of Theorem \[t:posrec\], which is defined without the assumption $p< {1\over \deg}$. Let $$\varrho(r) := \inf_{t\ge 0} \left\{ r^{-t} \, \E(A^t) \right\} , \qquad r>0.$$ Let $\underline{r} >0$ be such that $$\log \underline{r} = \E(\log A) .$$ We mention that $\varrho(r)=1$ for $r\in (0, \underline{r}]$, and that $\varrho(\cdot)$ is continuous and (strictly) decreasing on $[\underline{r}, \, \Theta)$ (where $\Theta:= \hbox{ess sup}(A) < \infty$), and $\varrho(\Theta) = \P (A= \Theta)$. Moreover, $\varrho(r)=0$ for $r> \Theta$. See Chernoff [@chernoff]. We define $$\overline{r} := \inf\left\{ r>0: \; \varrho(r) \le {1\over \deg} \right\}.$$ Clearly, $\underline{r} < \overline{r}$. We define $$q:= \sup_{r\in [\underline{r}, \, \overline{r}]} r \varrho(r). \label{q}$$ The following elementary lemma tells us that, instead of $p$, we can also use $q$ in the recurrence/transience criterion of Lyons and Pemantle. \[l:pq\] We have $q>{1\over \deg}$ $($resp., $q={1\over \deg}$, $q<{1\over \deg})$ if and only if $p>{1\over \deg}$ $($resp., $p={1\over \deg}$, $p<{1\over \deg})$. [*Proof of Lemma \[l:pq\].*]{} By Lyons and Pemantle ([@lyons-pemantle], p. 129), $p= \sup_{r\in (0, \, 1]} r \varrho (r)$. 
Since $\varrho(r) =1$ for $r\in (0, \, \underline{r}]$, there exists $\min\{\underline{r}, 1\}\le r^* \le 1$ such that $p= r^* \varrho (r^*)$. \(i) Assume $p<{1\over \deg}$. Then $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p < {1\over \deg}$, which, by definition of $\overline{r}$, implies $\overline{r} < 1$. Therefore, $q \le p <{1\over \deg}$. \(ii) Assume $p\ge {1\over \deg}$. We have $\varrho (r^*) \ge p \ge {1\over \deg}$, which yields $r^* \le \overline{r}$. If $\underline{r} \le 1$, then $r^*\ge \underline{r}$, and thus $p=r^* \varrho (r^*) \le q$. If $\underline{r} > 1$, then $p=1$, and thus $q\ge \underline{r}\, \varrho (\underline{r}) = \underline{r} > 1=p$. We have therefore proved that $p\ge {1\over \deg}$ implies $q\ge p$. If moreover $p>{1\over \deg}$, then $q \ge p>{1\over \deg}$. \(iii) Assume $p={1\over \deg}$. We already know from (ii) that $q \ge p$. On the other hand, $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p = {1\over \deg}$, implying $\overline{r} \le 1$. Thus $q \le p$. As a consequence, $q=p={1\over \deg}$.$\Box$ Having defined $q$, the next step in the proof of Theorem \[t:posrec\] is to compute invariant measures $\pi$ for $(X_n)$. We first introduce some notation on the tree. For any $m\ge 0$, let $$\T_m := \left\{x \in \T: \; |x| = m \right\} .$$ For any $x\in \T$, let $\{ x_i \}_{1\le i\le \deg}$ be the set of children of $x$. If $\pi$ is an invariant measure, then $$\pi (x) = {\omega ({\buildrel \leftarrow \over x}, x) \over \omega (x, {\buildrel \leftarrow \over x})} \, \pi({\buildrel \leftarrow \over x}), \qquad \forall \, x\in \T \backslash \{ e\}.$$ By induction, this leads to (recalling $A$ from (\[A\])): for $x\in \T_m$ ($m\ge 1$), $$\pi (x) = {\pi(e)\over \omega (x, {\buildrel \leftarrow \over x})} {\omega (e, x^{(1)}) \over A(x^{(1)})} \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) ,$$ where $]\! ] e, x]\! 
]$ denotes the shortest path $x^{(1)}$, $x^{(2)}$, $\cdots$, $x^{(m)} =: x$ from the root $e$ (but excluded) to the vertex $x$. The identity holds for [*any*]{} choice of $(A(e_i), \, 1\le i\le \deg)$. We choose $(A(e_i), \, 1\le i\le \deg)$ to be a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$. By the ellipticity condition on the environment, we can take $\pi(e)$ to be sufficiently small so that for some $c_0\in (0, 1]$, $$c_0\, \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) \le \pi (x) \le \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) . \label{pi}$$ By Chebyshev’s inequality, for any $r>\underline{r}$, $$\max_{x\in \T_n} \P \left\{ \pi (x) \ge r^n\right\} \le \varrho(r)^n. \label{chernoff}$$ Since $\# \T_n = \deg^n$, this gives $\E (\#\{ x\in \T_n: \; \pi (x)\ge r^n \} ) \le \deg^n \varrho(r)^n$. By Chebyshev’s inequality and the Borel–Cantelli lemma, for any $r>\underline{r}$ and $\P$-almost surely for all large $n$, $$\#\left\{ x\in \T_n: \; \pi (x) \ge r^n \right\} \le n^2 \deg^n \varrho(r)^n. \label{Jn-ub1}$$ On the other hand, by (\[chernoff\]), $$\P \left\{ \exists x\in \T_n: \pi (x) \ge r^n\right\} \le \deg^n \varrho (r)^n.$$ For $r> \overline{r}$, the expression on the right-hand side is summable in $n$. By the Borel–Cantelli lemma, for any $r>\overline{r}$ and $\P$-almost surely for all large $n$, $$\max_{x\in \T_n} \pi (x) < r^n. \label{Jn-ub}$$ [*Proof of Theorem \[t:posrec\]: upper bound.*]{} Fix $\varepsilon>0$ such that $q+ 3\varepsilon < {1\over \deg}$. We follow the strategy given in Liggett ([@liggett], p. 
103) by introducing a positive recurrent birth-and-death chain $(\widetilde{X_j}, \, j\ge 0)$, starting from $0$, with transition probability from $i$ to $i+1$ (for $i\ge 1$) equal to $${1\over \widetilde{\pi} (i)} \, \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x})) ,$$ where $\widetilde{\pi} (i) := \sum_{x\in \T_i} \pi(x)$. We note that $\widetilde{\pi}$ is a finite invariant measure for $(\widetilde{X_j})$. Let $$\tau_n := \inf \left\{ i\ge 1: \, X_i \in \T_n\right\}, \qquad n\ge 0.$$ By Liggett ([@liggett], Theorem II.6.10), for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le \widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0),$$ where $\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0)$ is the probability that $(\widetilde{X_j})$ hits $n$ before returning to $0$. According to Hoel et al. ([@hoel-port-stone], p. 32, Formula (61)), $$\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0) = c_1(\omega) \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x}))}\right)^{\! \! -1} ,$$ where $c_1(\omega)\in (0, \infty)$ depends on $\omega$. We arrive at the following estimate: for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le c_1(\omega) \, \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)}\right)^{\! \! -1} . \label{liggett}$$ We now estimate $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)}$. For any fixed $0=r_0< \underline{r} < r_1 < \cdots < r_\ell = \overline{r} <r_{\ell +1}$, $$\sum_{x\in \T_i} \pi(x) \le \sum_{j=1}^{\ell+1} (r_j)^i \# \left\{ x\in \T_i: \pi(x) \ge (r_{j-1})^i \right\} + \sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x).$$ By (\[Jn-ub\]), $\sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x) =0$ $\P$-almost surely for all large $i$.
It follows from (\[Jn-ub1\]) that $\P$-almost surely, for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} (r_j)^i i^2 \, \deg^i \varrho (r_{j-1})^i.$$ Recall that $q= \sup_{r\in [\underline{r}, \, \overline{r}] } r \, \varrho(r) \ge \underline{r} \, \varrho (\underline{r}) = \underline{r}$. We choose $r_1:= \underline{r} + \varepsilon \le q+\varepsilon$. We also choose $\ell$ sufficiently large and $(r_j)$ sufficiently close to each other so that $r_j \, \varrho(r_{j-1}) < q+\varepsilon$ for all $2\le j\le \ell+1$. Thus, $\P$-almost surely for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} i^2 \, \deg^i (q+\varepsilon)^i = (r_1)^i \deg^i + \ell \, i^2 \, \deg^i (q+\varepsilon)^i,$$ which implies (recall: $\deg(q+\varepsilon)<1$) that $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)} \ge {c_2\over n^2\, \deg^n (q+\varepsilon)^n}$. Plugging this into (\[liggett\]) yields that, $\P$-almost surely for all large $n$, $$P_\omega (\tau_n< \tau_0) \le c_3(\omega)\, n^2\, \deg^n (q+\varepsilon)^n \le [(q+2\varepsilon)\deg]^n.$$ In particular, by writing $L(\tau_n):= \# \{ 1\le i \le \tau_n: \, X_i = e\}$, we obtain: $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \ge \left\{ 1- [(q+2\varepsilon)\deg]^n \right\}^j ,$$ which, by the Borel–Cantelli lemma, yields that, for $\P$-almost all $\omega$, $P_\omega$-almost surely for all large $n$, $$L(\tau_n) \ge {1\over [(q+3\varepsilon) \deg]^n} .$$ Since $\{ L(\tau_n) \ge j \} \subset \{ \max_{0\le k \le 2j} |X_k| < n\}$, and since $\varepsilon$ can be as close to 0 as possible, we obtain the upper bound in Theorem \[t:posrec\].$\Box$ [*Proof of Theorem \[t:posrec\]: lower bound.*]{} Assume $p< {1\over \deg}$. Recall that in this case, we have $\overline{r}<1$. Let $\varepsilon>0$ be small.
Let $r \in (\underline{r}, \, \overline{r})$ be such that $\varrho(r) > {1\over \deg} \ee^\varepsilon$ and that $r\varrho(r) \ge q\ee^{-\varepsilon}$. Let $L$ be a large integer with $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and satisfying (\[GW\]) below. We start by constructing a Galton–Watson tree $\G$, which is a certain subtree of $\T$. The first generation of $\G$, denoted by $\G_1$ and defined below, consists of vertices $x\in \T_L$ satisfying a certain property. The second generation of $\G$ is formed by applying the same procedure to each element of $\G_1$, and so on. To be precise, $$\G_1 = \G_1 (L,r) := \left\{ x\in \T_L: \, \min_{z\in ]\! ] e, \, x ]\! ]} \prod_{y\in ]\! ] e, \, z]\! ]} A(y) \ge r^L \right\} ,$$ where $]\! ]e, \, x ]\! ]$ denotes as before the set of vertices (excluding $e$) lying on the shortest path relating $e$ and $x$. More generally, if $\G_i$ denotes the $i$-th generation of $\G$, then $$\G_{n+1} := \bigcup_{u\in \G_n } \left\{ x\in \T_{(n+1)L}: \, \min_{z\in ]\! ] u, \, x ]\! ]} \prod_{y\in ]\! ] u, \, z]\! ]} A(y) \ge r^L \right\} , \qquad n=1,2, \dots$$ We claim that it is possible to choose $L$ sufficiently large such that $$\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L . \label{GW}$$ Note that $\ee^{-\varepsilon L} \deg^L \varrho(r)^L>1$, since $\varrho(r) > {1\over \deg} \ee^\varepsilon$. We admit (\[GW\]) for the moment, which implies that $\G$ is super-critical. By theory of branching processes (Harris [@harris], p. 13), when $n$ goes to infinity, ${\# \G_{n/L} \over [\E(\# \G_1)]^{n/L} }$ converges almost surely (and in $L^2$) to a limit $W$ with $\P(W>0)>0$. Therefore, on the event $\{ W>0\}$, for all large $n$, $$\# (\G_{n/L}) \ge c_4(\omega) [\E(\# \G_1)]^{n/L}. \label{GnL}$$ (For notational simplification, we only write our argument for the case when $n$ is a multiple of $L$. It is clear that our final conclusion holds for all large $n$.) 
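The normalization fact quoted from Harris can be probed by simulation. The sketch below is a hedged illustration with an offspring law of our own choosing (Binomial$(3,\,0.6)$, mean $m=1.8$), not the offspring law of the tree $\G$ constructed above: it estimates the martingale $Z_n/m^n$, whose mean is exactly $1$, and the fraction of surviving runs, in line with $\P(W>0)>0$.

```python
import random

# Hedged illustration (our own parameters, NOT the offspring law of the
# Galton-Watson tree G above): a supercritical branching process with
# offspring distribution Binomial(3, 0.6), hence mean m = 1.8 > 1.
random.seed(7)
N_GEN, TRIALS = 12, 400
N_OFF, P_OFF = 3, 0.6
m = N_OFF * P_OFF

def normalized_population():
    """One run of the process; returns the martingale value Z_n / m^n."""
    z = 1
    for _ in range(N_GEN):
        # each of the z individuals has Binomial(N_OFF, P_OFF) children
        z = sum(1 for _ in range(z * N_OFF) if random.random() < P_OFF)
        if z == 0:
            return 0.0
    return z / m ** N_GEN

ws = [normalized_population() for _ in range(TRIALS)]
mean_w = sum(ws) / TRIALS
survival = sum(1 for w in ws if w > 0) / TRIALS
print(f"sample mean of Z_n/m^n: {mean_w:.3f} (martingale mean is exactly 1)")
print(f"fraction of runs with W > 0: {survival:.2f}")
```

On the surviving runs, $Z_n$ grows geometrically at rate $m$, which is exactly the lower bound (\[GnL\]) exploited in the proof.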
Recall that according to the Dirichlet principle (Griffeath and Liggett [@griffeath-liggett]), $$\begin{aligned} 2\pi(e) P_\omega \left\{ \tau_n < \tau_0 \right\} &=&\inf_{h: \, h(e)=1, \, h(z)=0, \, \forall |z| \ge n} \sum_{x,y\in \T} \pi(x) \omega(x,y) (h(x)- h(y))^2 \nonumber \\ &\ge& c_5\, \inf_{h: \, h(e)=1, \, h(z)=0, \, \forall z\in \T_n} \sum_{|x|<n} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2, \label{durrett}\end{aligned}$$ the last inequality following from ellipticity condition on the environment. Clearly, $$\begin{aligned} \sum_{|x|<n} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2 &=&\sum_{i=0}^{(n/L)-1} \sum_{x: \, iL \le |x| < (i+1) L} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2 \\ &:=&\sum_{i=0}^{(n/L)-1} I_i,\end{aligned}$$ with obvious notation. For any $i$, $$I_i \ge \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2,$$ where $v^\uparrow \in \G_i$ denotes the unique element of $\G_i$ lying on the path $[ \! [ e, v ]\! ]$ (in words, $v^\uparrow$ is the parent of $v$ in the Galton–Watson tree $\G$), and the factor $\deg^{-L}$ comes from the fact that each term $\pi(x) (h(x)- h(y))^2$ is counted at most $\deg^L$ times in the sum on the right-hand side. By (\[pi\]), for $x\in [\! [ v^\uparrow, v[\! [$, $\pi(x) \ge c_0 \, \prod_{u\in ]\! ]e, x]\! ]} A(u)$, which, by the definition of $\G$, is at least $c_0 \, r^{(i+1)L}$. Therefore, $$\begin{aligned} I_i &\ge& c_0 \, \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} r^{(i+1)L} (h(x)- h(y))^2 \\ &\ge&c_0 \, \deg^{-L} r^{(i+1)L} \sum_{v\in \G_{i+1}} \, \sum_{y\in ]\! ] v^\uparrow, v]\! ]} (h({\buildrel \leftarrow \over y})- h(y))^2 .\end{aligned}$$ By the Cauchy–Schwarz inequality, $\sum_{y\in ]\! ] v^\uparrow, v]\! 
]} (h({\buildrel \leftarrow \over y})- h(y))^2 \ge {1\over L} (h(v^\uparrow)-h(v))^2$. Accordingly, $$I_i \ge c_0 \, {\deg^{-L} r^{(i+1)L}\over L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)-h(v))^2 ,$$ which yields $$\begin{aligned} \sum_{i=0}^{(n/L)-1} I_i &\ge& c_0 \, {\deg^{-L}\over L} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)- h(v))^2 \\ &\ge& c_0 \, {\deg^{-L}\over L} \deg^{-n/L} \sum_{v\in \G_{n/L}} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2 ,\end{aligned}$$ where, $e=: v^{(0)}$, $v^{(1)}$, $v^{(2)}$, $\cdots$, $v^{(n/L)} := v$, is the shortest path (in $\G$) from $e$ to $v$, and the factor $\deg^{-n/L}$ results from the fact that each term $r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2$ is counted at most $\deg^{n/L}$ times in the sum on the right-hand side. By the Cauchy–Schwarz inequality, for all $h: \T\to \r$ with $h(e)=1$ and $h(z)=0$ ($\forall z\in \T_n$), we have $$\begin{aligned} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2 &\ge&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \, \left( \sum_{i=0}^{(n/L)-1} (h(v^{(i)})- h(v^{(i+1)})) \right)^{\! \! 2} \\ &=&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \ge c_6 \, r^n.\end{aligned}$$ Therefore, $$\sum_{i=0}^{(n/L)-1} I_i \ge c_0c_6 \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \# (\G_{n/L}) \ge c_0 c_6 c_4(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} },$$ the last inequality following from (\[GnL\]). Plugging this into (\[durrett\]) yields that for all large $n$, $$P_\omega \left\{ \tau_n < \tau_0 \right\} \ge c_7(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} } .$$ Recall from (\[GW\]) that $\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L$. 
Therefore, on $\{W>0\}$, for all large $n$, $P_\omega \{ \tau_n < \tau_0 \} \ge c_8(\omega) (\ee^{-\varepsilon} \deg^{-1/L} \deg r \varrho(r))^n$, which is no smaller than $c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n$ (since $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and $r \varrho(r) \ge q \ee^{-\varepsilon}$ by assumption). Thus, by writing $L(\tau_n) := \#\{ 1\le i\le \tau_n: \; X_i = e \}$ as before, we have, on $\{ W>0 \}$, $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \le [1- c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n ]^j.$$ By the Borel–Cantelli lemma, for $\P$-almost all $\omega$, on $\{W>0\}$, we have, $P_\omega$-almost surely for all large $n$, $L(\tau_n) \le 1/(\ee^{-4\varepsilon} q \deg)^n$, i.e., $$\max_{0\le k\le \tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor )} |X_k| \ge n ,$$ where $0<\tau_0(1)<\tau_0(2)<\cdots$ are the successive return times to the root $e$ by the walk (thus $\tau_0(1) = \tau_0$). Since the walk is positive recurrent, $\tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor ) \sim {1\over (\ee^{-4\varepsilon} q \deg)^n} E_\omega [\tau_0]$ (for $n\to \infty$), $P_\omega$-almost surely ($a_n \sim b_n$ meaning $\lim_{n\to \infty} {a_n \over b_n} =1$). Therefore, for $\P$-almost all $\omega \in \{ W>0\}$, $$\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n} \ge {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $P_\omega$-a.s.}$$ Recall that $\P\{ W>0\}>0$. Since modifying a finite number of transition probabilities does not change the value of $\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n}$, we obtain the lower bound in Theorem \[t:posrec\]. It remains to prove (\[GW\]). Let $(A^{(i)})_{i\ge 1}$ be an i.i.d. sequence of random variables distributed as $A$.
Clearly, for any $\delta\in (0,1)$, $$\begin{aligned} \E( \# \G_1) &=& \deg^L \, \P\left( \, \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\right) \\ &\ge& \deg^L \, \P \left( \, (1-\delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\right) .\end{aligned}$$ We define a new probability $\Q$ by $${\mathrm{d} \Q \over \mathrm{d}\P} := {\ee^{t \log A} \over \E(\ee^{t \log A})} = {A^t \over \E(A^t)},$$ for some $t\ge 0$. Then $$\begin{aligned} \E(\# \G_1) &\ge& \deg^L \, \E_\Q \left[ \, {[\E(A^t)]^L \over \exp\{ t \sum_{i=1}^L \log A^{(i)}\} }\, {\bf 1}_{\{ (1-\delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} } \right] \\ &\ge& \deg^L \, {[\E(A^t)]^L \over r^{t (1- \delta) L} } \, \Q \left( (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L \right).\end{aligned}$$ To choose an optimal value of $t$, we fix $\widetilde{r}\in (r, \, \overline{r})$ with $\widetilde{r} < r^{1-\delta}$. Our choice of $t=t^*$ is such that $\varrho(\widetilde{r}) = \inf_{t\ge 0} \{ \widetilde{r}^{-t} \E(A^t)\} = \widetilde{r}^{-t^*} \E(A^{t^*})$. With this choice, we have $\E_\Q(\log A)=\log \widetilde{r}$, so that $\Q \{ (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} \ge c_9$. Consequently, $$\E(\# \G_1) \ge c_9 \, \deg^L \, {[\E(A^{t^*})]^L \over r^{t^* (1- \delta) L} }= c_9 \, \deg^L \, {[ \widetilde{r}^{\,t^*} \varrho(\widetilde{r})]^L \over r^{t^* (1- \delta) L} } \ge c_9 \, r^{\delta t^* L} \deg^L \varrho(\widetilde{r})^L .$$ Since $\delta>0$ can be as close to $0$ as possible, the continuity of $\varrho(\cdot)$ on $[\underline{r}, \, \overline{r})$ yields (\[GW\]), and thus completes the proof of Theorem \[t:posrec\].$\Box$ Some elementary inequalities {#s:proba} ============================ We collect some elementary inequalities in this section. 
They will be of use in the next sections, in the study of the null recurrence case. \[l:exp\] Let $\xi\ge 0$ be a random variable. [(i)]{} Assume that $\e(\xi^a)<\infty$ for some $a>1$. Then for any $x\ge 0$, $${\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a} \le {\e (\xi^a) \over [\e \xi]^a} . \label{RSD}$$ [(ii)]{} If $\e (\xi) < \infty$, then for any $0 \le \lambda \le 1$ and $t \ge 0$, $$\e \left\{ \exp \left( - t\, { (\lambda+\xi)/ (1+\xi) \over \e [(\lambda+\xi)/ (1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t\, { \xi \over \e (\xi)} \right) \right\} . \label{exp}$$ [**Remark.**]{} When $a=2$, (\[RSD\]) is a special case of Lemma 6.4 of Pemantle and Peres [@pemantle-peres2]. [*Proof of Lemma \[l:exp\].*]{} We actually prove a very general result, stated as follows. Let $\varphi : (0, \infty) \to \r$ be a convex ${\cal C}^1$-function. Let $x_0 \in \r$ and let $I$ be an open interval containing $x_0$. Assume that $\xi$ takes values in a Borel set $J \subset \r$ (for the moment, we do not assume $\xi\ge 0$). Let $h: I \times J \to (0, \infty)$ and ${\partial h\over \partial x}: I \times J \to \r$ be measurable functions such that - $\e \{ h(x_0, \xi)\} <\infty$ and $\e \{ |\varphi ({ h(x_0,\xi) \over \e h(x_0, \xi)} )| \} < \infty$; - $\e[\sup_{x\in I} \{ | {\partial h\over \partial x} (x, \xi)| + |\varphi' ({h(x, \xi) \over \e h(x, \xi)} ) | \, ({| {\partial h\over \partial x} (x, \xi) | \over \e \{ h(x, \xi)\} } + {h(x, \xi) \over [\e \{ h(x, \xi)\}]^2 } | \e \{ {\partial h\over \partial x} (x, \xi) \} | )\} ] < \infty$; - both $y \to h(x_0, y)$ and $y \to { \partial \over \partial x} \log h(x,y)|_{x=x_0}$ are monotone on $J$. Then $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x, \xi)}\right) \right\} \Big|_{x=x_0} \ge 0, \qquad \hbox{\rm or}\qquad \le 0, \label{monotonie}$$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity. 
To prove (\[monotonie\]), we observe that by the integrability assumptions, $$\begin{aligned} & &{\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0} \\ &=&{1 \over ( \e h(x_0, \xi))^2}\, \e \left( \varphi'\left({ h(x_0, \xi) \over \e h(x_0, \xi)}\right) \left[ {\partial h \over \partial x} (x_0, \xi) \e h(x_0, \xi) - h(x_0, \xi) \e {\partial h \over \partial x} (x_0, \xi) \right] \right) .\end{aligned}$$ Let $\widetilde \xi$ be an independent copy of $\xi$ (so that $\e h(x_0, \widetilde\xi) = \e h(x_0, \xi)$). The expectation expression $\e(\varphi'({ h(x_0, \xi) \over \e h(x_0, \xi)}) [\cdots])$ on the right-hand side is $$\begin{aligned} &=& \e \left( \varphi'\left({ h(x_0, \xi) \over \e h(x_0, \xi)}\right) \left[ {\partial h \over \partial x} (x_0, \xi) h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial h \over \partial x} (x_0, \widetilde\xi) \right] \right) \\ &=& {1 \over 2}\, \e \left( \left[ \varphi'\left({ h(x_0, \xi) \over \e h(x_0, \xi)}\right) - \varphi'\left({ h(x_0, \widetilde\xi) \over \e h(x_0, \xi)}\right)\right] \left[ {\partial h \over \partial x} (x_0, \xi) h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial h \over \partial x} (x_0, \widetilde\xi) \right] \right) \\ &=& {1 \over 2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) ,\end{aligned}$$ where $$\eta := \left[ \varphi'\left({ h(x_0, \xi) \over \e h(x_0, \xi)}\right) - \varphi'\left({ h(x_0, \widetilde\xi) \over \e h(x_0, \xi)}\right) \right] \, \left[ {\partial \log h \over \partial x} (x_0, \xi) - {\partial \log h \over \partial x} (x_0, \widetilde\xi) \right] .$$ Therefore, $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0} \; = \; {1 \over 2( \e h(x_0, \xi))^2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) .$$ Since $\eta \ge 0$ or $\le 0$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity, this yields (\[monotonie\]).
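As an aside, inequality (\[RSD\]) is easy to probe numerically before specializing $\varphi$ and $h$. The following Monte-Carlo sketch uses a test case of our own choosing ($\xi\sim{\rm Exp}(1)$, $a=2$, $x=1$); it is a sanity check, not part of the proof.

```python
import random

# Test case of our own choosing: xi ~ Exp(1), a = 2, x = 1. Then the
# right-hand side E(xi^a)/(E xi)^a equals 2 exactly, and (RSD) asserts
#   E[(xi/(x+xi))^a] / (E[xi/(x+xi)])^a  <=  E(xi^a) / (E xi)^a.
random.seed(1)
N, a, x = 200_000, 2.0, 1.0
sample = [random.expovariate(1.0) for _ in range(N)]

def mean(values):
    return sum(values) / len(values)

lhs = mean([(v / (x + v)) ** a for v in sample]) / mean([v / (x + v) for v in sample]) ** a
rhs = mean([v ** a for v in sample]) / mean(sample) ** a

print(f"LHS = {lhs:.3f} <= RHS = {rhs:.3f}")
```

For this case the left-hand side comes out well below the right-hand side, consistent with the lemma and with the fact that the ratio is non-decreasing in $x$.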
To prove (\[RSD\]) in Lemma \[l:exp\], we take $x_0\in (0,\, \infty)$, $J= \r_+$, $I$ a finite open interval containing $x_0$ and away from 0, $\varphi(z)= z^a$, and $h(x,y)= { y \over x+ y}$, to see that the function $x\mapsto {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}$ is non-decreasing on $(0, \infty)$. By dominated convergence, $$\lim_{x \to\infty} {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}= \lim_{x \to\infty} {\e[({\xi\over 1+\xi/x})^a] \over [\e ( {\xi\over 1+\xi/x})]^a} = {\e (\xi^a) \over [\e \xi]^a} ,$$ yielding (\[RSD\]). The proof of (\[exp\]) is similar. Indeed, applying (\[monotonie\]) to the functions $\varphi(z)= \ee^{-t z}$ and $ h(x, y) = {x + y \over 1+ y}$ with $x\in (0,1)$, we get that the function $x \mapsto \e \{ \exp ( - t { (x+\xi)/(1+\xi) \over \e [(x+\xi)/(1+\xi)]} )\}$ is non-increasing on $(0,1)$; hence for $\lambda \in [0,\, 1]$, $$\e \left\{ \exp \left( - t { (\lambda+\xi)/(1+\xi) \over \e [(\lambda+\xi)/(1+\xi)] } \right) \right\} \le \e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\}.$$ On the other hand, we take $\varphi(z)= \ee^{-t z}$ and $h(x,y) = {y \over 1+ xy}$ (for $x\in (0, 1)$) in (\[monotonie\]) to see that $x \mapsto \e \{ \exp ( - t { \xi /(1+x \xi) \over \e [\xi /(1+x\xi)] } ) \}$ is non-increasing on $(0,1)$. Therefore, $$\e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t \, { \xi \over \e (\xi)}\right) \right\} ,$$ which implies (\[exp\]).$\Box$ \[l:moment\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent non-negative random variables such that for some $a\in [1,\, 2]$, $\e(\xi_i^a)<\infty$ $(1\le i\le k)$. Then $$\e \left[ (\xi_1 + \cdots + \xi_k)^a \right] \le \sum_{i=1}^k \e(\xi_i^a) + (k-1) \left( \sum_{i=1}^k \e \xi_i \right)^a.$$ [*Proof.*]{} By induction on $k$, we only need to prove the lemma in case $k=2$.
Let $$h(t) := \e \left[ (\xi_1 + t\xi_2)^a \right] - \e(\xi_1^a) - t^a \e(\xi_2^a) - (\e \xi_1 + t \e \xi_2)^a, \qquad t\in [0,1].$$ Clearly, $h(0) = - (\e \xi_1)^a \le 0$. Moreover, $$h'(t) = a \e \left[ (\xi_1 + t\xi_2)^{a-1} \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1 + t \e \xi_2)^{a-1} \e(\xi_2) .$$ Since $(x+y)^{a-1} \le x^{a-1} + y^{a-1}$ (for $1\le a\le 2$), we have $$\begin{aligned} h'(t) &\le& a \e \left[ (\xi_1^{a-1} + t^{a-1}\xi_2^{a -1}) \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1)^{a-1} \e(\xi_2) \\ &=& a \e (\xi_1^{a-1}) \e(\xi_2) - a(\e \xi_1)^{a -1} \e(\xi_2) \le 0,\end{aligned}$$ by Jensen’s inequality (for $1\le a\le 2$). Therefore, $h \le 0$ on $[0,1]$. In particular, $h(1) \le 0$, which implies Lemma \[l:moment\].$\Box$ The following inequality, borrowed from page 82 of Petrov [@petrov], will be of frequent use. \[f:petrov\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent random variables. We assume that for any $i$, $\e(\xi_i)=0$ and $\e(|\xi_i|^a) <\infty$, where $1\le a\le 2$. Then $$\e \left( \, \left| \sum_{i=1}^k \xi_i \right| ^a \, \right) \le 2 \sum_{i=1}^k \e( |\xi_i|^a).$$ \[l:abc\] Fix $a >1$. Let $(u_j)_{j\ge 1}$ be a sequence of positive numbers, and let $(\lambda_j)_{j\ge 1}$ be a sequence of non-negative numbers. [(i)]{} If there exists some constant $c_{10}>0$ such that for all $n\ge 2$, $$u_{j+1} \le \lambda_n + u_j - c_{10}\, u_j^{a}, \qquad \forall 1\le j \le n-1,$$ then we can find a constant $c_{11}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$, such that $$u_n \le c_{11} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)}), \qquad \forall n\ge 1.$$ [(ii)]{} Fix $K>0$. Assume that $\lim_{j\to\infty} u_j=0$ and that $\lambda_n \in [0, \, {K\over n}]$ for all $n\ge 1$. 
If there exist $c_{12}>0$ and $c_{13}>0$ such that for all $n\ge 2$, $$u_{j+1} \ge \lambda_n + (1- c_{12} \lambda_n) u_j - c_{13} \, u_j^a , \qquad \forall 1 \le j \le n-1,$$ then for some $c_{14}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$ $(c_{14}$ may depend on $K)$, $$u_n \ge c_{14} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)} ), \qquad \forall n\ge 1.$$ [*Proof.*]{} (i) Put $\ell = \ell(n) := \min\{n, \, \lambda_n^{- (a-1)/a} \}$. There are two possible situations. First situation: there exists some $j_0 \in [n- \ell, n-1]$ such that $u_{j_0} \le ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$. Since $u_{j+1} \le \lambda_n + u_j$ for all $j\in [j_0, n-1]$, we have $$u_n \le (n-j_0 ) \lambda_n + u_{j_0} \le \ell \lambda_n + ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a} \le (1+ ({2 \over c_{10}})^{1/a})\, \lambda_n^{1/a},$$ which implies the desired upper bound. Second situation: $u_j > ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$, $\forall \, j \in [n- \ell, n-1]$. Then $c_{10}\, u_j^{a} > 2\lambda_n$, which yields $$u_{j+1} \le u_j - {c_{10} \over 2} u_j^a, \qquad \forall \, j \in [n- \ell, n-1].$$ Since $a>1$ and $(1-y)^{1-a} \ge 1+ (a-1) y$ (for $0< y< 1$), this yields, for $j \in [n- \ell, n-1]$, $$u_{j+1}^{1-a} \ge u_j^{1-a} \, \left( 1 - {c_{10} \over 2} u_j^{a-1} \right)^{ 1-a} \ge u_j^{ 1-a} \, \left( 1 + {c_{10} \over 2} (a-1)\, u_j^{a-1} \right) = u_j^{1-a} + {c_{10} \over 2} (a-1) .$$ Therefore, $u_n^{1-a} \ge c_{15}\, \ell$ with $c_{15}:= {c_{10} \over 2} (a-1)$. As a consequence, $u_n \le (c_{15}\, \ell)^{- 1/(a-1)} \le (c_{15})^{- 1/(a-1)} \, ( n^{- 1/(a-1)} + \lambda_n^{1/a} )$, as desired. \(ii) Let us first prove: $$\label{c7} u_n \ge c_{16}\, n^{- 1/(a-1)}.$$ To this end, let $n$ be large and define $v_j := u_j \, (1- c_{12} \lambda_n)^{ -j} $ for $1 \le j \le n$. 
Since $u_{j+1} \ge (1- c_{12} \lambda_n) u_j - c_{13} u_j^a $ and $\lambda_n \le K/n$, we get $$v_{j+1} \ge v_j - c_{13} (1- c_{12} \lambda_n)^{(a-1)j-1}\, v_j^a\ge v_j - c_{17} \, v_j^a, \qquad \forall\, 1\le j \le n-1.$$ Since $u_j \to 0$, there exists some $j_0>0$ such that for all $n>j \ge j_0$, we have $c_{17} \, v_j^{a-1} < 1/2$, and $$v_{j+1}^{1-a} \le v_j^{1-a}\, \left( 1- c_{17} \, v_j^{a-1}\right)^{1-a} \le v_j^{1-a}\, \left( 1+ c_{18} \, v_j^{a-1}\right) = v_j^{1-a} + c_{18}.$$ It follows that $v_n^{1-a} \le c_{18}\, (n-j_0) + v_{j_0}^{1-a}$, which implies (\[c7\]). It remains to show that $u_n \ge c_{19} \, \lambda_n^{1/a}$. Consider a large $n$. The function $h(x):= \lambda_n + (1- c_{12} \lambda_n) x - c_{13} x^a$ is increasing on $[0, c_{20}]$ for some fixed constant $c_{20}>0$. Since $u_j \to 0$, there exists $j_0$ such that $u_j \le c_{20}$ for all $j \ge j_0$. We claim there exists $j \in [j_0, n-1]$ such that $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$: otherwise, we would have $c_{13}\, u_j^a \le {\lambda_n\over 2} \le \lambda_n$ for all $j \in [j_0, n-1]$, and thus $$u_{j+1} \ge (1- c_{12}\, \lambda_n) u_j \ge \cdots \ge (1- c_{12}\,\lambda_n)^{j-j_0} \, u_{j_0} ;$$ in particular, $u_n \ge (1- c_{12}\, \lambda_n)^{n-j_0} \, u_{j_0}$ which would contradict the assumption $u_n \to 0$ (since $\lambda_n \le K/n$). Therefore, $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$ for some $j\ge j_0$. By monotonicity of $h(\cdot)$ on $[0, c_{20}]$, $$u_{j+1} \ge h(u_j) \ge h\left(({\lambda_n\over 2 c_{13}})^{1/a}\right) \ge ({\lambda_n\over 2 c_{13}})^{1/a},$$ the last inequality being elementary. This leads to: $u_{j+2} \ge h(u_{j+1}) \ge h(({\lambda_n\over 2 c_{13}})^{1/a} ) \ge ({\lambda_n\over 2 c_{13}})^{1/a}$. 
Iterating the procedure, we obtain: $u_n \ge ({\lambda_n\over 2 c_{13}})^{1/a}$ for all $n> j_0$, which completes the proof of the Lemma.$\Box$ Proof of Theorem \[t:nullrec\] {#s:nullrec} ============================== Let $n\ge 2$, and let as before $$\tau_n := \inf\left\{ i\ge 1: X_i \in \T_n \right\} .$$ We start with a characterization of the distribution of $\tau_n$ via its Laplace transform $\e ( \ee^{- \lambda \tau_n} )$, for $\lambda \ge 0$. To state the result, we define $\alpha_{n,\lambda}(\cdot)$, $\beta_{n,\lambda}(\cdot)$ and $\gamma_n(\cdot)$ by $\alpha_{n,\lambda}(x) = \beta_{n,\lambda} (x) = 1$ and $\gamma_n(x)=0$ (for $x\in \T_n$), and $$\begin{aligned} \alpha_{n,\lambda}(x) &=& \ee^{-\lambda} \, {\sum_{i=1}^\deg A(x_i) \alpha_{n,\lambda} (x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)}, \label{alpha} \\ \beta_{n,\lambda}(x) &=& {(1-\ee^{-2\lambda}) + \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)}, \label{beta} \\ \gamma_n(x) &=& {[1/\omega(x, {\buildrel \leftarrow \over x} )] + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_n(x_i)} , \qquad 1\le |x| < n, \label{gamma}\end{aligned}$$ where $\beta_n(\cdot) := \beta_{n,0}(\cdot)$, and for any $x\in \T$, $\{x_i\}_{1\le i\le \deg}$ stands as before for the set of children of $x$. \[p:tau\] We have, for $n\ge 2$, $$\begin{aligned} E_\omega\left( \ee^{- \lambda \tau_n} \right) &=&\ee^{-\lambda} \, {\sum_{i=1}^\deg \omega (e, e_i) \alpha_{n,\lambda} (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)}, \qquad \forall \lambda \ge 0, \label{Laplace-tau} \\ E_\omega(\tau_n) &=& {1+ \sum_{i=1}^\deg \omega(e,e_i) \gamma_n (e_i) \over \sum_{i=1}^\deg \omega(e,e_i) \beta_n(e_i)}. \label{E(tau)} \end{aligned}$$ [*Proof of Proposition \[p:tau\].*]{} Identity (\[E(tau)\]) can be found in Rozikov [@rozikov]. The proof of (\[Laplace-tau\]) is along similar lines; so we feel free to give an outline only. 
Let $g_{n, \lambda}(x) := E_\omega (\ee^{- \lambda \tau_n} \, | \, X_0=x)$. By the Markov property, $g_{n, \lambda}(x) = \ee^{-\lambda} \sum_{i=1}^\deg \omega(x, x_i)g_{n, \lambda}(x_i) + \ee^{-\lambda} \omega(x, {\buildrel \leftarrow \over x}) g_{n, \lambda}({\buildrel \leftarrow \over x})$, for $|x| < n$. By induction on $|x|$ (such that $1\le |x| \le n-1$), we obtain: $g_{n, \lambda}(x) = \ee^\lambda (1- \beta_{n, \lambda} (x)) g_{n, \lambda}({\buildrel \leftarrow \over x}) + \alpha_{n, \lambda} (x)$, from which (\[Laplace-tau\]) follows. Probabilistic interpretation: for $1\le |x| <n$, if $T_{\buildrel \leftarrow \over x} := \inf \{ k\ge 0: X_k= {\buildrel \leftarrow \over x} \}$, then $\alpha_{n, \lambda} (x) = E_\omega [ \ee^{-\lambda \tau_n} {\bf 1}_{ \{ \tau_n < T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, $\beta_{n, \lambda} (x) = 1- E_\omega [ \ee^{-\lambda (1+ T_{\buildrel \leftarrow \over x}) } {\bf 1}_{ \{ \tau_n > T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, and $\gamma_n (x) = E_\omega [ (\tau_n \wedge T_{\buildrel \leftarrow \over x}) \, | \, X_0=x]$. We do not use these identities in the paper.$\Box$ It turns out that $\beta_{n,\lambda}(\cdot)$ is closely related to Mandelbrot’s multiplicative cascade [@mandelbrot]. Let $$M_n := \sum_{x\in \T_n} \prod_{y\in ] \! ] e, \, x] \! ] } A(y) , \qquad n\ge 1, \label{Mn}$$ where $] \! ] e, \,x] \! ]$ denotes as before the shortest path relating $e$ to $x$. We mention that $(A(e_i), \, 1\le i\le \deg)$ is a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and is distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$. 
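Under $p=\E(A)={1\over \deg}$, each increment of (\[Mn\]) has mean one, so $(M_n)$ is a mean-one martingale. As an exact sanity check of $\E(M_1)=\E(M_2)=1$ (illustration only: a binary tree of depth $2$ whose six edge weights are taken i.i.d. uniform on $\{0.3, 0.7\}$, an assumed toy law with $\E(A)={1\over 2}$; in general the weights within a sibling vector need not be independent):

```python
from itertools import product

# Exact enumeration of E(M_1) and E(M_2) for the cascade
# M_n = sum over |x| = n of the product of A along the path ]]e, x]],
# on a binary tree (deg = 2) with i.i.d. edge weights uniform on {0.3, 0.7}.
VALUES = (0.3, 0.7)

m1 = m2 = 0.0
for a1, a2, a11, a12, a21, a22 in product(VALUES, repeat=6):
    p = (1 / len(VALUES)) ** 6           # each configuration is equally likely
    m1 += p * (a1 + a2)                  # M_1 = A(e_1) + A(e_2)
    m2 += p * (a1 * (a11 + a12) + a2 * (a21 + a22))

assert abs(m1 - 1.0) < 1e-9 and abs(m2 - 1.0) < 1e-9
```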
Let us recall some properties of $(M_n)$ from Theorem 2.2 of Liu [@liu00] and Theorem 2.5 of Liu [@liu01]: under the conditions $p={1\over \deg}$ and $\psi'(1)<0$, $(M_n)$ is a martingale, bounded in $L^a$ for any $a\in [1, \kappa)$; in particular, $$M_\infty := \lim_{n\to \infty} M_n \in (0, \infty), \label{cvg-M}$$ exists $\P$-almost surely and in $L^a(\P)$, and $$\E\left( \ee^{-s M_\infty} \right) \le \exp\left( - c_{21} \, s^{c_{22}}\right), \qquad \forall s\ge 1; \label{M-lowertail}$$ furthermore, if $1<\kappa< \infty$, then we also have $${c_{23}\over x^\kappa} \le \P\left( M_\infty > x\right) \le {c_{24}\over x^\kappa}, \qquad x\ge 1. \label{M-tail}$$ We now summarize the asymptotic properties of $\beta_{n,\lambda}(\cdot)$ which will be needed later on. \[p:beta-gamma\] Assume $p= {1\over \deg}$ and $\psi'(1)<0$. [(i)]{} For any $1\le i\le \deg$, $n\ge 2$, $t\ge 0$ and $\lambda \in [0, \, 1]$, we have $$\E \left\{ \exp \left[ -t \, {\beta_{n, \lambda} (e_i) \over \E[\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{\E \left( \ee^{-t\, M_n/\Theta} \right) \right\}^{1/\deg} , \label{comp-Laplace}$$ where, as before, $\Theta:= \hbox{\rm ess sup}(A) < \infty$. [(ii)]{} If $\kappa\in (2, \infty]$, then for any $1\le i\le \deg$ and all $n\ge 2$ and $\lambda \in [0, \, {1\over n}]$, $$c_{25} \left( \sqrt {\lambda} + {1\over n} \right) \le \E[\beta_{n, \lambda}(e_i)] \le c_{26} \left( \sqrt {\lambda} + {1\over n} \right). \label{E(beta):kappa>2}$$ [(iii)]{} If $\kappa\in (1,2]$, then for any $1\le i\le \deg$, when $n\to \infty$ and uniformly in $\lambda \in [0, {1\over n}]$, $$\E[\beta_{n, \lambda}(e_i)] \; \approx \; \lambda^{1/\kappa} + {1\over n^{1/(\kappa-1)}} , \label{E(beta):kappa<2}$$ where $a_n \approx b_n$ denotes as before $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$. The proof of Proposition \[p:beta-gamma\] is postponed until Section \[s:beta-gamma\]. By admitting it for the moment, we are able to prove Theorem \[t:nullrec\]. 
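The scaling in part (ii) of Proposition \[p:beta-gamma\] can be illustrated in the simplest possible environment (a toy case, not part of the proof): take $\deg=2$ and $A\equiv {1\over 2}$ deterministic, for which $\kappa=\infty$. By symmetry, $\beta_{n,\lambda}(x)$ then depends only on $|x|$, and (\[beta\]) collapses to the scalar recursion $b_0=1$, $b_{k+1}=(\theta+b_k)/(1+b_k)$ with $\theta:=1-\ee^{-2\lambda}$, where $b_k$ is the common value at level $n-k$; in particular $\beta_{n,\lambda}(e_i)=b_{n-1}$, and one sees the two regimes $1/n$ (from $\lambda=0$, where $b_k=1/(k+1)$ exactly) and $\sqrt{\theta}\sim\sqrt{2\lambda}$ (the fixed point of the recursion):

```python
import math

def beta_root_child(n, lam):
    """beta_{n,lam}(e_i) in the homogeneous environment A == 1/deg:
    b_0 = 1, b_{k+1} = (theta + b_k)/(1 + b_k), theta = 1 - exp(-2*lam)."""
    theta = 1.0 - math.exp(-2.0 * lam)
    b = 1.0
    for _ in range(n - 1):
        b = (theta + b) / (1.0 + b)
    return b

# lam = 0: the recursion solves exactly to b_k = 1/(k+1), i.e. beta_n(e_i) = 1/n,
# matching the 1/n term in the kappa > 2 estimate of Proposition [p:beta-gamma].
assert abs(beta_root_child(50, 0.0) - 1.0 / 50) < 1e-12

# lam > 0: the fixed point of b = (theta + b)/(1 + b) is sqrt(theta) ~ sqrt(2*lam),
# matching the sqrt(lambda) term in the same estimate.
lam = 1e-4
assert abs(beta_root_child(5000, lam) - math.sqrt(1 - math.exp(-2 * lam))) < 1e-3
```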
[*Proof of Theorem \[t:nullrec\].*]{} Assume $p= {1\over \deg}$ and $\psi'(1)<0$. Let $\pi$ be an invariant measure. By (\[pi\]) and the definition of $(M_n)$, $\sum_{x\in \T_n} \pi(x) \ge c_0 \, M_n$. Therefore by (\[cvg-M\]), we have $\sum_{x\in \T} \pi(x) =\infty$, $\P$-a.s., implying that $(X_n)$ is null recurrent. We proceed to prove the lower bound in (\[nullrec\]). By (\[gamma\]) and the ellipticity condition on the environment, $\gamma_n (x) \le {1\over \omega(x, {\buildrel \leftarrow \over x} )} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \le c_{27} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i)$. Iterating the argument yields $$\gamma_n (e_i) \le c_{27} \left( 1+ \sum_{j=2}^{n-1} M_j^{(e_i)}\right), \qquad n\ge 3,$$ where $$M_j^{(e_i)} := \sum_{x\in \T_j} \prod_{y\in ] \! ] e_i, x] \! ]} A(y).$$ For future use, we also observe that $$\label{defMei1} M_n= \sum_{i=1}^\deg \, A(e_i) \, M^{(e_i)}_n, \qquad n\ge 2.$$ Let $1\le i\le \deg$. Since $(M_j^{(e_i)}, \, j\ge 2)$ is distributed as $(M_{j-1}, \, j\ge 2)$, it follows from (\[cvg-M\]) that $M_j^{(e_i)}$ converges (when $j\to \infty$) almost surely, which implies $\gamma_n (e_i) \le c_{28}(\omega) \, n$. Plugging this into (\[E(tau)\]), we see that for all $n\ge 3$, $$E_\omega \left( \tau_n \right) \le {c_{29}(\omega) \, n \over \sum_{i=1}^\deg \omega(e,e_i) \beta_n(e_i)} \le {c_{30}(\omega) \, n \over \beta_n(e_1)}, \label{toto2}$$ the last inequality following from the ellipticity assumption on the environment. We now bound $\beta_n(e_1)$ from below (for large $n$). Let $1\le i\le \deg$. By (\[comp-Laplace\]), for $\lambda \in [0,\, 1]$ and $s\ge 0$, $$\E \left\{ \exp \left[ -s \, {\beta_{n, \lambda} (e_i) \over \E [\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{ \E \left( \ee^{-s \, M_n/\Theta} \right) \right\}^{1/\deg} \le \left\{ \E \left(\ee^{-s \, M_\infty/\Theta} \right) \right\}^{1/\deg} ,$$ where, in the last inequality, we used the fact that $(M_n)$ is a uniformly integrable martingale. 
Let $\varepsilon>0$. Applying (\[M-lowertail\]) to $s:= n^{\varepsilon}$, we see that $$\sum_n \E \left\{ \exp \left[ -n^{\varepsilon} {\beta_{n, \lambda} (e_i) \over \E[\beta_{n, \lambda} (e_i)]} \right] \right\} <\infty . \label{toto3}$$ In particular, $\sum_n \exp [ -n^{\varepsilon} {\beta_n (e_1) \over \E [\beta_n (e_1)]} ]$ is $\P$-almost surely finite (by taking $\lambda=0$; recalling that $\beta_n (\cdot) := \beta_{n, 0} (\cdot)$). Thus, for $\P$-almost all $\omega$ and all sufficiently large $n$, $\beta_n (e_1) \ge n^{-\varepsilon} \, \E [\beta_n (e_1)]$. Going back to (\[toto2\]), we see that for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega \left( \tau_n \right) \le {c_{30}(\omega) \, n^{1+\varepsilon} \over \E [\beta_n (e_1)]}.$$ Let $m(n):= \lfloor {n^{1+2\varepsilon} \over \E [\beta_n (e_1)]} \rfloor$. By Chebyshev’s inequality, for $\P$-almost all $\omega$ and all sufficiently large $n$, $P_\omega ( \tau_n \ge m(n) ) \le c_{31}(\omega) \, n^{-\varepsilon}$. Considering the subsequence $n_k:= \lfloor k^{2/\varepsilon}\rfloor$, we see that $\sum_k P_\omega ( \tau_{n_k} \ge m(n_k) )< \infty$, $\P$-a.s. By the Borel–Cantelli lemma, for $\P$-almost all $\omega$, we have $P_\omega$-almost surely, for all sufficiently large $k$, $\tau_{n_k} < m(n_k)$, which implies that for $n\in [n_{k-1}, n_k]$ and large $k$, we have $\tau_n < m(n_k) \le {n_k^{1+2\varepsilon} \over \E [\beta_{n_k} (e_1)]} \le {n^{1+3\varepsilon} \over \E [\beta_n(e_1)]}$ (the last inequality following from the estimate of $\E [\beta_n(e_1)]$ in Proposition \[p:beta-gamma\]). In view of Proposition \[p:beta-gamma\], and since $\varepsilon$ can be taken arbitrarily small, this gives the lower bound in (\[nullrec\]) of Theorem \[t:nullrec\]. To prove the upper bound, we note that $\alpha_{n,\lambda}(x) \le \beta_n(x)$ for any $\lambda\ge 0$ and any $0<|x|\le n$ (this is easily checked by induction on $|x|$).
Thus, by (\[Laplace-tau\]), for any $\lambda\ge 0$, $$E_\omega\left( \ee^{- \lambda \tau_n} \right) \le {\sum_{i=1}^\deg \omega (e, e_i) \beta_n (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)} \le \sum_{i=1}^\deg {\beta_n (e_i) \over \beta_{n,\lambda} (e_i)}.$$ We now fix $r\in (1, \, {1\over \nu})$, where $\nu:= 1- {1\over \min\{ \kappa, \, 2\} }$ is defined in (\[theta\]). It is possible to choose a small $\varepsilon>0$ such that $${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon \quad \hbox{if }\kappa \in (1, \, 2], \qquad 1 - {r\over 2}> 3\varepsilon \quad \hbox{if }\kappa \in (2, \, \infty].$$ Let $\lambda = \lambda(n) := n^{-r}$. By (\[toto3\]), we have $\beta_{n,n^{-r}} (e_i) \ge n^{-\varepsilon}\, \E [\beta_{n,n^{-r}} (e_i)]$ for $\P$-almost all $\omega$ and all sufficiently large $n$, which yields $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^\varepsilon \sum_{i=1}^\deg {\beta_n (e_i) \over \E [\beta_{n, n^{-r}} (e_i)]} .$$ It is easy to bound $\beta_n (e_i)$. For any given $x\in \T \backslash \{ e\}$ with $|x|\le n$, $n\mapsto \beta_n (x)$ is non-increasing (this is easily checked by induction on $|x|$). Chebyshev’s inequality, together with the Borel–Cantelli lemma (applied to a subsequence, as we did in the proof of the lower bound) and the monotonicity of $n\mapsto \beta_n(e_i)$, readily yields $\beta_n (e_i) \le n^\varepsilon \, \E [\beta_n (e_i)]$ for almost all $\omega$ and all sufficiently large $n$. 
As a consequence, for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^{2\varepsilon} \sum_{i=1}^\deg {\E [\beta_n (e_i)] \over \E [\beta_{n, n^{-r}} (e_i)]} .$$ By Proposition \[p:beta-gamma\], this yields $E_\omega ( \ee^{- n^{-r} \tau_n} ) \le n^{-\varepsilon}$ (for $\P$-almost all $\omega$ and all sufficiently large $n$; this is where we use ${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon$ if $\kappa \in (1, \, 2]$, and $1 - {r\over 2}> 3\varepsilon$ if $\kappa \in (2, \, \infty]$). In particular, for $n_k:= \lfloor k^{2/\varepsilon} \rfloor$, we have $\P$-almost surely, $E_\omega ( \sum_k \ee^{- n_k^{-r} \tau_{n_k}} ) < \infty$, which implies that, $\p$-almost surely for all sufficiently large $k$, $\tau_{n_k} \ge n_k^r$. This implies that $\p$-almost surely for all sufficiently large $n$, $\tau_n \ge {1\over 2}\, n^r$. The upper bound in (\[nullrec\]) of Theorem \[t:nullrec\] follows.$\Box$ Proposition \[p:beta-gamma\] is proved in Section \[s:beta-gamma\]. Proof of Proposition \[p:beta-gamma\] {#s:beta-gamma} ===================================== Let $\theta \in [0,\, 1]$. Let $(Z_{n,\theta})$ be a sequence of random variables, such that $Z_{1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i$, where $(A_i, \, 1\le i\le \deg)$ is distributed as $(A(x_i), \, 1\le i\le \deg)$ (for any $x\in \T$), and that $$Z_{j+1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i {\theta + Z_{j,\theta}^{(i)} \over 1+ Z_{j,\theta}^{(i)} } , \qquad \forall\, j\ge 1, \label{ZW}$$ where $Z_{j,\theta}^{(i)}$ (for $1\le i \le \deg$) are independent copies of $Z_{j,\theta}$, and are independent of the random vector $(A_i, \, 1\le i\le \deg)$. 
Then, for any given $n\ge 1$ and $\lambda\ge 0$, $$Z_{n, 1-\ee^{-2\lambda}} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i\, \beta_{n, \lambda}(e_i) , \label{Z=beta}$$ provided $(A_i, \, 1\le i\le \deg)$ and $(\beta_{n, \lambda}(e_i), \, 1\le i\le \deg)$ are independent. \[p:concentration\] Assume $p={1\over \deg}$ and $\psi'(1)<0$. Let $\kappa$ be as in $(\ref{kappa})$. For all $a\in (1, \kappa) \cap (1, 2]$, we have $$\sup_{\theta \in [0,1]} \sup_{j\ge 1} {\E \left[ (Z_{j,\theta})^a \right] \over (\E Z_{j,\theta})^a} < \infty.$$ [*Proof of Proposition \[p:concentration\].*]{} Let $a\in (1,2]$. Conditioning on $A_1$, $\dots$, $A_\deg$, we can apply Lemma \[l:moment\] to see that $$\begin{aligned} &&\E \left[ \left( \, \sum_{i=1}^\deg A_i {\theta+ Z_{j,\theta}^{(i)} \over 1+ Z_{j,\theta}^{(i)} } \right)^a \Big| A_1, \dots, A_\deg \right] \\ &\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} }\right)^a \; \right] + (\deg-1) \left[ \sum_{i=1}^\deg A_i\, \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a \\ &\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a,\end{aligned}$$ where $c_{32}$ depends on $a$, $\deg$ and the bound of $A$ (recalling that $A$ is bounded away from 0 and infinity).
Taking expectation on both sides, and in view of (\[ZW\]), we obtain: $$\E[(Z_{j+1,\theta})^a] \le \deg \E(A^a) \E \left[ \left( {\theta+ Z_{j,\theta}\over 1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a.$$ We divide by $(\E Z_{j+1,\theta})^a = [ \E({\theta+Z_{j,\theta}\over 1+ Z_{j,\theta} })]^a$ on both sides, to see that $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[ ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })^a] \over [\E ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } + c_{32}.$$ Put $\xi = \theta+ Z_{j,\theta}$. By (\[RSD\]), we have $${\E[ ({\theta+Z_{j,\theta} \over 1+Z_{j,\theta} })^a] \over [\E ({\theta+Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } = {\E[ ({\xi \over 1- \theta+ \xi })^a] \over [\E ({ \xi \over 1- \theta+ \xi })]^a } \le {\E[\xi^a] \over [\E \xi ]^a } .$$ Applying Lemma \[l:moment\] to $k=2$ yields that $\E[\xi^a] = \E[( \theta+ Z_{j,\theta} )^a] \le \theta^a + \E[( Z_{j,\theta} )^a] + (\theta + \E( Z_{j,\theta} ))^a $. It follows that ${\E[ \xi^a] \over [\E \xi ]^a } \le {\E[ (Z_{j,\theta})^a] \over [\E Z_{j,\theta}]^a } +2$, which implies that for $j\ge 1$, $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[(Z_{j,\theta})^a]\over (\E Z_{j,\theta})^a} + (2 \deg \E(A^a)+ c_{32}).$$ Thus, if $\deg \E(A^a)<1$ (which is the case if $1<a<\kappa$), then $$\sup_{j\ge 1} {\E[ (Z_{j,\theta})^a] \over (\E Z_{j,\theta})^a} < \infty,$$ uniformly in $\theta \in [0, \, 1]$.$\Box$ We now turn to the proof of Proposition \[p:beta-gamma\]. For the sake of clarity, the proofs of (\[comp-Laplace\]), (\[E(beta):kappa>2\]) and (\[E(beta):kappa<2\]) are presented in three distinct parts.
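The proof above makes repeated use of Lemma \[l:moment\] (with $k=2$). An exact sanity check of that inequality on two-point laws (illustration only; the laws below are arbitrary choices):

```python
# Exact check of Lemma [l:moment] for k = 2 and a = 1.5:
#   E[(xi1 + xi2)^a] <= E(xi1^a) + E(xi2^a) + (E xi1 + E xi2)^a
# for independent non-negative xi1, xi2 with discrete two-point laws.
a = 1.5
law1 = ((1.0, 0.5), (3.0, 0.5))   # (value, probability) pairs for xi1
law2 = ((2.0, 0.4), (5.0, 0.6))   # (value, probability) pairs for xi2

def moment(law, r):
    """E(xi^r) for a discrete law given as ((value, probability), ...)."""
    return sum(p * v ** r for v, p in law)

lhs = sum(p1 * p2 * (v1 + v2) ** a for v1, p1 in law1 for v2, p2 in law2)
rhs = moment(law1, a) + moment(law2, a) + (moment(law1, 1) + moment(law2, 1)) ** a
assert 0 < lhs <= rhs
```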
Proof of (\[comp-Laplace\]) {#subs:beta} --------------------------- By (\[exp\]) and (\[ZW\]), we have, for all $\theta\in [0, \, 1]$ and $j\ge 1$, $$\E \left\{ \exp\left( - t \, { Z_{j+1, \theta} \over \E (Z_{j+1, \theta})}\right) \right\} \le \E \left\{ \exp\left( - t \sum_{i=1}^\deg A_i { Z^{(i)}_{j, \theta} \over \E (Z^{(i)}_{j, \theta}) }\right) \right\}, \qquad t\ge 0.$$ Let $f_j(t) := \E \{ \exp ( - t { Z_{j, \theta} \over \E Z_{j, \theta}} )\}$ and $g_j(t):= \E (\ee^{ -t\, M_j})$ (for $j\ge 1$). We have $$f_{j+1}(t) \le \E \left( \prod_{i=1}^\deg f_j(t A_i) \right), \quad j\ge 1.$$ On the other hand, by (\[defMei1\]), $$g_{j+1}(t) = \E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) M^{(e_i)}_{j+1} \right) \right\} = \E \left( \prod_{i=1}^\deg g_j(t A_i) \right), \qquad j\ge 1.$$ Since $f_1(\cdot)= g_1(\cdot)$, it follows by induction on $j$ that for all $j\ge 1$, $f_j(t) \le g_j(t)$; in particular, $f_n(t) \le g_n(t)$. We take $\theta = 1- \ee^{-2\lambda}$. In view of (\[Z=beta\]), we have proved that $$\E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) {\beta_{n, \lambda}(e_i) \over \E [\beta_{n, \lambda}(e_i)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} , \label{beta_n(e)}$$ which yields (\[comp-Laplace\]).$\Box$ [**Remark.**]{} Let $$\beta_{n,\lambda}(e) := {(1-\ee^{-2\lambda})+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i) \over 1+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i)}.$$ By (\[beta\_n(e)\]) and (\[exp\]), if $\E(A)= {1\over \deg}$, then for $\lambda\ge 0$, $n\ge 1$ and $t\ge 0$, $$\E \left\{ \exp\left( - t {\beta_{n, \lambda}(e) \over \E [\beta_{n, \lambda}(e)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} .$$ Proof of (\[E(beta):kappa>2\]) {#subs:kappa>2} --------------------------------- Assume $p={1\over \deg}$ and $\psi'(1)<0$.
Since $Z_{j, \theta}$ is bounded uniformly in $j$, we have, by (\[ZW\]), for $1\le j \le n-1$, $$\begin{aligned} \E(Z_{j+1, \theta}) &=& \E\left( {\theta+Z_{j, \theta} \over 1+Z_{j, \theta} } \right) \nonumber \\ &\le& \E\left[(\theta+ Z_{j, \theta} )(1 - c_{33}\, Z_{j, \theta} )\right] \nonumber \\ &\le & \theta + \E(Z_{j, \theta}) - c_{33}\, \E\left[(Z_{j, \theta})^2\right] \label{E(Z2)} \\ &\le & \theta + \E(Z_{j, \theta}) - c_{33}\, \left[ \E Z_{j, \theta} \right]^2. \nonumber\end{aligned}$$ By Lemma \[l:abc\], we have, for any $K>0$ and uniformly in $\theta\in [0, \, {K\over n}]$, $$\label{53} \E (Z_{n, \theta}) \le c_{34} \left( \sqrt {\theta} + {1\over n} \right) \le {c_{35} \over \sqrt{n}}.$$ We mention that this holds for all $\kappa \in (1, \, \infty]$. In view of (\[Z=beta\]), this yields the upper bound in (\[E(beta):kappa>2\]). To prove the lower bound, we observe that $$\E(Z_{j+1, \theta}) \ge \E\left[(\theta+ Z_{j, \theta} )(1 - Z_{j, \theta} )\right] = \theta+ (1-\theta) \E(Z_{j, \theta}) - \E\left[(Z_{j, \theta})^2\right] . \label{51}$$ If furthermore $\kappa \in (2, \infty]$, then $\E [(Z_{j, \theta})^2 ] \le c_{36}\, (\E Z_{j, \theta})^2$ (see Proposition \[p:concentration\]). Thus, for all $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{36}\, (\E Z_{j,\theta})^2 .$$ By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of (\[Z=beta\]) and Lemma \[l:abc\] readily yields the lower bound in (\[E(beta):kappa>2\]).$\Box$ Proof of (\[E(beta):kappa<2\]) {#subs:kappa<2} --------------------------------- We assume in this part $p={1\over \deg}$, $\psi'(1)<0$ and $1<\kappa \le 2$. Let $\varepsilon>0$ be small.
Since $(Z_{j, \theta})$ is bounded, we have $\E[(Z_{j, \theta})^2] \le c_{37} \, \E [(Z_{j, \theta})^{\kappa-\varepsilon}]$, which, by Proposition \[p:concentration\], implies $$\E\left[ (Z_{j, \theta})^2 \right] \le c_{38} \, \left( \E Z_{j, \theta} \right)^{\kappa- \varepsilon} . \label{c38}$$ Therefore, (\[51\]) yields that $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{38} \, (\E Z_{j, \theta})^{\kappa-\varepsilon} .$$ By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of Lemma \[l:abc\] implies that for any $K>0$, $$\E (Z_{\ell, \theta}) \ge c_{14} \left( \theta^{1/(\kappa-\varepsilon)} + {1\over \ell^{1/(\kappa -1 - \varepsilon)}} \right), \qquad \forall \, \theta\in [0, \, {K\over n}], \; \; \forall \, 1\le \ell \le n. \label{ell}$$ The lower bound in (\[E(beta):kappa<2\]) follows from (\[Z=beta\]). It remains to prove the upper bound. Define $$Y_{j, \theta} := {Z_{j, \theta} \over \E(Z_{j, \theta})} , \qquad 1\le j\le n.$$ We take $Z_{j-1, \theta}^{(x)}$ (for $x\in \T_1$) to be independent copies of $Z_{j-1, \theta}$, and independent of $(A(x), \; x\in \T_1)$.
By (\[ZW\]), for $2\le j\le n$, $$\begin{aligned} Y_{j, \theta} &\; {\buildrel law \over =} \;& \sum_{x\in \T_1} A(x) {(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1, \theta}^{(x)}) \over \E [(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1, \theta}^{(x)}) ]} \ge \sum_{x\in \T_1} A(x) {Z_{j-1, \theta}^{(x)} / (1+ Z_{j-1, \theta}^{(x)}) \over \theta+ \E [Z_{j-1, \theta}]} \\ &=& { \E [Z_{j-1, \theta}]\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - { \E [Z_{j-1, \theta}]\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2/\E(Z_{j-1, \theta}) \over 1+Z_{j-1, \theta}^{(x)}} \\ &\ge& \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta} \; ,\end{aligned}$$ where $$\begin{aligned} Y_{j-1, \theta}^{(x)} &:=&{Z_{j-1, \theta}^{(x)} \over \E(Z_{j-1, \theta})} , \\ \Delta_{j-1, \theta} &:=&{\theta\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} + \sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2 \over \E(Z_{j-1, \theta})} .\end{aligned}$$ By (\[c38\]), $\E[ {(Z_{j-1, \theta}^{(x)})^2 \over \E(Z_{j-1, \theta})}]\le c_{38}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$ for each $x\in \T_1$. On the other hand, by (\[ell\]), $\E(Z_{j-1, \theta}) \ge c_{14}\, \theta^{1/(\kappa-\varepsilon)}$ for $2\le j \le n$, and thus ${\theta\over \theta+ \E [Z_{j-1, \theta}]} \le c_{39}\, (\E Z_{j-1, \theta})^{\kappa-1- \varepsilon}$. As a consequence, $\E( \Delta_{j-1, \theta} ) \le c_{40}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$. If we write $\xi \; {\buildrel st. \over \ge} \; \eta$ to denote that $\xi$ is stochastically greater than or equal to $\eta$, then we have proved that $Y_{j, \theta} \; {\buildrel st. \over \ge} \; \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta}$. Applying the same argument to each of $(Y_{j-1, \theta}^{(x)}, \, x\in \T_1)$, we see that, for $3\le j\le n$, $$Y_{j, \theta} \; {\buildrel st.
\over \ge} \; \sum_{u\in \T_1} A(u) \sum_{v\in \T_2: \; u={\buildrel \leftarrow \over v}} A(v) Y_{j-2, \theta}^{(v)} - \left( \Delta_{j-1, \theta}+ \sum_{u\in \T_1} A(u) \Delta_{j-2, \theta}^{(u)} \right) ,$$ where $Y_{j-2, \theta}^{(v)}$ (for $v\in \T_2$) are independent copies of $Y_{j-2, \theta}$, and are independent of $(A(w), \, w\in \T_1 \cup \T_2)$, and $(\Delta_{j-2, \theta}^{(u)}, \, u\in \T_1)$ are independent of $(A(u), \, u\in \T_1)$ and are such that $\e[\Delta_{j-2, \theta}^{(u)}] \le c_{40}\, (\E Z_{j-2, \theta})^{\kappa-1-\varepsilon}$. By induction, we arrive at: for $j>m \ge 1$, $$Y_{j, \theta} \; {\buildrel st. \over \ge}\; \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) Y_{j-m, \theta}^{(x)} - \Lambda_{j,m,\theta}, \label{Yn>}$$ where $Y_{j-m, \theta}^{(x)}$ (for $x\in \T_m$) are independent copies of $Y_{j-m, \theta}$, and are independent of the random vector $(A(w), \, 1\le |w| \le m)$, and $\E(\Lambda_{j,m,\theta}) \le c_{40}\, \sum_{\ell=1}^m (\E Z_{j-\ell, \theta})^{\kappa-1-\varepsilon} $. Since $\E(Z_{i, \theta}) = \E({\theta+ Z_{i-1, \theta} \over 1+ Z_{i-1, \theta}}) \ge \E(Z_{i-1, \theta}) - \E[(Z_{i-1, \theta})^2] \ge \E(Z_{i-1, \theta}) - c_{38}\, [\E Z_{i-1, \theta} ]^{\kappa-\varepsilon}$ (by (\[c38\])), we have, for all $j\in (j_0, n]$ (with a large but fixed integer $j_0$) and $1\le \ell \le j-j_0$, $$\begin{aligned} \E(Z_{j, \theta}) &\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell \left\{ 1- c_{38}\, [\E Z_{j-i, \theta} ]^{\kappa-1-\varepsilon}\right\} \\ &\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell \left\{ 1- c_{41}\, (j-i)^{-(\kappa-1- \varepsilon)/2}\right\} ,\end{aligned}$$ the last inequality being a consequence of (\[53\]). Thus, for $j\in (j_0, n]$ and $1\le \ell \le j^{(\kappa-1-\varepsilon)/2}$, $\E(Z_{j, \theta}) \ge c_{42}\, \E(Z_{j-\ell, \theta})$, which implies that for all $m\le j^{(\kappa-1-\varepsilon)/2}$, $\E(\Lambda_{j,m, \theta}) \le c_{43} \, m (\E Z_{j, \theta})^{\kappa-1-\varepsilon}$. 
By Chebyshev’s inequality, for $j\in (j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P\left\{ \Lambda_{j,m, \theta} > \varepsilon r\right\} \le {c_{43} \, m (\E Z_{j, \theta})^{\kappa -1-\varepsilon} \over \varepsilon r}. \label{toto4}$$ Let us go back to (\[Yn>\]), and study the behaviour of $\sum_{x\in \T_m} ( \prod_{y\in ]\! ] e, x ]\! ]} A(y) ) Y_{j-m, \theta}^{(x)}$. Let $M^{(x)}$ (for $x\in \T_m$) be independent copies of $M_\infty$ and independent of all other random variables. Since $\E(Y_{j-m, \theta}^{(x)})= \E(M^{(x)})=1$, we have, by Fact \[f:petrov\], for any $a\in (1, \, \kappa)$, $$\begin{aligned} &&\E \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a \right\} \\ &\le&2 \E \left\{ \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right) \, \E\left( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a \right) \right\}.\end{aligned}$$ By Proposition \[p:concentration\] and the fact that $(M_n)$ is a martingale bounded in $L^a$, we have $\E ( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a ) \le c_{44}$. Thus, $$\begin{aligned} \E \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a \right\} &\le& 2c_{44} \E \left\{ \sum_{x\in \T_m} \prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right\} \\ &=& 2c_{44} \, \deg^m \, [\E(A^a)]^m.\end{aligned}$$ By Chebyshev’s inequality, $$\P \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right| > \varepsilon r\right\} \le {2c_{44} \, \deg^m [\E(A^a)]^m \over \varepsilon^a r^a}. \label{toto6}$$ Clearly, $\sum_{x\in \T_m} (\prod_{y\in ]\! ] e, x ]\! ]} A(y) ) M^{(x)}$ is distributed as $M_\infty$.
We can thus plug (\[toto6\]) and (\[toto4\]) into (\[Yn>\]), to see that for $j\in [j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P \left\{ Y_{j, \theta} > (1-2\varepsilon) r\right\} \ge \P \left\{ M_\infty > r\right\} - {c_{43}\, m (\E Z_{j, \theta})^{\kappa-1- \varepsilon} \over \varepsilon r} - {2c_{44} \, \deg^m [\E(A^a)]^m \over \varepsilon^a r^a} . \label{Yn-lb}$$ We choose $m:= \lfloor j^\varepsilon \rfloor$. Since $a\in (1, \, \kappa)$, we have $\deg \E(A^a) <1$, so that $\deg^m [\E(A^a)]^m \le \exp( - j^{\varepsilon/2})$ for all large $j$. We choose $r= {1\over (\E Z_{j, \theta})^{1- \delta}}$, with $\delta := {4\kappa \varepsilon \over \kappa -1}$. In view of (\[M-tail\]), we obtain: for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{23} \, (\E Z_{j, \theta})^{(1- \delta) \kappa} - {c_{43}\over \varepsilon} \, j^\varepsilon\, (\E Z_{j, \theta})^{\kappa-\varepsilon-\delta} - {2c_{44} \, (\E Z_{j, \theta})^{(1- \delta)a} \over \varepsilon^a \exp(j^{\varepsilon/2})} .$$ Since $c_{14}/j^{1/(\kappa-1- \varepsilon)} \le \E(Z_{j, \theta}) \le c_{35}/j^{1/2}$ (see (\[ell\]) and (\[53\]), respectively), we can pick $\varepsilon$ sufficiently small, so that for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge {c_{23} \over 2} \, (\E Z_{j, \theta})^{(1-\delta) \kappa}.$$ Recall that by definition, $Y_{j, \theta} = {Z_{j, \theta} \over \E(Z_{j, \theta})}$. Therefore, for $j\in [j_0, n]$, $$\E[(Z_{j, \theta})^2] \ge [\E Z_{j, \theta}]^2 \, {(1-2\varepsilon)^2\over (\E Z_{j, \theta})^{2(1- \delta)}} \P \left\{ Y_{j, \theta} > {1-2\varepsilon \over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{45} \, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta}.$$ Of course, the inequality holds trivially for $0\le j < j_0$ (with possibly a different value of the constant $c_{45}$).
Plugging this into (\[E(Z2)\]), we see that for $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \le \theta + \E(Z_{j, \theta}) - c_{46}\, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta} .$$ By Lemma \[l:abc\], this yields $\E(Z_{n, \theta}) \le c_{47} \, \{ \theta^{1/[\kappa+ (2- \kappa)\delta]} + n^{- 1/ [\kappa -1 + (2- \kappa)\delta]}\}$. An application of (\[Z=beta\]) implies the desired upper bound in (\[E(beta):kappa<2\]).$\Box$ [**Remark.**]{} A close inspection of our argument shows that under the assumptions $p= {1\over \deg}$ and $\psi'(1)<0$, we have, for any $1\le i \le \deg$ and uniformly in $\lambda \in [0, \, {1\over n}]$, $$\left( {\alpha_{n, \lambda}(e_i) \over \E[\alpha_{n, \lambda}(e_i)]} ,\; {\beta_{n, \lambda}(e_i) \over \E[\beta_{n, \lambda}(e_i)]} , \; {\gamma_n(e_i) \over \E[\gamma_n (e_i)]} \right) \; {\buildrel law \over \longrightarrow} \; (M_\infty, \, M_\infty, \, M_\infty),$$ where “${\buildrel law \over \longrightarrow}$" stands for convergence in distribution, and $M_\infty$ is the random variable defined in $(\ref{cvg-M})$.$\Box$ [**Acknowledgements**]{} We are grateful to Philippe Carmona and Marc Yor for helpful discussions. [99]{} Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. [*Ann. Math. Statist.*]{} [**23**]{}, 493–507. Duquesne, T. and Le Gall, J.-F. (2002). [*Random Trees, Lévy Processes and Spatial Branching Processes.*]{} Astérisque [**281**]{}. Société Mathématique de France, Paris. Griffeath, D. and Liggett, T.M. (1982). Critical phenomena for Spitzer’s reversible nearest particle systems. [*Ann. Probab.*]{} [**10**]{}, 881–895. Harris, T.E. (1963). [*The Theory of Branching Processes.*]{} Springer, Berlin. Hoel, P., Port, S. and Stone, C. (1972). [*Introduction to Stochastic Processes.*]{} Houghton Mifflin, Boston. Kesten, H., Kozlov, M.V. and Spitzer, F. (1975). A limit law for random walk in a random environment. [*Compositio Math.*]{} [**30**]{}, 145–168.
Le Gall, J.-F. (2005). Random trees and applications. [*Probab. Surveys*]{} [**2**]{}, 245–311. Liggett, T.M. (1985). [*Interacting Particle Systems.*]{} Springer, New York. Liu, Q.S. (2000). On generalized multiplicative cascades. [*Stoch. Proc. Appl.*]{} [**86**]{}, 263–286. Liu, Q.S. (2001). Asymptotic properties and absolute continuity of laws stable by random weighted mean. [*Stoch. Proc. Appl.*]{} [**95**]{}, 83–107. Lyons, R. and Pemantle, R. (1992). Random walk in a random environment and first-passage percolation on trees. [*Ann. Probab.*]{} [**20**]{}, 125–136. Lyons, R. and Peres, Y. (2005+). [*Probability on Trees and Networks.*]{} (Forthcoming book) [http://mypage.iu.edu/\~rdlyons/prbtree/prbtree.html]{} Mandelbrot, B. (1974). Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire. [*C. R. Acad. Sci. Paris*]{} [**278**]{}, 289–292. Menshikov, M.V. and Petritis, D. (2002). On random walks in random environment on trees and their relationship with multiplicative chaos. In: [*Mathematics and Computer Science II (Versailles, 2002)*]{}, pp. 415–422. Birkhäuser, Basel. Pemantle, R. (1995). Tree-indexed processes. [*Statist. Sci.*]{} [**10**]{}, 200–213. Pemantle, R. and Peres, Y. (1995). Critical random walk in random environment on trees. [*Ann. Probab.*]{} [**23**]{}, 105–140. Pemantle, R. and Peres, Y. (2005+). The critical Ising model on trees, concave recursions and nonlinear capacity. [ArXiv:math.PR/0503137.]{} Peres, Y. (1999). Probability on trees: an introductory climb. In: [*École d’Été St-Flour 1997*]{}, Lecture Notes in Mathematics [**1717**]{}, pp. 193–280. Springer, Berlin. Petrov, V.V. (1995). [*Limit Theorems of Probability Theory.*]{} Clarendon Press, Oxford. Rozikov, U.A. (2001). Random walks in random environments on the Cayley tree. [*Ukrainian Math. J.*]{} [**53**]{}, 1688–1702. Sinai, Ya.G. (1982). The limit behavior of a one-dimensional random walk in a random environment. [*Theory Probab. 
Appl.*]{} [**27**]{}, 247–258. Sznitman, A.-S. (2005+). Random motions in random media. (Lecture notes of minicourse at Les Houches summer school.) [http://www.math.ethz.ch/u/sznitman/]{} Zeitouni, O. (2004). Random walks in random environment. In: [*École d’Été St-Flour 2001*]{}, Lecture Notes in Mathematics [**1837**]{}, pp. 189–312. Springer, Berlin.

Yueyun Hu\
Département de Mathématiques\
Université Paris XIII\
99 avenue J-B Clément\
F-93430 Villetaneuse\
France

Zhan Shi\
Laboratoire de Probabilités et Modèles Aléatoires\
Université Paris VI\
4 place Jussieu\
F-75252 Paris Cedex 05\
France
--- abstract: 'Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances, two of which also consist of a large number of features. The results show that our algorithms were superior to the non-distributed version in terms of both time efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.' address: - | Systems Engineering Department,\ National Autonomous University of Honduras. Blvd. Suyapa, Tegucigalpa, Honduras - | Department of Computer Science, University of Alcalá\ Alcalá de Henares, 28871 Madrid, Spain - | Department of Computer Science, University of A Coruña\ Campus de Elviña s/n 15071 - A Coruña, Spain author: - 'Raul-Jose Palma-Mendoza' - 'Luis de-Marcos' - Daniel Rodriguez - 'Amparo Alonso-Betanzos' title: 'Distributed Correlation-Based Feature Selection in Spark' --- feature selection, scalability, big data, apache spark, cfs, correlation

Introduction {#sec:intro}
============

In recent years, the advent of big data has raised unprecedented challenges for all types of organizations and researchers in many fields. Wu et al.
[@XindongWu2014], however, state that the big data revolution has come to us not only with many challenges but also with plenty of opportunities for those organizations and researchers willing to embrace them. Data mining is one field where the opportunities offered by big data can be embraced, and, as indicated by Leskovec et al. [@Leskovec2014mining], the main challenge is to extract useful information or knowledge from these huge data volumes that enables us to predict or better understand the phenomena involved in the generation of the data. Feature selection (FS) is a dimensionality reduction technique that has emerged as an important step in data mining. According to Guyon and Elisseeff [@Guyon2003], its purpose is twofold: to select relevant attributes and simultaneously to discard redundant attributes. This purpose has become even more important nowadays, as vast quantities of data need to be processed in all kinds of disciplines. Practitioners also face the challenge of not having enough computational resources. In a review of the most widely used FS methods, Bolón-Canedo et al. [@Bolon-Canedo2015b] conclude that there is a growing need for scalable and efficient FS methods, given that the existing methods are likely to prove inadequate for handling the increasing number of features encountered in big data. Depending on their relationship with the classification process, FS methods are commonly classified in one of three main categories: (i) filter methods, (ii) wrapper methods, or (iii) embedded methods. *Filters* rely solely on the characteristics of the data and, since they are independent of any learning scheme, they require less computational effort. They have been shown to be important preprocessing techniques, with many applications such as churn prediction [@Idris2012; @Idris2013] and microarray data classification.
In microarray data classification, filters obtain better, or at least comparable, accuracy to wrappers [@Bolon-Canedo2015a]. In *wrapper* methods, the final subset selection is based on a learning algorithm that is repeatedly trained with the data. Although wrappers tend to increase the final accuracy of the learning scheme, they are usually more computationally expensive than the other two approaches. Finally, in *embedded* methods, FS is part of the classification process, e.g., as happens with decision trees. Another important classification of FS methods, according to their results, is as (i) ranker algorithms or (ii) subset selector algorithms. With *rankers*, the result is a sorted set of the original features. The order of this returned set is defined according to the quality that the FS method determines for each feature. Some rankers also assign a weight to each feature that provides more information about its quality. *Subset selectors* return a non-ordered subset of features from the original set so that together they yield the highest possible quality according to some given measure. Subset selectors, therefore, consist of a search procedure and an evaluation measure. This can be considered an advantage in many cases, as rankers usually evaluate features individually and leave it to the user to select the number of top features in a ranking. One filter-based subset selector method is the Correlation-Based Feature Selection (CFS) algorithm [@Hall2000], traditionally considered useful due to its ability not only to reduce dimensionality but also to improve classification algorithm performance. However, the CFS algorithm, like many other multivariate FS algorithms, has a time complexity of $\mathcal{O}(m^2 \cdot n)$, where $m$ is the number of features and $n$ is the number of instances. This quadratic complexity in the number of features makes CFS very sensitive to the *curse of dimensionality* [@bellman1957dynamic].
Therefore, a scalable adaptation of the original algorithm is required to be able to apply the CFS algorithm to datasets that are large both in number of instances and dimensions. As a response to the big data phenomenon, many technologies and programming frameworks have appeared with the aim of helping data mining practitioners design new strategies and algorithms that can tackle the challenge of distributing work over clusters of computers. One such tool that has recently received much attention is Apache Spark [@Zaharia2010], which represents a new programming model that is a superset of the MapReduce model introduced by Google [@Dean2004a; @Dean2008]. One of Spark’s strongest advantages over the traditional MapReduce model is its ability to efficiently handle the iterative algorithms that frequently appear in the data mining and machine learning fields. We describe two distributed and parallel versions of the original CFS algorithm for classification problems using the Apache Spark programming model. The main difference between them is how the data is distributed across the cluster, i.e., using a horizontal partitioning scheme (hp) or using a vertical partitioning scheme (vp). We compare the two versions – DiCFS-hp and DiCFS-vp, respectively – and also compare them with a baseline, represented by the classical non-distributed implementation of CFS in WEKA [@Hall2009a]. Finally, their benefits in terms of reduced execution time are compared with those of the CFS version developed by Eiras-Franco et al. [@Eiras-Franco2016] for regression problems. The results show that the time-efficiency and scalability of our two versions are an improvement on those of the original version of the CFS; furthermore, similar or improved execution times are obtained with respect to the Eiras-Franco et al. [@Eiras-Franco2016] regression version.
In the interest of reproducibility, our software and sources are available as a Spark package[^1] called DiCFS, with a corresponding mirror in Github.[^2] The rest of this paper is organized as follows. Section \[sec:stateofart\] summarizes the most important contributions in the area of distributed and parallel FS and proposes a classification according to how parallelization is carried out. Section \[sec:cFS\] describes the original CFS algorithm, including its theoretical foundations. Section \[sec:spark\] presents the main aspects of the Apache Spark computing framework, focusing on those relevant to the design and implementation of our proposed algorithms. Section \[sec:diCFS\] describes and discusses our DiCFS-hp and DiCFS-vp versions of the CFS algorithm. Section \[sec:experiments\] describes our experiments to compare results for DiCFS-hp and DiCFS-vp, the WEKA approach and the Eiras-Franco et al. [@Eiras-Franco2016] approach. Finally, conclusions and future work are outlined in Section \[sec:conclusions\].

Background and Related Work {#sec:stateofart}
===========================

As might be expected, filter-based FS algorithms have asymptotic complexities that depend on the number of features and/or instances in a dataset. Many algorithms, such as the CFS, have quadratic complexities, while the most frequently used algorithms have at least linear complexities [@Bolon-Canedo2015b]. This is why, in recent years, many attempts have been made to achieve more scalable FS methods. In what follows, we analyse recent work on the design of new scalable FS methods according to parallelization approaches: (i) search-oriented, (ii) dataset-split-oriented, or (iii) filter-oriented. *Search-oriented* parallelizations account for most approaches, in that the main aspects to be parallelized are (i) the search guided by a classifier and (ii) the corresponding evaluation of the resulting models. We classify the following studies in this category:

- Kubica et al.
[@Kubica2011] developed parallel versions of three forward-search-based FS algorithms, where a wrapper with a logistic regression classifier is used to guide a search parallelized using the MapReduce model.

- García et al. [@Garcia_aparallel] presented a simple approach for parallel FS, based on selecting random feature subsets and evaluating them in parallel using a classifier. In their experiments they used a support vector machine (SVM) classifier and, in comparing their results with those for a traditional wrapper approach, found lower accuracies but also much shorter computation times.

- Wang et al. [@Wang2016] used the Spark computing model to implement an FS strategy for classifying network traffic. They first implemented an initial FS using the Fisher score filter [@duda2012pattern] and then performed, using a wrapper approach, a distributed forward search over the best $m$ features selected. Since the Fisher filter was used, however, only numerical features could be handled.

- Silva et al. [@Silva2017] addressed the FS scaling problem using an asynchronous search approach, given that synchronous search, as commonly performed, can lead to efficiency losses due to the inactivity of some processors waiting for other processors to end their tasks. In their tests, they first obtained an initial reduction using a mutual information (MI) [@Peng2005] filter and then evaluated subsets using a random forest (RF) [@Ho1995] classifier. However, as stated by those authors, any other approach could be used for subset evaluation.

*Dataset-split-oriented* approaches have the main characteristic that parallelization is performed by splitting the dataset vertically or horizontally, then applying existing algorithms to the parts and finally merging the results following certain criteria. We classify the following studies in this category:

- Peralta et al. [@Peralta2015] used the MapReduce model to implement a wrapper-based evolutionary search FS method.
The dataset was split by instances and the FS method was applied to each resulting subset. Simple majority voting was used as a reduction step for the selected features and the final subset of features was selected according to a user-defined threshold. All tests were carried out using the EPSILON dataset, which we also use here (see Section \[sec:experiments\]).

- Bolón-Canedo et al. [@Bolon-Canedo2015a] proposed a framework to deal with high dimensionality data by first optionally ranking features using an FS filter, then partitioning vertically by dividing the data according to features (columns) rather than, as commonly done, according to instances (rows). After partitioning, another FS filter is applied to each partition, and finally, a merging procedure guided by a classifier obtains a single set of features. The authors experiment with five commonly used FS filters for the partitions, namely, CFS [@Hall2000], Consistency [@Dash2003], INTERACT [@Zhao2007], Information Gain [@Quinlan1986] and ReliefF [@Kononenko1994], and with four classifiers for the final merging, namely, C4.5 [@Quinlan1992], Naive Bayes [@rish2001empirical], $k$-Nearest Neighbors [@Aha1991] and SVM [@vapnik1995nature], showing that their approach significantly reduces execution times while maintaining and, in some cases, even improving accuracy.

Finally, *filter-oriented* methods include redesigned or new filter methods that are, or become, inherently parallel. Unlike the methods in the other categories, parallelization in this category can be viewed as an internal, rather than external, element of the algorithm. We classify the following studies in this category:

- Zhao et al. [@Zhao2013a] described a distributed parallel FS method based on a variance preservation criterion using the proprietary software SAS High-Performance Analytics.
[^3] One remarkable characteristic of the method is its support not only for supervised FS, but also for unsupervised FS where no label information is available. Their experiments were carried out with datasets with both high dimensionality and a high number of instances.

- Ramírez-Gallego et al. [@Ramirez-Gallego2017] described scalable versions of the popular mRMR [@Peng2005] FS filter that included a distributed version using Spark. The authors showed that their version that leveraged the power of a cluster of computers could perform much faster than the original and processed much larger datasets.

- In a previous work [@Palma-Mendoza2018], using the Spark computing model we designed a distributed version of the ReliefF [@Kononenko1994] filter, called DiReliefF. In testing using datasets with large numbers of features and instances, it was much more efficient and scalable than the original filter.

- Finally, Eiras-Franco et al. [@Eiras-Franco2016], using four distributed FS algorithms, three of them filters, namely, InfoGain [@Quinlan1986], ReliefF [@Kononenko1994] and the CFS [@Hall2000], reduce execution times with respect to the original versions. However, in the CFS case, the version of those authors focuses on regression problems where all the features, including the class label, are numerical, with correlations calculated using the Pearson coefficient. A completely different approach is required to design a parallel version for classification problems where correlations are based on information theory.

The approach described here can be categorized as a *filter-oriented* approach that builds on works described elsewhere [@Ramirez-Gallego2017], [@Palma-Mendoza2018], [@Eiras-Franco2016].
The fact that their focus was not only on designing an efficient and scalable FS algorithm, but also on preserving the original behaviour (and obtaining the same final results) of traditional filters, means that research focused on those filters is also valid for adapted versions. Another important issue in relation to filters is that, since they are generally more efficient than wrappers, they are often the only feasible option due to the abundance of data. It is worth mentioning that scalable filters could feasibly be included in any of the methods mentioned in the *search-oriented* and *dataset-split-oriented* categories, where an initial filtering step is implemented to improve performance.

Correlation-Based Feature Selection (CFS) {#sec:cFS}
=========================================

The CFS method, originally developed by Hall [@Hall2000], is categorized as a subset selector because it evaluates subsets rather than individual features. For this reason, the CFS needs to perform a search over candidate subsets, but since performing a full search over all possible subsets is prohibitive (due to the exponential complexity of the problem), a heuristic has to be used to guide a partial search. This heuristic is the main concept behind the CFS algorithm and, as a filter method, the CFS does not use a classification-derived measure, but rather applies a principle derived from Ghiselli’s test theory [@ghiselli1964theory], i.e., *good feature subsets contain features highly correlated with the class, yet uncorrelated with each other*. This principle is formalized in Equation (\[eq:heuristic\]) where $M_s$ represents the merit assigned by the heuristic to a subset $s$ that contains $k$ features, $\overline{r_{cf}}$ represents the average of the correlations between each feature in $s$ and the class attribute, and $\overline{r_{ff}}$ is the average correlation between each of the $\begin{psmallmatrix}k\\2\end{psmallmatrix}$ possible feature pairs in $s$.
The numerator can be interpreted as an indicator of how predictive the feature set is and the denominator can be interpreted as an indicator of how redundant features in $s$ are. $$\label{eq:heuristic} M_s = \frac { k\cdot \overline { r_{cf} } }{ \sqrt { k + k (k - 1) \cdot \overline{ r_{ff}} } }$$ Equation (\[eq:heuristic\]) also posits the second important concept underlying the CFS, which is the computation of correlations to obtain the required averages. In classification problems, the CFS uses the symmetrical uncertainty (SU) measure [@press1982numerical] shown in Equation (\[eq:su\]), where $H$ represents the entropy function of a single or conditioned random variable, as shown in Equation (\[eq:entropy\]). This calculation adds a requirement for the dataset before processing, which is that all non-discrete features must be discretized. By default, this process is performed using the discretization algorithm proposed by Fayyad and Irani [@Fayyad1993]. $$\label{eq:su} SU = 2 \cdot \left[ \frac { H(X) - H(X|Y) }{ H(Y) + H(X) } \right]$$ $$\begin{aligned} \label{eq:entropy} H(X) &=-\sum _{ x\in X }{ p(x)\log _{2}{p(x)} } \nonumber \\ H(X | Y) &=-\sum _{ y\in Y }{ p(y) } \sum_{x \in X}{p(x |y) \log _{ 2 }{ p(x | y) } } \end{aligned}$$ The third core CFS concept is its search strategy. By default, the CFS algorithm uses a best-first search to explore the search space. The algorithm starts with an empty set of features and at each step of the search all possible single feature expansions are generated. The new subsets are evaluated using Equation (\[eq:heuristic\]) and are then added to a priority queue according to merit. In the subsequent iteration, the best subset from the queue is selected for expansion in the same way as was done for the first empty subset. If expanding the best subset fails to produce an improvement in the overall merit, this counts as a *fail* and the next best subset from the queue is selected. 
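As a concrete illustration of Equations (\[eq:heuristic\])–(\[eq:entropy\]), the following is a minimal Python sketch (ours, not the WEKA implementation) that computes SU and the merit of a subset for a toy, already-discretized dataset:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy H(X), in bits, of a list of discrete values."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def symmetrical_uncertainty(x, y):
    """SU = 2 * (H(X) - H(X|Y)) / (H(X) + H(Y)), using H(X|Y) = H(X,Y) - H(Y)."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))  # joint entropy H(X, Y)
    if hx + hy == 0:
        return 0.0
    return 2.0 * (hx - (hxy - hy)) / (hx + hy)

def merit(subset, klass, features):
    """CFS merit M_s = k * avg(r_cf) / sqrt(k + k(k-1) * avg(r_ff))."""
    k = len(subset)
    r_cf = sum(symmetrical_uncertainty(features[f], klass) for f in subset) / k
    pairs = [(f, g) for i, f in enumerate(subset) for g in subset[i + 1:]]
    r_ff = (sum(symmetrical_uncertainty(features[f], features[g])
                for f, g in pairs) / len(pairs)) if pairs else 0.0
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Toy example: feature 0 predicts the class perfectly, feature 1 is noise.
features = {0: [0, 0, 1, 1], 1: [0, 1, 0, 1]}
klass = [0, 0, 1, 1]
```

Here the merit of the subset `{0}` exceeds that of `{0, 1}`: adding the noisy feature lowers $\overline{r_{cf}}$ without any compensating gain.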
By default, the CFS uses five consecutive fails as a stopping criterion and as a limit on queue length. The final CFS element is an optional post-processing step. As stated before, the CFS tends to select feature subsets with low redundancy and high correlation with the class. However, in some cases, extra features that are *locally predictive* in a small area of the instance space may exist that can be leveraged by certain classifiers [@Hall1999]. To include these features in the subset after the search, the CFS can optionally use a heuristic that enables inclusion of all features whose correlation with the class is higher than the correlation between the features themselves and with features already selected. Algorithm \[alg:cFS\] summarizes the main aspects of the CFS.

$Corrs := $ correlations between all features with the class \[lin:allCorrs\]\
$BestSubset := \emptyset$\
$Queue.setCapacity(5)$\
$Queue.add(BestSubset)$\
$NFails := 0$\
while $NFails < 5$:\
  $HeadState := Queue.dequeue$\
  $NewSubsets := evaluate(expand(HeadState), Corrs)$ \[lin:expand\]\
  $Queue.add(NewSubsets)$\
  $LocalBest := Queue.head$\
  if $LocalBest$ improves on $BestSubset$: $BestSubset := LocalBest$; $NFails := 0$\
  else: $NFails := NFails + 1$\
return $BestSubset$

The Spark Cluster Computing Model {#sec:spark}
=================================

The following short description of the main concepts behind the Spark computing model focuses exclusively on aspects that complete the conceptual basis for our DiCFS proposal in Section \[sec:diCFS\]. The main concept behind the Spark model is what is known as the resilient distributed dataset (RDD). Zaharia et al. [@Zaharia2010; @Zaharia2012] defined an RDD as a read-only collection of objects, i.e., a dataset partitioned and distributed across the nodes of a cluster. The RDD has the ability to automatically recover lost partitions through a lineage record that knows the origin of the data and possible calculations done.
Even more relevant for our purposes is the fact that operations run for an RDD are automatically parallelized by the Spark engine; this abstraction frees the programmer from having to deal with threads, locks and all other complexities of traditional parallel programming. With respect to the cluster architecture, Spark follows the master-slave model. Through a cluster manager (master), a driver program can access the cluster and coordinate the execution of a user application by assigning tasks to the executors, i.e., programs that run in worker nodes (slaves). By default, only one executor is run per worker. Regarding the data, RDD partitions are distributed across the worker nodes, and the number of tasks launched by the driver for each executor is set according to the number of RDD partitions residing in the worker. Two types of operations can be executed on an RDD, namely, actions and transformations. Of the *actions*, which allow results to be obtained from a Spark cluster, perhaps the most important is $collect$, which returns an array with all the elements in the RDD. This operation has to be done with care, to avoid exceeding the maximum memory available to the driver. Other important actions include $reduce$, $sum$, $aggregate$ and $sample$, but as they are not used by us here, we will not explain them. *Transformations* are mechanisms for creating an RDD from another RDD. Since RDDs are read-only, a transformation creating a new RDD does not affect the original RDD. A basic transformation is $mapPartitions$, which receives, as a parameter, a function that can handle all the elements of a partition and return another collection of elements to conform a new partition. The $mapPartitions$ transformation is applied to all partitions in the RDD to obtain a new transformed RDD. Since received and returned partitions do not need to match in size, $mapPartitions$ can thus reduce or increase the overall size of an RDD. 
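To make the semantics of these partition-level operations concrete, here is a minimal pure-Python mock (plain lists standing in for distributed partitions; an illustration, not the Spark API): it mimics $mapPartitions$ and the key-based aggregation performed by $reduceByKey$, described next.

```python
def map_partitions(partitions, func):
    """Apply func to each partition's iterator; partitions may change size."""
    return [list(func(iter(p))) for p in partitions]

def reduce_by_key(partitions, func):
    """Aggregate (key, value) pairs with an associative, commutative func."""
    acc = {}
    for part in partitions:
        for k, v in part:
            acc[k] = func(acc[k], v) if k in acc else v
    return acc

# Example: word counts spread over two "partitions".
parts = [[("a", 1), ("b", 1)], [("a", 1), ("a", 1)]]
counts = reduce_by_key(parts, lambda x, y: x + y)  # {"a": 3, "b": 1}
doubled = map_partitions([[1, 2], [3]], lambda it: (x * 2 for x in it))
```

In real Spark, the aggregation additionally requires a *shuffle* so that equal keys meet on the same node; the mock above hides that cost.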
Another interesting transformation is $reduceByKey$; this can only be applied to what is known as a $PairRDD$, which is an RDD whose elements are key-value pairs, where the keys do not have to be unique. The $reduceByKey$ transformation is used to aggregate the elements of an RDD, which it does by applying a commutative and associative function that receives two values of the PairRDD as arguments and returns one element of the same type. This reduction is applied by key, i.e., elements with the same key are reduced such that the final result is a PairRDD with unique keys, whose corresponding values are the result of the reduction. Other important transformations (which we do not explain here) are $map$, $flatMap$ and $filter$. Another key concept in Spark is *shuffling*, which refers to the data communication required for certain types of transformations, such as the above-mentioned $reduceByKey$. Shuffling is a costly operation because it requires redistribution of the data in the partitions, and therefore, data read and write across all nodes in the cluster. For this reason, shuffling operations are minimized as much as possible. The final concept underpinning our proposal is *broadcasting*, which is a useful mechanism for efficiently sharing read-only data between all worker nodes in a cluster. Broadcast data is dispatched from the driver throughout the network and is thus made available to all workers in a deserialized fast-to-access form.

Distributed Correlation-Based Feature Selection (DiCFS) {#sec:diCFS}
=======================================================

We now describe the two algorithms that make up our proposal. They represent alternative distributed versions that use different partitioning strategies to process the data. We start with some considerations common to both approaches. As stated previously, CFS has a time complexity of $\mathcal{O}(m^2 \cdot n)$, where $m$ is the number of features and $n$ is the number of instances.
This complexity derives from the first step shown in Algorithm \[alg:cFS\], the calculation of $\begin{psmallmatrix}m+ 1\\2\end{psmallmatrix}$ correlations between all pairs of features including the class, and the fact that for each pair, $\mathcal{O}(n)$ operations are needed in order to calculate the entropies. Thus, to develop a scalable version, our main focus in parallelization design must be on the calculation of correlations. Another important issue is that, although the original study by Hall [@Hall2000] stated that all correlations had to be calculated before the search, this is only a true requirement when a backward best-first search is performed. In the case of the search shown in Algorithm \[alg:cFS\], correlations can be calculated on demand, i.e., on each occasion a new non-evaluated pair of features appears during the search. In fact, trying to calculate all correlations in any dataset with a high number of features and instances is prohibitive; the tests performed on the datasets described in Section \[sec:experiments\] show that a very low percentage of correlations is actually used during the search and also that on-demand correlation calculation is around $100$ times faster when the default number of five maximum fails is used. Below we describe our two alternative methods for calculating these correlations in a distributed manner depending on the type of partitioning used.

Horizontal Partitioning {#subsec:horizontalPart}
-----------------------

Horizontal partitioning of the data may be the most natural way to distribute work between the nodes of a cluster. If we consider the default layout where the data is represented as a matrix $D$ in which the columns represent the different features and the rows represent the instances, then it is natural to distribute the matrix by assigning different groups of rows to nodes in the cluster. If we represent this matrix as an RDD, this is exactly what Spark will automatically do.
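Returning to the correlation count mentioned above: the number of $\begin{psmallmatrix}m+1\\2\end{psmallmatrix}$ pairs grows quadratically with $m$, which is what makes precomputation prohibitive. A quick illustration in Python:

```python
from math import comb

def n_correlations(m):
    """Number of pairwise correlations among m features plus the class."""
    return comb(m + 1, 2)

# With 4 features there are comb(5, 2) == 10 correlations;
# with 100000 features the count is already over 5 * 10**9.
```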
Once the data is partitioned, Algorithm \[alg:cFS\] (omitting line \[lin:allCorrs\]) can be started on the driver. The distributed work will be performed on line \[lin:expand\], where the best subset in the queue is expanded and, depending on this subset and the state of the search, a number $nc$ of correlations for new pairs of features will be required to evaluate the resulting subsets. Thus, the most complex step is the calculation of the corresponding $nc$ contingency tables that will allow us to obtain the entropies and conditional entropies that make up the symmetrical uncertainty correlation (see Equation (\[eq:su\])). These $nc$ contingency tables are partially calculated locally by the workers following Algorithm \[alg:localCTables\]. As can be observed, the algorithm loops through all the local rows, counting the values of the features contained in *pairs* (declared in line \[lin:pairs\]) and storing the results in a map holding the feature pairs as keys and the contingency tables as their matching values. The next step is to merge the contingency tables from all the workers to obtain global results. Since these tables hold simple value counts, they can easily be aggregated by performing an element-wise sum of the corresponding tables. These steps are summarized in Equation (\[eq:cTables\]), where $CTables$ is an RDD of keys and values, and where each key corresponds to a feature pair and each value to a contingency table.
Algorithm \[alg:localCTables\] (executed locally on each partition):

- $pairs \leftarrow$ $nc$ pairs of features \[lin:pairs\]
- $rows \leftarrow$ local rows of $partition$
- $m \leftarrow$ number of columns (features in $D$)
- $ctables \leftarrow$ a map from each pair to an empty contingency table
- for each row $r$ in $rows$ and each pair $(x,y)$ in $pairs$: $ctables(x,y)(r(x),r(y))$ += $1$
- return $ctables$

$$\begin{aligned} \label{eq:cTables} pairs &= \left \{ (feat_a, feat_b), \cdots, (feat_x, feat_y) \right \} \nonumber \\ nc &= \left | pairs \right | \nonumber \\ CTables &= D.mapPartitions(localCTables(pairs)).reduceByKey(sum) \nonumber \\ CTables &= \begin{bmatrix} ((feat_a, feat_b), ctable_{a,b})\\ \vdots \\ ((feat_x, feat_y), ctable_{x,y})\\ \end{bmatrix}_{nc \times 1} \nonumber \\\end{aligned}$$ Once the contingency tables have been obtained, the calculation of the entropies and conditional entropies is straightforward, since all the information necessary for each calculation is contained in a single row of the $CTables$ RDD. This calculation can therefore be performed in parallel by processing the local rows of this RDD. Once the distributed calculation of the correlations is complete, control returns to the driver, which continues execution of line \[lin:expand\] in Algorithm \[alg:cFS\]. As can be observed, the distributed work only happens when new correlations are needed, and this occurs in only two cases: (i) when new pairs of features need to be evaluated during the search, and (ii) at the end of the execution, if the user requests the addition of locally predictive features. To sum up, every iteration in Algorithm \[alg:cFS\] expands the current best subset and obtains a group of subsets for evaluation. This evaluation requires a merit value for each subset, which is obtained as shown in Figure \[fig:horizontalPartResume\]; the figure illustrates the most important steps of the horizontal partitioning scheme for a case where the correlations between features f2 and f1 and between f2 and f3 are calculated in order to evaluate a subset. 
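The mapPartitions/reduceByKey pipeline of Equation (\[eq:cTables\]) can be emulated on a single machine in plain Python. The sketch below is our own simplification (names and toy data are illustrative, not the actual DiCFS code): each "partition" produces local contingency tables for the requested pairs, and the tables are then merged by element-wise sum:

```python
from collections import Counter
from functools import reduce

def local_ctables(pairs):
    """Per-partition step: build, for each feature pair, a contingency
    table counting co-occurring value pairs (mimics localCTables)."""
    def build(partition):
        tables = {pair: Counter() for pair in pairs}
        for row in partition:                  # loop over local rows only
            for (x, y) in pairs:
                tables[(x, y)][(row[x], row[y])] += 1
        return tables
    return build

def merge(t1, t2):
    """reduceByKey(sum) analogue: element-wise sum of matching tables."""
    return {pair: t1[pair] + t2[pair] for pair in t1}

# A toy dataset with 3 features (columns 0..2), split into two "partitions".
partitions = [
    [(0, 1, 0), (1, 1, 0)],
    [(0, 0, 0), (1, 1, 1)],
]
pairs = [(0, 1), (0, 2)]
ctables = reduce(merge, map(local_ctables(pairs), partitions))
```

Here `Counter` addition plays the role of the element-wise sum performed by `reduceByKey(sum)`; in DiCFS only the pairs requested by the current search step would be passed in, keeping the distributed work on demand.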
![Horizontal partitioning steps for a small dataset D to obtain the correlations needed to evaluate a features subset[]{data-label="fig:horizontalPartResume"}](fig01.eps){width="100.00000%"} Vertical Partitioning {#subsec:vecticalPart} --------------------- Vertical partitioning has already been proposed in Spark by Ramírez-Gallego et al. [@Ramirez-Gallego2017], using another important FS filter, mRMR. Although mRMR is a ranking algorithm (it does not select subsets), it also requires the calculation of information theory measures such as entropies and conditional entropies between features. Since data is distributed horizontally by Spark, those authors propose two main operations to perform the vertical distribution: - *Columnar transformation*. Rather than use the traditional format whereby the dataset is viewed as a matrix whose columns represent features and rows represent instances, a transposed version is used in which the data represented as an RDD is distributed by features and not by instances, in such a way that the data for a specific feature will in most cases be stored and processed by the same node. Figure \[fig:columnarTrans\], based on Ramírez-Gallego et al. [@Ramirez-Gallego2017], explains the process using an example based on a dataset with two partitions, seven instances and four features. - *Feature broadcasting*. Because features must be processed in pairs to calculate conditional entropies and because different features can be stored in different nodes, some features are broadcast over the cluster so all nodes can access and evaluate them along with the other stored features. 
![Example of a columnar transformation of a small dataset with two partitions, seven instances and four features (from [@Ramirez-Gallego2017])[]{data-label="fig:columnarTrans"}](fig02.eps){width="100.00000%"} In the case of the adapted mRMR [@Ramirez-Gallego2017], since every step in the search requires the comparison of a single feature with a group of remaining features, it proves efficient, at each step, to broadcast this single feature (rather than multiple features). In the case of the CFS, the key observation is that, at any point in the search where expansion is performed, if the size of the subset being expanded is $k$, then the correlations between the $m-k$ remaining features and $k-1$ of the features in that subset have already been calculated in previous steps; consequently, only the correlations between the most recently added feature and the $m-k$ remaining features are missing. Therefore, the proposed operations can be applied efficiently in the CFS simply by broadcasting the most recently added feature. The disadvantages of vertical partitioning are that (i) it requires an extra processing step to change the original layout of the data, and this step requires shuffling; (ii) it needs data transmission to broadcast a single feature in each search step; and (iii) the fact that, by default, the dataset is divided into a number of partitions equal to the number of features $m$ may not be optimal for all cases (while this parameter can be tuned, it can never exceed $m$). The main advantage of vertical partitioning is that the data layout and the broadcasting of the compared feature move all the information needed to calculate the contingency table to the same node, which means that this information can be processed locally and more efficiently. Another advantage is that the whole dataset does not need to be read every time a new set of features has to be compared, since the dataset can be filtered by rows to process only the required features. 
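The columnar transformation described above can be emulated on a single machine; the following sketch (our own illustration, not Spark code) rearranges an instance-partitioned dataset into a feature-keyed layout, which is the essential effect of the transposition in Figure \[fig:columnarTrans\]:

```python
def columnar_transform(partitions):
    """Rearrange an instance-partitioned dataset (lists of rows) into a
    feature-keyed layout: feature index -> list of that feature's values,
    so that all data for one feature can live on a single node."""
    columns = {}
    for partition in partitions:
        for row in partition:
            for j, value in enumerate(row):
                columns.setdefault(j, []).append(value)
    return columns

# Two "partitions", four instances, three features.
parts = [[(1, 2, 3), (4, 5, 6)], [(7, 8, 9), (0, 1, 2)]]
cols = columnar_transform(parts)
```

In Spark this rearrangement is precisely what requires a shuffle; once done, comparing the most recently added feature against the remaining ones only needs that single feature to be broadcast.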
Due to the nature of the search strategy (best-first) used in the CFS, the first search step will always involve all features, so no filtering can be performed. For each subsequent step, only one more feature per step can be filtered out. This is especially important with high-dimensionality datasets: the fact that the number of features is much higher than the number of search steps means that the percentage of features that can be filtered out is reduced. We performed a number of experiments to quantify the effects of the advantages and disadvantages of each approach and to check the conditions in which one approach was better than the other. Experiments {#sec:experiments} =========== The experiments tested and compared time-efficiency and scalability for the horizontal and vertical DiCFS approaches so as to check whether they improved on the original non-distributed version of the CFS. We also tested and compared execution times with those reported in the recently published research by Eiras-Franco et al. [@Eiras-Franco2016] into a distributed version of CFS for regression problems. Note that no experiments were needed to compare the quality of the results for the distributed and non-distributed CFS versions, as the distributed versions were designed to return the same results as the original algorithm. For our experiments, we used a single master node and up to ten slave nodes from the big data platform of the Galician Supercomputing Technological Centre (CESGA). [^4] The nodes have the following configuration: - CPU: 2 X Intel Xeon E5-2620 v3 @ 2.40GHz - CPU Cores: 12 (2X6) - Total Memory: 64 GB - Network: 10GbE - Master Node Disks: 8 X 480GB SSD SATA 2.5" MLC G3HS - Slave Node Disks: 12 X 2TB NL SATA 6Gbps 3.5" G2HS - Java version: OpenJDK 1.8 - Spark version: 1.6 - Hadoop (HDFS) version: 2.7.1 - WEKA version: 3.8.1 The experiments were run on four large-scale publicly available datasets. 
The ECBDL14 [@Bacardit2012] dataset, from the protein structure prediction field, was used in the ECBDL14 Big Data Competition included in the GECCO'2014 international conference. This dataset has approximately 33.6 million instances, 631 attributes and 2 classes, consists of approximately 98% negative examples and occupies about 56GB of disk space. HIGGS [@Sadowski2014], from the UCI Machine Learning Repository [@Lichman2013], is a recent dataset representing a classification problem that distinguishes between a signal process which produces Higgs bosons and a background process which does not. KDDCUP99 [@Ma2009] represents data from network connections and classifies them as normal connections or different types of attacks (a multi-class problem). Finally, EPSILON is an artificial dataset built for the Pascal Large Scale Learning Challenge in 2008.[^5] Table \[tbl:datasets\] summarizes the main characteristics of the datasets. [P[1in]{}P[0.7in]{}P[0.7in]{}P[0.7in]{}P[0.7in]{}]{} Dataset & No. of Samples ($\times 10^{6}$) & No. of Features & Feature Types & Problem Type\ ECBDL14 [@Bacardit2012] & $\sim$33.6 & 632 & Numerical, Categorical & Binary\ HIGGS [@Sadowski2014] & 11 & 28 & Numerical & Binary\ KDDCUP99 [@Ma2009] & $\sim$5 & 42 & Numerical, Categorical & Multiclass\ EPSILON & 1/2 & 2,000 & Numerical & Binary\ With respect to algorithm parameter configuration, two defaults were used in all the experiments: the inclusion of locally predictive features and the use of five consecutive fails as a stopping criterion. These defaults apply to both distributed and non-distributed versions. Moreover, for the vertical partitioning version, the number of partitions was set equal to the number of features, as is the default in Ramírez-Gallego et al. [@Ramirez-Gallego2017]. The horizontally and vertically distributed versions of the CFS are labelled DiCFS-hp and DiCFS-vp, respectively. 
We first compared execution times for the four algorithms on the datasets using ten slave nodes with all their cores available. For the non-distributed version of the CFS, we used the implementation provided in the WEKA platform [@Hall2009a]. The results are shown in Figure \[fig:execTimeVsNInsta\]. ![Execution time with respect to percentages of instances in four datasets, for DiCFS-hp and DiCFS-vp using ten nodes and for a non-distributed implementation in WEKA using a single node[]{data-label="fig:execTimeVsNInsta"}](fig03.eps){width="100.00000%"} Note that, with the aim of offering a comprehensive view of execution time behaviour, Figure \[fig:execTimeVsNInsta\] shows results for sizes larger than 100% of the datasets. To achieve these sizes, the instances in each dataset were duplicated as many times as necessary. Note also that, since ECBDL14 is a very large dataset, its temporal scale is different from that of the other datasets. Regarding the non-distributed version of the CFS, Figure \[fig:execTimeVsNInsta\] does not show results for WEKA in the experiments on the ECBDL14 dataset, because it was impossible to execute that version in the CESGA platform due to memory requirements exceeding the available limits. This also occurred with the larger samples from the EPSILON dataset for both algorithms: DiCFS-vp and DiCFS-hp. Although it was possible to execute the WEKA version with the two smallest samples from the EPSILON dataset, these results are not shown because the execution times were too high (19 and 69 minutes, respectively). Figure \[fig:execTimeVsNInsta\] shows successful results for the smaller HIGGS and KDDCUP99 datasets, which could still be processed in a single node of the cluster, as required by the non-distributed version. However, even for these smaller datasets, the execution times of the WEKA version were worse than those of the distributed versions. 
Regarding the distributed versions, DiCFS-vp was unable to process the oversized versions of the ECBDL14 dataset, due to the large amounts of memory required to perform shuffling. The HIGGS and KDDCUP99 datasets, however, showed an increasing difference in favor of DiCFS-hp, due to the fact that these datasets have far fewer features than ECBDL14 and EPSILON. As mentioned earlier, DiCFS-vp ties parallelization to the number of features in the dataset, so datasets with small numbers of features were not able to fully leverage the cluster nodes. Another view of the same issue is given by the results for the EPSILON dataset; in this case, DiCFS-vp obtained the best execution times for the 300% sized and larger datasets. This was because there were too many partitions (2,000) for the number of instances available in datasets smaller than the 300% size; further experiments showed that adjusting the number of partitions to 100 reduced the execution time of DiCFS-vp for the 100% EPSILON dataset from about 2 minutes to 1.4 minutes (faster than DiCFS-hp). Reducing the number of partitions further, however, caused the execution time to start increasing again. Figure \[fig:execTimeVsNFeats\] shows the results for similar experiments, except that this time the percentage of features in the datasets was varied and the features were copied to obtain oversized versions of the datasets. It can be observed that the number of features had a greater impact on the memory requirements of DiCFS-vp, which caused problems not only in processing the ECBDL14 dataset but also the EPSILON dataset. We can also see the quadratic time complexity in the number of features and how the temporal scale of the EPSILON dataset (with the highest number of dimensions) matches that of the ECBDL14 dataset. 
As for the KDDCUP99 dataset, the results show that increasing the number of features enabled a better level of parallelization, with DiCFS-vp slightly outperforming DiCFS-hp for the 400% dataset version and above. ![Execution times with respect to different percentages of features in four datasets for DiCFS-hp and DiCFS-vp[]{data-label="fig:execTimeVsNFeats"}](fig04.eps){width="100.00000%"} An important measure of the scalability of an algorithm is *speed-up*, which indicates how well an algorithm can leverage a growing number of nodes to reduce execution times. We used the speed-up definition shown in Equation (\[eq:speedup\]) and used all the available cores for each node (i.e., 12). The experimental results are shown in Figure \[fig:speedup\], where it can be observed that, for all four datasets, DiCFS-hp scales better than DiCFS-vp. It can also be observed that the HIGGS and KDDCUP99 datasets are too small to take advantage of more than two nodes, and that practically no speed-up improvement is obtained from increasing this number. To summarize, our experiments show that even when vertical partitioning results in shorter execution times (as is the case in certain circumstances, e.g., when the dataset has an adequate number of features and instances for optimal parallelization according to the cluster resources), the benefits are not significant and may even be eclipsed by the effort invested in determining whether this approach is indeed the most efficient for a particular dataset or hardware configuration, or in fine-tuning the number of partitions. Horizontal partitioning should therefore be considered the best option in the general case. 
$$\label{eq:speedup} speedup(m)=\frac{\text{execution time on 2 nodes}}{\text{execution time on $m$ nodes}}$$ ![Speed-up for four datasets for DiCFS-hp and DiCFS-vp[]{data-label="fig:speedup"}](fig05.eps){width="100.00000%"} We also compared the DiCFS-hp approach with that of Eiras-Franco et al. [@Eiras-Franco2016], who described a Spark-based distributed version of the CFS for regression problems. The comparison was based on their experiments with the HIGGS and EPSILON datasets, but using our current hardware. Those datasets were selected because they only have numerical features and so can naturally be treated as regression problems. Table \[tbl:speedUp\] shows execution time and speed-up values obtained for different sizes of both datasets for the distributed and non-distributed versions, considering them both as classification and as regression problems. The regression-oriented Spark and WEKA versions are labelled RegCFS and RegWEKA, respectively; the number after the dataset name represents the sample size, and the letter indicates whether instances (*i*) or features (*f*) were removed or added. In the case of oversized samples, the method used was the same as described above, i.e., features or instances were copied as necessary. The experiments were performed using ten cluster nodes for the distributed versions and a single node for the WEKA version. The resulting speed-up was calculated as the WEKA execution time divided by the corresponding Spark execution time. The original experiments in [@Eiras-Franco2016] were performed only using EPSILON\_50i and HIGGS\_100i. It can be observed that a much better speed-up was obtained by the DiCFS-hp version for EPSILON\_50i, but in the case of HIGGS\_100i, the resulting speed-up of the classification version was lower than that of the regression version. 
However, in order to provide a fuller comparison, two more versions of each dataset were considered. Table \[tbl:speedUp\] shows that the DiCFS-hp version achieved better speed-up in all cases except for the HIGGS\_100i dataset mentioned above.

-------------- ---------- ---------- ---------- ---------- ---------- ----------
Dataset         WEKA       RegWEKA    DiCFS-hp   RegCFS     Speed-up   Speed-up
                time (s)   time (s)   time (s)   time (s)   RegCFS     DiCFS-hp
-------------- ---------- ---------- ---------- ---------- ---------- ----------
EPSILON\_25i    1011.42    655.56     58.85      63.61      10.31      17.19
EPSILON\_25f    393.91     703.95     25.83      55.08      12.78      15.25
EPSILON\_50i    4103.35    2228.64    76.98      110.13     20.24      53.30
HIGGS\_100i     182.86     327.61     21.34      23.70      13.82      8.57
HIGGS\_200i     2079.58    475.98     28.89      26.77      17.78      71.99
HIGGS\_200f     934.07     720.32     21.42      34.35      20.97      43.61
-------------- ---------- ---------- ---------- ---------- ---------- ----------

: Execution time and speed-up values for different CFS versions for regression and classification[]{data-label="tbl:speedUp"} Conclusions and Future Work {#sec:conclusions} =========================== We describe two parallel and distributed versions of the CFS filter-based FS algorithm using the Apache Spark programming model: DiCFS-vp and DiCFS-hp. These two versions essentially differ in how the dataset is distributed across the nodes of the cluster. The first version distributes the data by splitting rows (instances) and the second version, following Ramírez-Gallego et al. [@Ramirez-Gallego2017], distributes the data by splitting columns (features). As the outcome of a four-way comparison of DiCFS-vp and DiCFS-hp, a non-distributed implementation in WEKA and a distributed regression version in Spark, we can conclude as follows: - As was expected, both DiCFS-vp and DiCFS-hp were able to handle larger datasets in a much more time-efficient manner than the classical WEKA implementation. Moreover, in many cases they were the only feasible way to process certain types of datasets because of prohibitive WEKA memory requirements. 
- Of the horizontal and vertical partitioning schemes, the horizontal version (DiCFS-hp) proved to be the better option in the general case due to its better scalability and its natural partitioning mode that enables the Spark framework to make better use of cluster resources. - For classification problems, the benefits obtained from distribution compared to the non-distributed version can be considered equal to or even better than the benefits already demonstrated for the regression domain [@Eiras-Franco2016]. Regarding future research, an especially interesting line is whether it is necessary for this kind of algorithm to process all the data available or whether it would be possible to design automatic sampling procedures that could guarantee that, under certain circumstances, equivalent results could be obtained. In the case of the CFS, this question becomes more pertinent in view of the study of symmetrical uncertainty in datasets with up to 20,000 samples by Hall [@Hall1999], where tests showed that symmetrical uncertainty decreased exponentially with the number of instances and then stabilized at a certain number. Another line of future work could be research into different data partitioning schemes that could, for instance, improve the locality of data while overcoming the disadvantages of vertical partitioning. Acknowledgements {#acknowledgements .unnumbered} ================ The authors thank CESGA for use of their supercomputing resources. This research has been partially supported by the Spanish Ministerio de Economía y Competitividad (research projects TIN 2015-65069-C2-1R, TIN2016-76956-C3-3-R), the Xunta de Galicia (Grants GRC2014/035 and ED431G/01) and the European Union Regional Development Funds. R. Palma-Mendoza holds a scholarship from the Spanish Fundación Carolina and the National Autonomous University of Honduras. [10]{} D. W. Aha, D. Kibler, M. K. 
Albert, [Instance-Based Learning Algorithms]{}, Machine Learning 6 (1) (1991) 37–66. [](http://dx.doi.org/10.1023/A:1022689900470). J. Bacardit, P. Widera, A. M[á]{}rquez-chamorro, F. Divina, J. S. Aguilar-Ruiz, N. Krasnogor, [Contact map prediction using a large-scale ensemble of rule sets and the fusion of multiple predicted structural features]{}, Bioinformatics 28 (19) (2012) 2441–2448. [](http://dx.doi.org/10.1093/bioinformatics/bts472). R. Bellman, [[Dynamic Programming]{}]{}, Rand Corporation research study, Princeton University Press, 1957. V. Bol[ó]{}n-Canedo, N. S[á]{}nchez-Maro[ñ]{}o, A. Alonso-Betanzos, [[Distributed feature selection: An application to microarray data classification]{}]{}, Applied Soft Computing 30 (2015) 136–150. [](http://dx.doi.org/10.1016/j.asoc.2015.01.035). V. Bol[ó]{}n-Canedo, N. S[á]{}nchez-Maro[ñ]{}o, A. Alonso-Betanzos, [Recent advances and emerging challenges of feature selection in the context of big data]{}, Knowledge-Based Systems 86 (2015) 33–45. [](http://dx.doi.org/10.1016/j.knosys.2015.05.014). M. Dash, H. Liu, [[Consistency-based search in feature selection]{}]{}, Artificial Intelligence 151 (1-2) (2003) 155–176. [](http://dx.doi.org/10.1016/S0004-3702(03)00079-1). <http://linkinghub.elsevier.com/retrieve/pii/S0004370203000791> J. Dean, S. Ghemawat, [MapReduce: Simplified Data Processing on Large Clusters]{}, Proceedings of the 6th Symposium on Operating Systems Design and Implementation (2004) 137–149[](http://arxiv.org/abs/10.1.1.163.5292), [](http://dx.doi.org/10.1145/1327452.1327492). J. Dean, S. Ghemawat, [[MapReduce: Simplified Data Processing on Large Clusters]{}]{}, Communications of the ACM 51 (1) (2008) 107. <http://dl.acm.org/citation.cfm?id=1327452.1327492> R. O. Duda, P. E. Hart, D. G. Stork, [[Pattern Classification]{}]{}, John Wiley [&]{} Sons, 2001. C. Eiras-Franco, V. Bol[ó]{}n-Canedo, S. Ramos, J. Gonz[á]{}lez-Dom[í]{}nguez, A. Alonso-Betanzos, J. 
Touri[ñ]{}o, [[Multithreaded and Spark parallelization of feature selection filters]{}]{}, Journal of Computational Science 17 (2016) 609–619. [](http://dx.doi.org/10.1016/j.jocs.2016.07.002). U. M. Fayyad, K. B. Irani, [[Multi-Interval Discretization of Continuos-Valued Attributes for Classification Learning]{}]{} (1993). <http://trs-new.jpl.nasa.gov/dspace/handle/2014/35171> D. J. Garcia, L. O. Hall, D. B. Goldgof, K. Kramer, [A Parallel Feature Selection Algorithm from Random Subsets]{} (2004). E. E. Ghiselli, [[Theory of Psychological Measurement]{}]{}, McGraw-Hill series in psychology, McGraw-Hill, 1964. <https://books.google.es/books?id=mmh9AAAAMAAJ> I. Guyon, A. Elisseeff, [An Introduction to Variable and Feature Selection]{}, Journal of Machine Learning Research (JMLR) 3 (3) (2003) 1157–1182. [](http://arxiv.org/abs/1111.6189v1), [](http://dx.doi.org/10.1016/j.aca.2011.07.027). M. A. Hall, [Correlation-based feature selection for machine learning]{}, PhD Thesis., Department of Computer Science, Waikato University, New Zealand (1999). [](http://dx.doi.org/10.1.1.37.4643). M. A. Hall, [[Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning]{}]{} (2000) 359–366. <http://dl.acm.org/citation.cfm?id=645529.657793> M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, I. Witten, [The WEKA data mining software: An update]{}, SIGKDD Explorations 11 (1) (2009) 10–18. [](http://dx.doi.org/10.1145/1656274.1656278). T. K. Ho, [[Random Decision Forests]{}]{}, in: Proceedings of the Third International Conference on Document Analysis and Recognition (Volume 1) - Volume 1, ICDAR ’95, IEEE Computer Society, Washington, DC, USA, 1995, pp. 278—-. <http://dl.acm.org/citation.cfm?id=844379.844681> A. Idris, A. Khan, Y. S. Lee, [Intelligent churn prediction in telecom: Employing mRMR feature selection and RotBoost based ensemble classification]{}, Applied Intelligence 39 (3) (2013) 659–672. [](http://dx.doi.org/10.1007/s10489-013-0440-x). A. 
Idris, M. Rizwan, A. Khan, [Churn prediction in telecom using Random Forest and PSO based data balancing in combination with various feature selection strategies]{}, Computers and Electrical Engineering 38 (6) (2012) 1808–1819. [](http://dx.doi.org/10.1016/j.compeleceng.2012.09.001). I. Kononenko, [[Estimating attributes: Analysis and extensions of RELIEF]{}]{}, Machine Learning: ECML-94 784 (1994) 171–182. [](http://dx.doi.org/10.1007/3-540-57868-4). <http://www.springerlink.com/index/10.1007/3-540-57868-4> J. Kubica, S. Singh, D. Sorokina, [[Parallel Large-Scale Feature Selection]{}]{}, in: Scaling Up Machine Learning, no. February, 2011, pp. 352–370. [](http://dx.doi.org/10.1017/CBO9781139042918.018). <http://ebooks.cambridge.org/ref/id/CBO9781139042918A143> J. Leskovec, A. Rajaraman, J. D. Ullman, [[Mining of Massive Datasets]{}]{}, 2014. [](http://dx.doi.org/10.1017/CBO9781139924801). <http://ebooks.cambridge.org/ref/id/CBO9781139924801> M. Lichman, [[UCI Machine Learning Repository]{}](http://archive.ics.uci.edu/ml) (2013). <http://archive.ics.uci.edu/ml> J. Ma, L. K. Saul, S. Savage, G. M. Voelker, [Identifying Suspicious URLs : An Application of Large-Scale Online Learning]{}, in: Proceedings of the International Conference on Machine Learning (ICML), Montreal, Quebec, 2009. R. J. Palma-Mendoza, D. Rodriguez, L. De-Marcos, [[Distributed ReliefF-based feature selection in Spark]{}](http://link.springer.com/10.1007/s10115-017-1145-y), Knowledge and Information Systems (2018) 1–20[](http://dx.doi.org/10.1007/s10115-017-1145-y). <http://link.springer.com/10.1007/s10115-017-1145-y> H. Peng, F. Long, C. Ding, [[Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy.]{}]{}, IEEE transactions on pattern analysis and machine intelligence 27 (8) (2005) 1226–38. [](http://dx.doi.org/10.1109/TPAMI.2005.159). <http://www.ncbi.nlm.nih.gov/pubmed/16119262> D. Peralta, S. del R[í]{}o, S. Ram[í]{}rez-Gallego, I. 
Triguero, J. M. Benitez, F. Herrera, [[Evolutionary Feature Selection for Big Data Classification: A MapReduce Approach ]{}]{}, Mathematical Problems in Engineering 2015 (JANUARY). [](http://dx.doi.org/10.1155/2015/246139). W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, [Numerical recipes in C]{}, Vol. 2, Cambridge Univ Press, 1982. J. R. Quinlan, [[Induction of Decision Trees]{}](http://dx.doi.org/10.1023/A:1022643204877), Mach. Learn. 1 (1) (1986) 81–106. [](http://dx.doi.org/10.1023/A:1022643204877). <http://dx.doi.org/10.1023/A:1022643204877> J. R. Quinlan, [[C4.5: Programs for Machine Learning]{}](http://portal.acm.org/citation.cfm?id=152181), Vol. 1, 1992. [](http://dx.doi.org/10.1016/S0019-9958(62)90649-6). <http://portal.acm.org/citation.cfm?id=152181> S. Ram[í]{}rez-Gallego, I. Lastra, D. Mart[í]{}nez-Rego, V. Bol[ó]{}n-Canedo, J. M. Ben[í]{}tez, F. Herrera, A. Alonso-Betanzos, [[Fast-mRMR: Fast Minimum Redundancy Maximum Relevance Algorithm for High-Dimensional Big Data]{}]{}, International Journal of Intelligent Systems 32 (2) (2017) 134–152. [](http://dx.doi.org/10.1002/int.21833). <http://doi.wiley.com/10.1002/int.21833> I. Rish, [An empirical study of the naive Bayes classifier]{}, in: IJCAI 2001 workshop on empirical methods in artificial intelligence, Vol. 3, IBM, 2001, pp. 41–46. P. Sadowski, P. Baldi, D. Whiteson, [Searching for Higgs Boson Decay Modes with Deep Learning]{}, Advances in Neural Information Processing Systems 27 (Proceedings of NIPS) (2014) 1–9. J. Silva, A. Aguiar, F. Silva, [[Parallel Asynchronous Strategies for the Execution of Feature Selection Algorithms]{}]{}, International Journal of Parallel Programming (2017) 1–32[](http://dx.doi.org/10.1007/s10766-017-0493-2). <http://link.springer.com/10.1007/s10766-017-0493-2> V. Vapnik, [The Nature of Statistical Learning Theory]{} (1995). Y. Wang, W. Ke, X. 
Tao, [[A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark]{}]{}, Information 7 (1) (2016) 6. [](http://dx.doi.org/10.3390/info7010006). <http://www.mdpi.com/2078-2489/7/1/6> , [Xingquan Zhu]{}, [Gong-Qing Wu]{}, [Wei Ding]{}, [[Data mining with big data]{}](http://ieeexplore.ieee.org/document/6547630/), IEEE Transactions on Knowledge and Data Engineering 26 (1) (2014) 97–107. [](http://dx.doi.org/10.1109/TKDE.2013.109). <http://ieeexplore.ieee.org/document/6547630/> M. Zaharia, M. Chowdhury, T. Das, A. Dave, [[Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing]{}]{}, NSDI’12 Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation (2012) 2[](http://arxiv.org/abs/EECS-2011-82), [](http://dx.doi.org/10.1111/j.1095-8649.2005.00662.x). M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, I. Stoica, [Spark : Cluster Computing with Working Sets]{}, HotCloud’10 Proceedings of the 2nd USENIX conference on Hot topics in cloud computing (2010) 10[](http://dx.doi.org/10.1007/s00256-009-0861-0). Z. Zhao, H. Liu, [Searching for interacting features]{}, IJCAI International Joint Conference on Artificial Intelligence (2007) 1156–1161[](http://dx.doi.org/10.3233/IDA-2009-0364). Z. Zhao, R. Zhang, J. Cox, D. Duling, W. Sarle, [[Massively parallel feature selection: an approach based on variance preservation]{}]{}, Machine Learning 92 (1) (2013) 195–220. [](http://dx.doi.org/10.1007/s10994-013-5373-4). <http://link.springer.com/10.1007/s10994-013-5373-4> [^1]: <https://spark-packages.org> [^2]: <https://github.com/rauljosepalma/DiCFS> [^3]: <http://www.sas.com/en_us/software/high-performance-analytics.html> [^4]: <http://bigdata.cesga.es/> [^5]: <http://largescale.ml.tu-berlin.de/about/>
--- abstract: | The aim of this paper is to numerically solve a diffusion differential problem having a time derivative of fractional order. To this end we propose a collocation-Galerkin method that uses the fractional splines as approximating functions. The main advantage is that the derivatives of integer and fractional order of the fractional splines can be expressed in a closed form that involves just the generalized finite difference operator. This allows us to construct an accurate and efficient numerical method. Several numerical tests showing the effectiveness of the proposed method are presented.\ [**Keywords**]{}: Fractional diffusion problem, Collocation method, Galerkin method, Fractional spline author: - 'Laura Pezza[^1], Francesca Pitolli[^2]' title: 'A fractional spline collocation-Galerkin method for the time-fractional diffusion equation' --- Introduction. {#sec:intro} ============= The use of fractional calculus to describe real-world phenomena is becoming increasingly widespread. Integro-differential equations of [*fractional*]{}, [*i.e.*]{} positive real, order are used, for instance, to model wave propagation in porous materials, diffusive phenomena in biological tissue, and viscoelastic properties of continuous media [@Hi00; @Ma10; @KST06; @Ta10]. Among the various fields in which fractional models are successfully used, viscoelasticity is one of the most interesting, since the memory effect introduced by the time-fractional derivative makes it possible to model anomalous diffusion phenomena in materials that have mechanical properties in between pure elasticity and pure viscosity [@Ma10]. Even if these models are empirical, they are shown to be consistent with experimental data.\ The increased interest in fractional models has led to the development of several numerical methods to solve fractional integro-differential equations. 
Many of the proposed methods generalize to the fractional case the numerical methods commonly used in the classical integer case (see, for instance, [@Ba12; @PD14; @ZK14] and references therein). But the nonlocality of the fractional derivative raises the challenge of obtaining numerical solutions with high accuracy at a low computational cost. In [@PP16] we proposed a collocation method especially designed for solving differential equations of fractional order in time. The key ingredient of the method is the use of the fractional splines introduced in [@UB00] as approximating functions. Thus, the method takes advantage of the explicit differentiation rule for fractional B-splines, which allows us to accurately evaluate the derivatives of both integer and fractional order.\ In the present paper we use the method to solve a diffusion problem having time derivative of fractional order, and show that the method is efficient and accurate. More precisely, the [*fractional spline collocation-Galerkin method*]{} proposed here combines the fractional spline collocation method introduced in [@PP16] for the time discretization with a classical spline Galerkin method in space.\ The paper is organized as follows. In Section \[sec:diffeq\], a time-fractional diffusion problem is presented and the definition of fractional derivative is given. Section \[sec:fractBspline\] is devoted to the fractional B-splines, and the explicit expression of their fractional derivatives is given. The fractional spline approximating space is described in Section \[sec:app\_spaces\], while the fractional spline collocation-Galerkin method is introduced in Section \[sec:Galerkin\]. Finally, in Section \[sec:numtest\] some numerical tests showing the performance of the method are displayed. Some conclusions are drawn in Section \[sec:concl\]. A time-fractional diffusion problem.
{#sec:diffeq} ==================================== We consider the [*time-fractional differential diffusion problem*]{} [@Ma10] $$\label{eq:fracdiffeq} \left \{ \begin{array}{lcc} \displaystyle D_t^\gamma \, u(t, x) - \frac{\partial^2}{\partial x^2} \, u(t, x) = f(t, x)\,, & \quad t \in [0, T]\,, & \quad x \in [0,1] \,,\\ \\ u(0, x) = 0\,, & & \quad x \in [0,1]\,, \\ \\ u(t, 0) = u(t, 1) = 0\,, & \quad t \in [0, T]\,, \end{array} \right.$$ where $ D_t^\gamma u$, $0 < \gamma < 1$, denotes the [*partial fractional derivative*]{} with respect to time $t$. Usually, in viscoelasticity the fractional derivative is to be understood in the Caputo sense, [*i.e.*]{} $$\label{eq:Capfrac} D_t^\gamma \, u(t, x) = \frac1{\Gamma(1-\gamma)} \, \int_0^t \, \frac{u_t(\tau,x)}{(t - \tau)^\gamma} \, d\tau\,, \qquad t\ge 0\,,$$ where $\Gamma$ is Euler’s gamma function $$\Gamma(\gamma+1)= \int_0^\infty \, s^\gamma \, {\rm e}^{-s} \, ds\,.$$ We notice that, due to the homogeneous initial condition on the solution $u(t,x)$ of the differential problem (\[eq:fracdiffeq\]), the Caputo definition (\[eq:Capfrac\]) coincides with the Riemann-Liouville definition (see [@Po99] for details). One of the advantages of the Riemann-Liouville definition is that the usual differentiation operator in the Fourier domain can be easily extended to the fractional case, [*i.e.*]{} $${\cal F} \bigl(D_t^\gamma \, f(t) \bigr) = (i\omega)^\gamma {\cal F} (f(t))\,,$$ where ${\cal F}(f)$ denotes the Fourier transform of the function $f(t)$. Thus, analytical Fourier methods usually used in the classical integer case can be extended to the fractional case [@Ma10]. The fractional B-splines and their fractional derivatives.
{#sec:fractBspline} ========================================================== The [*fractional B-splines*]{}, [*i.e.*]{} the B-splines of fractional degree, were introduced in [@UB00] by generalizing to fractional powers the classical definition of the polynomial B-splines of integer degree. Thus, the fractional B-spline $B_{\alpha}$ of degree $\alpha$ is defined as $$\label{eq:Balpha} B_{\alpha}(t) := \frac{{ \Delta}^{\alpha+1} \, t_+^\alpha} {\Gamma(\alpha+1)}\,, \qquad \alpha > -\frac 12\,,$$ where $$\label{eq:fracttruncpow} t_+^\alpha: = \left \{ \begin{array}{ll} t^\alpha\,, & \qquad t \ge 0\,, \\ \\ 0\,, & \qquad \hbox{otherwise}\,, \end{array} \right. \qquad \alpha > -1/2\,,$$ is the [*fractional truncated power function*]{}. $\Delta^{\alpha}$ is the [*generalized finite difference operator*]{} $$\label{eq:fracfinitediff} \Delta^{\alpha} \, f(t) := \sum_{k\in \NN} \, (-1)^k \, {\alpha \choose k} \, f(t-\,k)\,, \qquad \alpha \in \RR^+\,,$$ where $$\label{eq:binomfrac} {\alpha \choose k} := \frac{\Gamma(\alpha+1)}{k!\, \Gamma(\alpha-k+1)}\,, \qquad k\in \NN\,, \quad \alpha \in \RR^+\,,$$ are the [*generalized binomial coefficients*]{}. We notice that ’fractional’ actually means ’noninteger’, [*i.e.*]{} $\alpha$ can assume any real value greater than $-1/2$. For noninteger values of $\alpha$, $B_\alpha$ does not have compact support, even though it belongs to $L_2(\RR)$. When $\alpha=n$ is a nonnegative integer, Equations (\[eq:Balpha\])-(\[eq:binomfrac\]) are still valid; $\Delta^{n}$ is the usual finite difference operator, so that $B_n$ is the classical polynomial B-spline of degree $n$ with compact support $[0,n+1]$ (for details on polynomial B-splines see, for instance, the monograph [@Sc07]). The fractional B-splines for different values of the parameter $\alpha$ are displayed in Figure \[fig:fractBsplines\] (top left panel). The classical polynomial B-splines are also displayed (dashed lines).
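Since the generalized binomial coefficients decay fast, the series defining $\Delta^{\alpha+1}$ in (\[eq:Balpha\]) can be truncated after a moderate number of terms. A minimal numerical sketch of the definition (the function names and the truncation level `K` are our own choices, not part of the paper):

```python
import math

def gen_binom(a, k):
    # generalized binomial coefficient Gamma(a+1) / (k! Gamma(a-k+1)),
    # computed as a running product to avoid the poles of Gamma at integer a
    c = 1.0
    for i in range(1, k + 1):
        c *= (a - i + 1) / i
    return c

def frac_bspline(t, alpha, K=60):
    # B_alpha(t) = Delta^(alpha+1) t_+^alpha / Gamma(alpha+1),
    # with the (possibly infinite) difference series truncated at K terms
    s = sum((-1) ** k * gen_binom(alpha + 1, k) * max(t - k, 0.0) ** alpha
            for k in range(K))
    return s / math.gamma(alpha + 1)
```

For $0 < t < 1$ only the $k=0$ term survives, so $B_\alpha(t) = t^\alpha/\Gamma(\alpha+1)$ there, which gives an easy spot check; for integer $\alpha$ the running product makes the coefficients vanish exactly beyond $k = \alpha+1$, recovering the compactly supported polynomial B-spline.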
The picture shows that the fractional B-splines decay very fast toward infinity, so that they can be assumed compactly supported for computational purposes. Moreover, in contrast to the polynomial B-splines, the fractional B-splines are not always positive, even though their negative part becomes smaller and smaller as $\alpha$ increases. ![Top left panel: The fractional B-splines (solid lines) and the polynomial B-splines (dashed lines) for $\alpha$ ranging from 0 to 4. Top right panel: The fractional derivatives of the linear B-spline $B_1$ for $\gamma = 0.25, 0.5, 0.75$. Bottom left panel: The fractional derivatives of the cubic B-spline $B_3$ for $\gamma$ ranging from 0.25 to 2. Bottom right panel: The fractional derivatives of the fractional B-spline $B_{3.5}$ for $\gamma$ ranging from 0.25 to 2. Ordinary derivatives are displayed as dashed lines. []{data-label="fig:fractBsplines"}](Fig_fract_Bspline.png "fig:"){width="6cm"} The fractional derivatives of the fractional B-splines can be evaluated explicitly by differentiating (\[eq:Balpha\]) and (\[eq:fracttruncpow\]) in the Caputo sense. This gives the following differentiation rule $$\label{eq:diffrule_tronc} D^{\gamma}_t \, B_{\alpha} (t)= \frac{\Delta^{\alpha+1} \, t_+^{\alpha-\gamma}} {\Gamma(\alpha-\gamma+1)}\,, \qquad 0 < \gamma < \alpha + \frac12\,,$$ which holds both for fractional and integer order $\gamma$.
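As a sanity check of the rule, it can be compared with the Caputo integral computed by quadrature. A sketch for the linear B-spline $B_1$ at $\gamma = 1/2$ and $t = 1.5$ (the weakly singular factor $(t-\tau)^{-\gamma}$ is passed to SciPy's QUADPACK as an algebraic endpoint weight; variable names are ours):

```python
import math
from scipy.integrate import quad

g, t = 0.5, 1.5          # fractional order and evaluation point, t inside (1, 2)

def tp(x, a):
    # truncated power x_+^a
    return x ** a if x > 0 else 0.0

# rule (eq:diffrule_tronc) for alpha = 1: D^g B_1(t) = Delta^2 t_+^(1-g) / Gamma(2-g)
rule = (tp(t, 1 - g) - 2 * tp(t - 1, 1 - g) + tp(t - 2, 1 - g)) / math.gamma(2 - g)

# direct Caputo integral, with B_1' = 1 on (0, 1) and -1 on (1, 2);
# the singular piece uses quad's 'alg' weight w(x) = (x-a)^0 * (b-x)^(-g)
part1 = quad(lambda tau: (t - tau) ** (-g), 0.0, 1.0)[0]
part2 = quad(lambda tau: -1.0, 1.0, t, weight='alg', wvar=(0.0, -g))[0]
direct = (part1 + part2) / math.gamma(1 - g)
```

Both evaluations give $2(\sqrt{1.5} - 2\sqrt{0.5})/\sqrt{\pi} \approx -0.214$; the negative value is consistent with the dips below zero visible in Figure \[fig:fractBsplines\] (top right panel).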
In particular, when $\gamma, \alpha$ are nonnegative integers, (\[eq:diffrule\_tronc\]) is the usual differentiation rule for the classical polynomial B-splines [@Sc07]. We observe that since $B_\alpha$ is a causal function with $B_\alpha^{(n)}(0)=0$ for $n\in \NN\backslash\{0\}$, the Caputo fractional derivative coincides with the Riemann-Liouville fractional derivative.\ From (\[eq:diffrule\_tronc\]) and the composition property $\Delta^{\alpha_1} \, \Delta^{\alpha_2} = \Delta^{\alpha_1+\alpha_2}$ it follows [@UB00] $$\label{eq:diffrule_2} D^{\gamma}_t \, B_{\alpha} = \Delta ^{\gamma} \, B_{\alpha-\gamma}\,,$$ [*i.e.*]{} the fractional derivative of a fractional B-spline of degree $\alpha$ is a fractional spline of degree $\alpha-\gamma$. The fractional derivatives of the classical polynomial B-splines $B_n$ are fractional splines, too. This means that $D^{\gamma}_t \, B_{n}$ is not compactly supported when $\gamma$ is noninteger reflecting the nonlocal behavior of the derivative operator of fractional order.\ In Figure \[fig:fractBsplines\] the fractional derivatives of $B_1$ (top right panel), $B_3$ (bottom left panel) and $B_{3.5}$ (bottom right panel) are displayed for different values of $\gamma$. The fractional spline approximating spaces. {#sec:app_spaces} =========================================== A property of the fractional B-splines that is useful for the construction of numerical methods for the solution of differential problems is the [*refinability*]{}. In fact, the fractional B-splines are [*refinable functions*]{}, [*i.e.*]{} they satisfy the [*refinement equation*]{} $$B_\alpha(t) = \sum_{k\in \NN} \, a^{(\alpha)}_{k} \, B_\alpha(2\,t-k)\,, \qquad t \ge 0\,,$$ where the coefficients $$a^{(\alpha)}_{k} := \frac{1}{2^{\alpha}} {\alpha+1 \choose k}\,,\qquad k\in \NN\,,$$ are the [*mask coefficients*]{}. 
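A quick numerical check of the mask (helper names are ours): since $\sum_k {\alpha+1 \choose k} = 2^{\alpha+1}$, the mask coefficients must sum to 2, as required for a refinable partition of unity. For noninteger $\alpha$ the mask is infinite but its terms decay fast, so a truncation suffices:

```python
def gen_binom(a, k):
    # generalized binomial coefficient, as a product to avoid Gamma poles
    c = 1.0
    for i in range(1, k + 1):
        c *= (a - i + 1) / i
    return c

def mask(alpha, K=200):
    # a_k = 2^(-alpha) * C(alpha+1, k), truncated at K terms
    return [gen_binom(alpha + 1, k) / 2 ** alpha for k in range(K)]
```

For $\alpha = 3$ this reproduces the classical cubic mask $(1, 4, 6, 4, 1)/8$, with all further coefficients exactly zero.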
This means that the sequence of nested approximating spaces $$V^{(\alpha)}_j(\RR) = {\rm span} \,\{B_\alpha(2^j\, t -k), k \in \ZZ\}\,, \qquad j \in \ZZ\,,$$ forms a [*multiresolution analysis*]{} of $L_2(\RR)$. As a consequence, any function $f_j(t)$ belonging to $V^{(\alpha)}_j(\RR)$ can be expressed as $$f_j(t) = \sum_{k\in \ZZ}\, \lambda_{j,k} \, B_\alpha(2^j\, t -k)\,,$$ where the coefficient sequence $\{\lambda_{j,k}\} \in \ell_2(\ZZ)$. Moreover, any space $V^{(\alpha)}_j(\RR)$ reproduces polynomials up to degree $\lceil \alpha\rceil$, [*i.e.*]{} $t^d \in V^{(\alpha)}_j(\RR)$, $ 0 \le d \le \lceil \alpha\rceil$, while its approximation order is $\alpha +1$. We recall that the polynomial B-spline $B_n$ reproduces polynomials up to degree $n$ with approximation order $n+1$ [@UB00]. To solve boundary value problems we need to construct a multiresolution analysis on a finite interval. For the sake of simplicity, in the following we will consider the interval $I=[0,1]$. A simple approach is to restrict the basis $\{B_\alpha(2^j\, t -k)\}$ to the interval $I$, [*i.e.*]{} $$\label{eq:Vj_int} V^{(\alpha)}_j(I) = {\rm span} \,\{B_\alpha(2^j\, t -k), t\in I, -N \le k \le 2^j-1\}\,, \qquad j_0 \le j\,,$$ where $N$ is a suitable index, chosen so that the significant part of $B_\alpha$ is contained in $[0,N+1]$, and $j_0$ is the starting refinement level. The drawback of this approach is its numerical instability and the difficulty in fulfilling the boundary conditions, since there are $2N$ boundary functions, [*i.e.*]{} the translates of $B_\alpha$ having indexes $ -N\le k \le -1$ and $2^j-N\le k \le 2^j-1$, that are nonzero at the boundaries. More suitable refinable bases can be obtained by the procedure given in [@GPP04; @GP04]. In particular, for the polynomial B-spline $B_n$ a B-basis $\{\phi_{\alpha,j,k}(t)\}$ with optimal approximation properties can be constructed.
The internal functions $\phi_{\alpha,j,k}(t)=B_\alpha(2^j\, t -k)$, $0 \le k \le 2^j-1-n$, remain unchanged while the $2n$ boundary functions fulfill the boundary conditions $$\begin{array}{llcc} \phi_{\alpha,j,-1}^{(\nu)}(0) = 1\,, & \phi_{\alpha,j,k}^{(\nu)}(0) = 0\,, &\hbox{for} & 0\le \nu \le -k-2\,, -n \le k \le -2\,,\\ \\ \phi_{\alpha,j,2^j-1}^{(\nu)}(1) = 1\,, & \phi_{\alpha,j,2^j+k}^{(\nu)}(1) = 0\,, & \hbox{for} & 0\le \nu \le -k-2\,, -n \le k \le -2\,,\\ \\ \end{array}$$ Thus, the B-basis naturally fulfills Dirichlet boundary conditions.\ As we will show in the next section, the refinability of the fractional spline bases plays a crucial role in the construction of the collocation-Galerkin method. The fractional spline collocation-Galerkin method. {#sec:Galerkin} ================================================== In the collocation-Galerkin method here proposed, we look for an approximating function $u_{s,j}(t,x) \in V^{(\beta)}_s([0,T]) \otimes V^{(\alpha)}_j([0,1])$. Since just the ordinary first spatial derivative of $u_{s,j}$ is involved in the Galerkin method, we can assume $\alpha$ integer and use as basis function for the space $V^{(\alpha)}_j([0,1])$ the refinable B-basis $\{\phi_{\alpha,j,k}\}$, [*i.e.*]{} $$\label{uj} u_{s,j}(t,x) = \sum_{k \in {\cal Z}_j} \, c_{s,j,k}(t) \, \phi_{\alpha,j,k}(x)\,,$$ where the unknown coefficients $c_{s,j,k}(t)$ belong to $V^{(\beta)}_s([0,T])$. 
Here, ${\cal Z}_j$ denotes the set of indexes $-n\le k \le 2^j-1$.\ The approximating function $u_{s,j}(t,x)$ solves the variational problem $$\label{varform} \left \{ \begin{array}{ll} \displaystyle \left ( D_t^\gamma u_{s,j},\phi_{\alpha,j,k} \right ) -\left ( \frac {\partial^2} {\partial x^2}\,u_{s,j},\phi_{\alpha,j,k} \right ) = \left ( f,\phi_{\alpha,j,k} \right )\,, & \quad k \in {\cal Z}_j\,, \\ \\ u_{s,j}(0, x) = 0\,, & x \in [0,1]\,, \\ \\ u_{s,j}(t, 0) = 0\,, \quad u_{s,j}(t,1) = 0\,, & t \in [0,T]\,, \end{array} \right.$$ where $(f,g)= \int_0^1 \, f\,g$.\ Now, writing (\[varform\]) in a weak form and using (\[uj\]) we get the system of fractional ordinary differential equations $$\label{fracODE} \left \{ \begin{array}{ll} M_j \, D_t^\gamma\,C_{s,j}(t) + L_j\, C_{s,j}(t) = F_j(t)\,, & \qquad t \in [0,T]\,, \\ \\ C_{s,j}(0) = 0\,, \end{array} \right.$$ where $C_{s,j}(t)=(c_{s,j,k}(t))_{k\in {\cal Z}_j}$ is the unknown vector. The connecting coefficients, i.e. the entries of the mass matrix $M_j = (m_{j,k,i})_{k,i\in{\cal Z}_j}$, of the stiffness matrix $L_j = (\ell_{j,k,i})_{k,i\in{\cal Z}_j}$, and of the load vector $F_j(t)=(f_{j,k}(t))_{k\in {\cal Z}_j}$, are given by $$m_{j,k,i} = \int_0^1\, \phi_{\alpha,j,k}\, \phi_{\alpha,j,i}\,, \qquad \ell_{j,k,i} = \int_0^1 \, \phi'_{\alpha,j,k} \, \phi'_{\alpha,j,i}\,,$$ $$f_{j,k}(t) = \int_0^1\, f(t,\cdot)\, \phi_{\alpha,j,k}\,.$$ The entries of $M_j$ and $L_j$ can be evaluated explicitly using (\[eq:Balpha\]) and (\[eq:diffrule\_tronc\]), respectively, while the entries of $F_j(t)$ can be evaluated by quadrature formulas especially designed for wavelet methods [@CMP15; @GGP00]. To solve the fractional differential system (\[fracODE\]) we use the collocation method introduced in [@PP16]. For an integer value of $T$, let $t_p = p/2^q$, $0\le p \le 2^q\,T$, where $q$ is a given nonnegative integer, be a set of dyadic nodes in the interval $[0,T]$. 
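For the internal (cardinal) cubic basis functions on integer knots, the overlap integrals defining $M_j$ and $L_j$ have known closed values, which gives a convenient cross-check. A sketch using SciPy's `BSpline.basis_element` (the actual entries at refinement level $j$ follow by rescaling, and the boundary functions of the B-basis differ; this is an illustration, not the paper's exact assembly code):

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import BSpline

b3 = BSpline.basis_element(np.arange(5))   # cardinal cubic B-spline, support [0, 4]
db3 = b3.derivative()

def overlap(f, shift):
    # integral of f(x) * f(x - shift) over the common support [shift, 4]
    return quad(lambda x: float(f(x)) * float(f(x - shift)),
                shift, 4.0, limit=100)[0]

mass = [overlap(b3, k) for k in range(4)]    # expected: 151/315, 397/1680, 1/42, 1/5040
stiff = [overlap(db3, k) for k in range(4)]  # expected: 2/3, -1/8, -1/5, -1/120
```

Two structural identities follow from partition of unity: the mass values satisfy $m_0 + 2(m_1+m_2+m_3) = 1$, and the stiffness values sum to zero.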
Now, assuming $$\label{ck} c_{s,j,k}(t) = \sum_{r\in {\cal R}_s} \, \lambda_{k,r}\,\chi_{\beta,s,r}(t) \,, \qquad k \in {\cal Z}_j\,,$$ where $\chi_{\beta,s,r}(t)=B_\beta(2^s\,t-r)$ with $B_\beta$ a fractional B-spline of fractional degree $\beta$, and collocating (\[fracODE\]) on the nodes $t_p$, we get the linear system $$\label{colllinearsys} (M_j\otimes A_s + L_j\otimes G_s) \,\Lambda_{s,j} =F_j\,,$$ where $\Lambda_{s,j}=(\lambda_{k,r})_{r\in {\cal R}_s,k\in {\cal Z}_j}$ is the unknown vector, $$\begin{array}{ll} A_s= \bigl( a_{p,r} \bigr)_{p\in {\cal P}_q,r\in {\cal R}_s}\,, & \qquad a_{p,r} = D_t^\gamma \, \chi_{\beta,s,r}(t_p)\,, \\ \\ G_s=\bigl(g_{p,r}\bigr)_{p \in {\cal P}_q,r\in {\cal R}_s}\,, & \qquad g_{p,r} = \chi_{\beta,s,r}(t_p)\,, \end{array}$$ are the collocation matrices, and $$F_j=(f_{j,k}(t_p))_{k\in{\cal Z}_j,p \in {\cal P}_q}\,,$$ is the constant term. Here, ${\cal R}_s$ denotes the set of indexes $-\infty < r \le 2^s-1$ and ${\cal P}_q$ denotes the set of indexes $0<p\le 2^qT$. Since the fractional B-splines have fast decay, the series (\[ck\]) is well approximated by only a few terms, and the linear system (\[colllinearsys\]) has, in practice, finite dimension, so that the unknown vector $\Lambda_{s,j}$ can be recovered by solving (\[colllinearsys\]) in the least squares sense.\ We notice that the entries of $G_s$, which involve just the values of $\chi_{\beta,s,r}$ on the dyadic nodes $t_p$, can be evaluated explicitly by (\[eq:Balpha\]). On the other hand, we must pay special attention to the evaluation of the entries of $A_s$, since they involve the values of the fractional derivative $D_t^\gamma\chi_{\beta,s,r}(t_p)$. As shown in Section \[sec:fractBspline\], these can be evaluated efficiently by the differentiation rule (\[eq:diffrule\_2\]). In the following theorem we prove that the fractional spline collocation-Galerkin method is convergent.
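Before turning to the convergence result, the structure of the least-squares solve of (\[colllinearsys\]) can be illustrated on a toy problem. The matrices below are random stand-ins for $M_j$, $L_j$, $A_s$, $G_s$ with arbitrary small sizes; with more collocation nodes than time degrees of freedom the system is overdetermined, and `lstsq` recovers the coefficients of a consistent right-hand side:

```python
import numpy as np

rng = np.random.default_rng(0)
nj, ns, nq = 4, 3, 6              # toy sizes: spatial dofs, time dofs, collocation nodes
M = np.eye(nj) + 0.1 * rng.standard_normal((nj, nj))   # stand-in for M_j
L = np.eye(nj) + 0.1 * rng.standard_normal((nj, nj))   # stand-in for L_j
A = rng.standard_normal((nq, ns))                      # stand-in for A_s
G = rng.standard_normal((nq, ns))                      # stand-in for G_s

K = np.kron(M, A) + np.kron(L, G)      # (nj*nq) x (nj*ns), overdetermined
lam_true = rng.standard_normal(nj * ns)
F = K @ lam_true                       # consistent right-hand side
lam, *_ = np.linalg.lstsq(K, F, rcond=None)
```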
First of all, let us introduce the Sobolev space on bounded interval $$H^\mu(I):= \{v \in L^2(I): \exists \, \tilde v \in H^\mu (\RR) \ \hbox{\rm such that} \ \tilde v|_I=v\}, \quad \mu\geq 0\,,$$ equipped with the norm $$\|v\|_{\mu,I} = \inf_{\tilde v \in H^\mu(\RR), \tilde v|_I=v} \|\tilde v\|_{\mu,\RR}\,,$$ where $$H^\mu(\RR):= \{v: v\in L^2(\RR) \mbox{ and } (1+|\omega|^2)^{\mu/2} {\cal F}(v)(\omega) \in L^2(\RR)\}, \quad \mu\geq 0\,,$$ is the usual Sobolev space with the norm $$\| v \| _{\mu,\RR} =\bigl \| (1+|\omega|^2)^{\mu/2} {\cal F}(v)(\omega) \bigr \| _{0,\RR}\,.$$ \[Convergence\] Let $$H^\mu(I;H^{\tilde \mu}(\Omega)):= \{v(t,x): \| v(t,\cdot)\|_{H^{\tilde \mu}(\Omega)} \in H^\mu(I)\}, \quad \mu, {\tilde \mu} \geq 0\,,$$ equipped with the norm $$\|v\|_{H^\mu(I;H^{\tilde \mu}(\Omega))} := \bigl \| \|v(t,\cdot)\|_{H^{\tilde \mu}(\Omega)} \bigr\|_{\mu,I}\,.$$ Assume $u$ and $f$ in (\[eq:fracdiffeq\]) belong to $H^{\mu}([0,T];H^{\tilde \mu}([0,1]))$, $0\le \mu$, $0\le \tilde \mu$, and $H^{\mu-\gamma}([0,T];$ $H^{{\tilde \mu}-2}([0,1]))$, $0\le \mu-\gamma$, $0\le \tilde \mu-2$, respectively. Then, the fractional spline collocation-Galerkin method is convergent, [*i.e.*]{}, $$\|u-u_{s,j}\|_{H^0([0,T];H^0([0,1]))} \, \to 0 \quad \hbox{as} \quad s,j \to \infty\,.$$ Moreover, for $\gamma \le \mu \le \beta+1$ and $1 \le \tilde \mu \le \alpha +1$ the following error estimate holds: $$\begin{array}{lcl} \| u-u_{s,j}\|_{H^0([0,T];H^0([0,1]))} &\leq & \left (\eta_1 \, 2^{-j\tilde \mu} + \eta_2 \, 2^{-s\mu} \right ) \| u\|_{H^\mu([0,T];H^{\tilde \mu}([0,1]))}\,, \end{array}$$ where $\eta_1$ and $\eta_2$ are two constants independent of $s$ and $j$. Let $u_j$ be the exact solution of the variational problem (\[varform\]). Following a classical line of reasoning (cf. 
[@Th06; @FXY11; @DPS94]) we get $$\begin{array}{l} \| u-u_{j,s}\|_{H^0([0,T];H^0([0,1]))} \leq \\ \\ \rule{2cm}{0cm} \leq \|u-u_{j}\|_{H^0([0,T];H^0([0,1]))} + \| u_j-u_{j,s}\|_{H^0([0,T];H^0([0,1]))}\, \leq \\ \\ \rule{2cm}{0cm} \leq \eta_1 \, 2^{-j\tilde \mu}\, \| u\|_{H^0([0,T];H^{\tilde \mu}([0,1]))} + \eta_2 \, 2^{-s\mu} \, \| u\|_{H^\mu([0,T];H^0([0,1]))} \leq \\ \\ \rule{2cm}{0cm} \leq \left ( \eta_1 \, 2^{-j\tilde \mu} + \eta_2 \, 2^{-s\mu} \right ) \, \|u\|_{H^\mu([0,T];H^{\tilde \mu}([0,1]))}\,. \end{array}$$ Numerical tests. {#sec:numtest} ================ To show the effectiveness of the fractional spline collocation-Galerkin method, we solved the fractional diffusion problem (\[eq:fracdiffeq\]) for two different known terms $f(t,x)$, taken from [@FXY11]. In all the numerical tests we used the (polynomial) cubic spline space as approximating space for the Galerkin method. The B-spline $B_3$, its first derivative $B_3'$ and the B-basis $\{\phi_{3,3,k}\}$ are displayed in Figure \[fig:Bcubic\]. We notice that since the cubic B-spline is centrally symmetric in the interval $[0,4]$, the B-basis is centrally symmetric, too. All the numerical tests were performed on a laptop using a Python environment. Each test takes a few minutes. ![Left panel: The cubic B-spline (red line) and its first derivative (blue line). Right panel: The B-basis $\{\phi_{3,3,k}(x)\}$.[]{data-label="fig:Bcubic"}](Fig_Bspline_n3.png "fig:"){width="45.00000%"} Example 1 --------- In the first test we solved the time-fractional diffusion equation (\[eq:fracdiffeq\]) in the case when $$f(t,x)=\frac{2}{\Gamma(3-\gamma)}\,t^{2-\gamma}\, \sin(2\pi x)+4\pi^2\,t^2\, \sin(2\pi x)\,.$$ The exact solution is $$u(t,x)=t^2\,\sin(2\pi x).$$ We used the fractional B-spline $B_{3.5}$ as approximating function for the collocation method and solved the problem for $\gamma = 1, 0.75, 0.5, 0.25$. The fractional B-spline $B_{3.5}$, its first derivative and its fractional derivatives are shown in Figure \[fig:fract\_Basis\], along with the fractional basis $\{\chi_{3.5,3,r}\}$. The numerical solution $u_{s,j}(t,x)$ and the error $e_{s,j}(t,x) = u(t,x)-u_{s,j}(t,x)$ for $s=6$ and $j=6$ are displayed in Figure \[fig:numsol\_1\] for $\gamma = 0.5$. In all the numerical tests we set $q = s+1$.
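The known term above is built from the closed form $D_t^\gamma\, t^2 = 2\,t^{2-\gamma}/\Gamma(3-\gamma)$, which follows from (\[eq:Capfrac\]) with $u_t(\tau,x) = 2\tau\,\sin(2\pi x)$. A quick quadrature check of this closed form (a sketch; the singular factor $(t-\tau)^{-\gamma}$ is handled by QUADPACK's algebraic weight, and the function name is ours):

```python
import math
from scipy.integrate import quad

def caputo_t2(t, g):
    # Caputo derivative of u(t) = t^2:
    # (1 / Gamma(1-g)) * int_0^t 2*tau * (t - tau)^(-g) dtau,
    # with the weight (t - tau)^(-g) passed as wvar = (0, -g)
    val = quad(lambda tau: 2.0 * tau, 0.0, t, weight='alg', wvar=(0.0, -g))[0]
    return val / math.gamma(1.0 - g)

t, g = 0.8, 0.5
exact = 2.0 * t ** (2.0 - g) / math.gamma(3.0 - g)
```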
![Left panel: The fractional B-spline $B_{3.5}$ (green line), its first derivative (red line) and its fractional derivatives of order $\gamma =0.75$ (blue line), 0.5 (cyan line), 0.25 (black line). Right panel: The fractional basis $\{\chi_{3.5,3,r}\}$.[]{data-label="fig:fract_Basis"}](Fig_Bspline_n3p5.png "fig:"){width="45.00000%"} ![Example 1: The numerical solution (left panel) and the error (right panel) for $j=6$ and $s=6$ when $\gamma = 0.5$.[]{data-label="fig:numsol_1"}](Fig_NumSol_jGK6_sref7_beta3p5_gamma0p5_ex1.png "fig:"){width="45.00000%"} We analyze the behavior of the error as the degree of the fractional B-spline $B_\beta$ increases. Figure \[fig:L2\_error\_1\] shows the $L_2$-norm of the error as a function of $s$ for $\beta$ ranging from 2 to 4; the four panels in the figure refer to different values of the order of the fractional derivative. For these tests we set $j=5$. The figure shows that for $s \le 4$ the error provided by the polynomial spline approximations is lower than the error provided by the fractional spline approximations. Nevertheless, in this latter case the error decreases, reaching the same value as the polynomial spline error, or even a lower one, when $s=5$. We notice that for $\gamma=1$ the errors provided by the polynomial spline approximations of different degrees have approximately the same values, while the error provided by the polynomial spline of degree 2 is lower in the case of fractional derivatives. In fact, it is well known that fractional derivatives are better approximated by less smooth functions [@Po99].
![Example 1: The $L_2$-norm of the error as a function of $q$ for different values of $\gamma$. Each line corresponds to a spline of different degree: solid lines correspond to the polynomial splines; non-solid lines correspond to fractional splines.[]{data-label="fig:L2_error_1"}](Fig_L2Error_gam1_ex1.png "fig:"){width="45.00000%"} Then, we analyze the convergence of the method for increasing values of $j$ and $s$. Table \[tab:conv\_js\_fract\_1\] reports the $L_2$-norm of the error for different values of $j$ and $s$ when using the fractional B-spline $B_{3.5}$ and $\gamma = 0.5$. The number of degrees of freedom is also reported. The table shows that the error decreases when $j$ increases and $s$ is held fixed. We notice that the error decreases only very slightly when $j$ is held fixed and $s$ increases, since for these values of $s$ we have already reached the accuracy level we can expect for that value of $j$ (cf. Figure \[fig:L2\_error\_1\]). The higher values of the error for $s=7$ and $j=5,6$ are due to the numerical instabilities of the basis $\{\chi_{3.5,s,r}\}$, which result in a high condition number of the discretization matrix. The error has a similar behavior in the case when we use the cubic B-spline space as approximating space for the collocation method (cf. Table \[tab:conv\_js\_cubic\_1\]).
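From the row $s=5$ of Table \[tab:conv\_js\_fract\_1\] the empirical spatial convergence order can be read off as $\log_2(e_j/e_{j+1})$. A quick computation (the observed rates of about 2 are below the best possible spatial order $\tilde\mu = \alpha+1 = 4$, plausibly because the time discretization error at fixed $s$ also contributes):

```python
import math

# L2 errors for s = 5 and j = 3, 4, 5, 6, read off the table
errors = [0.02037, 0.00449, 0.00101, 0.00025]
rates = [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
# rates ≈ [2.18, 2.15, 2.01]
```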
   $s \backslash j$                3                4                5                6
  -------------------------------- ---------------- ---------------- ---------------- ----------------
  $\sharp V_j^{(\alpha)}([0,1])$   9                17               33               65
  5                                0.02037 (369)    0.00449 (697)    0.00101 (1353)   0.00025 (2665)
  6                                0.02067 (657)    0.00417 (1241)   0.00093 (2409)   0.00024 (4745)
  7                                0.01946 (1233)   0.00381 (2329)   0.00115 (4521)   0.00117 (8905)
  -------------------------------- ---------------- ---------------- ---------------- ----------------

  : Example 1: The $L_2$-norm of the error for increasing values of $s$ and $j$ when using the fractional B-spline of degree $\beta=3.5$. The numbers in parentheses are the degrees-of-freedom. Here, $\gamma =0.5$.

\[tab:conv\_js\_fract\_1\]

   $s \backslash j$                3                4                5                6
  -------------------------------- ---------------- ---------------- ---------------- ----------------
  $\sharp V_j^{(\alpha)}([0,1])$   9                17               33               65
  5                                0.02121 (315)    0.00452 (595)    0.00104 (1155)   0.00025 (2275)
  6                                0.02109 (603)    0.00443 (1139)   0.00097 (2211)   0.00023 (4355)
  7                                0.02037 (1179)   0.00399 (2227)   0.00115 (4323)   0.00115 (8515)
  -------------------------------- ---------------- ---------------- ---------------- ----------------

  : Example 1: The $L_2$-norm of the error for increasing values of $s$ and $j$ when using the cubic B-spline. The numbers in parentheses are the degrees-of-freedom. Here, $\gamma =0.5$.

\[tab:conv\_js\_cubic\_1\]

Example 2
---------

In the second test we solved the time-fractional diffusion equation (\[eq:fracdiffeq\]) in the case when $$\begin{array}{lcl} f(t,x) & = & \displaystyle \frac{\pi t^{1-\gamma}}{2\Gamma(2-\gamma)} \left( \, {_1F_1}(1,2-\gamma,i\pi\,t) + \,{_1F_1}(1,2-\gamma,-i\pi\,t) \right) \, \sin(\pi\,x) \\ \\ & + & \pi^2 \, \sin(\pi\,t) \, \sin(\pi\,x)\,, \end{array}$$ where $_1F_1(\alpha,\beta,z)$ is Kummer's confluent hypergeometric function, defined as $$_1F_1(\alpha,\beta, z) = \frac {\Gamma(\beta)}{\Gamma(\alpha)} \, \sum_{k\in \NN_0} \, \frac {\Gamma(\alpha+k)}{\Gamma(\beta+k)\, k!} \, z^k\,, \qquad \alpha \in \RR\,, \quad -\beta \notin \NN_0\,,$$ where $\NN_0 = \NN \cup \{0\}$ (cf. [@AS65 Chapter 13]). In this case the exact solution is $$u(t,x)=\sin(\pi t)\,\sin(\pi x).$$ We performed the same set of numerical tests as in Example 1.
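Two ingredients of this example are easy to sanity-check numerically: the Kummer series, evaluated here with a term-ratio recurrence rather than explicit Gamma functions, and the $L_2([0,1]^2)$-norm of the exact solution, which equals $1/2$. The sketch below is our own illustration (function names and tolerances are ours), not part of the authors' code.

```python
import math

def kummer_1f1(a, b, z, kmax=400):
    """Truncated Kummer series 1F1(a, b, z), built with the term-ratio
    recurrence t_{k+1} = t_k * (a + k) * z / ((b + k) * (k + 1)),
    which avoids evaluating large Gamma functions directly."""
    term, total = 1.0, 1.0
    for k in range(kmax):
        term = term * (a + k) * z / ((b + k) * (k + 1))
        total += term
        if abs(term) < 1e-16 * abs(total):
            break
    return total

# Known special cases: 1F1(1,1,z) = e^z and 1F1(1,2,z) = (e^z - 1)/z.
assert abs(kummer_1f1(1.0, 1.0, 0.7) - math.exp(0.7)) < 1e-12
assert abs(kummer_1f1(1.0, 2.0, 0.7) - (math.exp(0.7) - 1.0) / 0.7) < 1e-12

# Discrete L2([0,1]^2) norm of the exact solution u(t,x) = sin(pi t) sin(pi x),
# by tensor-product trapezoidal quadrature; the exact norm is 1/2.
n = 200
h = 1.0 / n
nodes = [i * h for i in range(n + 1)]
weights = [0.5 * h if i in (0, n) else h for i in range(n + 1)]
norm_sq = sum(
    wt * wx * (math.sin(math.pi * t) * math.sin(math.pi * x)) ** 2
    for wt, t in zip(weights, nodes)
    for wx, x in zip(weights, nodes)
)
assert abs(math.sqrt(norm_sq) - 0.5) < 1e-9
```

The same recurrence works unchanged for complex arguments, so it could also be used to tabulate the source term, which needs ${}_1F_1(1,2-\gamma,\pm i\pi t)$.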
The numerical solution $u_{s,j}(t,x)$ and the error $e_{s,j}(t,x)$ for $s=5$ and $j=6$ are displayed in Figure \[fig:numsol\_2\] for $\gamma = 0.5$. Figure \[fig:L2\_error\_2\] shows the $L_2$-norm of the error as a function of $s$ for $\beta$ ranging from 2 to 4 and $j=5$; the four panels in the figure refer to different values of the order of the fractional derivative. Tables \[tab:conv\_js\_fract\_2\]-\[tab:conv\_js\_cubic\_2\] report the $L_2$-norm of the error for different values of $j$ and $s$ for $\beta = 3.5$ and $\beta = 3$, respectively; the number of degrees-of-freedom is also reported.

Figure \[fig:L2\_error\_2\] shows that the error is higher than in the previous example, but it decreases as $s$ increases, with a behavior very similar to that observed in Example 1. The values of the error in Tables \[tab:conv\_js\_fract\_2\]-\[tab:conv\_js\_cubic\_2\] are approximately the same as in Tables \[tab:conv\_js\_fract\_1\]-\[tab:conv\_js\_cubic\_1\].

![Example 2: The numerical solution (left panel) and the error (right panel) when $j=6$ and $s=5$.[]{data-label="fig:numsol_2"}](Fig_NumSol_jGK6_sref6_beta3p5_gamma0p5_ex2.png "fig:"){width="45.00000%"} ![](Fig_Error_jGK6_sref6_beta3p5_gamma0p5_ex2.png "fig:"){width="45.00000%"}

![Example 2: The $L_2$-norm of the error as a function of $s$ for different values of $\gamma$ ($\gamma = 1, 0.75, 0.5, 0.25$). Each line corresponds to a spline of different degree: solid lines correspond to the polynomial splines; non-solid lines correspond to fractional splines.[]{data-label="fig:L2_error_2"}](Fig_L2Error_gam1_ex2.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p75_ex2.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p5_ex2.png "fig:"){width="45.00000%"} ![](Fig_L2Error_gam0p25_ex2.png "fig:"){width="45.00000%"}

   $s \backslash j$                3                4                5                6
  -------------------------------- ---------------- ---------------- ---------------- ----------------
  $\sharp V_j^{(\alpha)}([0,1])$   9                17               33               65
  5                                0.01938 (369)    0.00429 (697)    0.00111 (1353)   0.00042 (2665)
  6                                0.01809 (657)    0.00555 (1241)   0.00507 (2409)   0.00523 (4745)
  7                                0.01811 (1233)   0.01691 (2329)   0.01822 (4521)   0.01858 (8905)
  -------------------------------- ---------------- ---------------- ---------------- ----------------

  : Example 2: The $L_2$-norm of the error for increasing values of $s$ and $j$ for the fractional B-spline of degree $\beta=3.5$. The numbers in parentheses are the degrees-of-freedom. Here, $\gamma=0.5$.
\[tab:conv\_js\_fract\_2\]

   $s \backslash j$                3                4                5                6
  -------------------------------- ---------------- ---------------- ---------------- ----------------
  $\sharp V_j^{(\alpha)}([0,1])$   9                17               33               65
  5                                0.01909 (315)    0.00404 (595)    0.00102 (1155)   0.00063 (2275)
  6                                0.01810 (603)    0.00546 (1139)   0.00495 (2211)   0.00511 (4355)
  7                                0.01805 (1179)   0.01671 (2227)   0.01801 (4323)   0.01838 (8515)
  -------------------------------- ---------------- ---------------- ---------------- ----------------

  : Example 2: The $L_2$-norm of the error for increasing values of $s$ and $j$ for the cubic B-spline. The numbers in parentheses are the degrees-of-freedom. Here, $\gamma=0.5$.

\[tab:conv\_js\_cubic\_2\]

Conclusion {#sec:concl}
==========

We proposed a fractional spline collocation-Galerkin method to solve the time-fractional diffusion equation. The novelty of the method lies in the use of fractional spline spaces as approximating spaces, so that the fractional derivative of the approximating function can be evaluated easily by an explicit differentiation rule that involves the generalized finite difference operator. The numerical tests show that the method achieves good accuracy, so that it can be used effectively to solve fractional differential problems. The numerical instabilities arising in the fractional basis when $s$ increases can be reduced following the approach in [@GPP04], which allows us to construct stable bases on the interval. Moreover, the ill-conditioning of the linear system (\[colllinearsys\]) can be reduced by using iterative methods in Krylov spaces, such as the method proposed in [@CPSV17]. Finally, we notice that, following the procedure given in [@GPP04], fractional wavelet bases on a finite interval can be constructed, so that the proposed method can be generalized to fractional wavelet approximating spaces.

[10]{} Milton Abramowitz and Irene A. Stegun. , volume 55. Dover Publications, 1965. Dumitru Baleanu, Kai Diethelm, Enrico Scalas, and Juan J. Trujillo. Fractional calculus: Models and numerical methods. , 3:10–16, 2012. Francesco Calabr[ò], Carla Manni, and Francesca Pitolli.
Computation of quadrature rules for integration with respect to refinable functions on assigned nodes. , 90:168–189, 2015. Daniela Calvetti, Francesca Pitolli, Erkki Somersalo, and Barbara Vantaggi. Bayes meets [K]{}rylov: preconditioning [CGLS]{} for underdetermined systems. , in press. Wolfgang Dahmen, Siegfried Pr[ö]{}ssdorf, and Reinhold Schneider. Wavelet approximation methods for pseudodifferential equations: I stability and convergence. , 215(1):583–620, 1994. Neville Ford, Jingyu Xiao, and Yubin Yan. A finite element method for time fractional partial differential equations. , 14(3):454–474, 2011. Walter Gautschi, Laura Gori, and Francesca Pitolli. Gauss quadrature for refinable weight functions. , 8(3):249–257, 2000. Laura Gori, Laura Pezza, and Francesca Pitolli. Recent results on wavelet bases on the interval generated by [GP]{} refinable functions. , 51(4):549–563, 2004. Laura Gori and Francesca Pitolli. Refinable functions and positive operators. , 49(3):381–393, 2004. Rudolf Hilfer. . World Scientific, 2000. Francesco Mainardi. . World Scientific, 2010. Arvet Pedas and Enn Tamme. Numerical solution of nonlinear fractional differential equations by spline collocation methods. , 255:216–230, 2014. Laura Pezza and Francesca Pitolli. A multiscale collocation method for fractional differential problems. , 147:210–219, 2018. Igor Podlubny. , volume 198. Academic Press, 1998. Larry L. Schumaker. . Cambridge University Press, 2007. Hari Mohan Srivastava and Juan J. Trujillo. . Elsevier, 2006. Vasily E. Tarasov. . Springer Science & Business Media, 2011. Vidar Thomée. . Springer-Verlag, 2006. Michael Unser and Thierry Blu. Fractional splines and wavelets. , 42(1):43–67, 2000. Mohsen Zayernouri and George Em Karniadakis. Fractional spectral collocation method. , 36(1):A40–A62, 2014. [^1]: [*Dept. SBAI, University of Roma ”La Sapienza”*]{}, Via A. Scarpa 16, 00161 Roma, Italy. e-mail: [laura.pezza@sbai.uniroma1.it]{} [^2]: [*Dept. 
SBAI, University of Roma ”La Sapienza”*]{}, Via A. Scarpa 16, 00161 Roma, Italy. e-mail:
--- abstract: 'Unprecedentedly precise cosmic microwave background (CMB) data are expected from ongoing and near-future CMB Stage-III and IV surveys, which will yield reconstructed CMB lensing maps with effective resolution approaching several arcminutes. The small-scale CMB lensing fluctuations receive non-negligible contributions from nonlinear structure in the late-time density field. These fluctuations are not fully characterized by traditional two-point statistics, such as the power spectrum. Here, we use $N$-body ray-tracing simulations of CMB lensing maps to examine two higher-order statistics: the lensing convergence one-point probability distribution function (PDF) and peak counts. We show that these statistics contain significant information not captured by the two-point function, and provide specific forecasts for the ongoing Stage-III Advanced Atacama Cosmology Telescope (AdvACT) experiment. Considering only the temperature-based reconstruction estimator, we forecast 9$\sigma$ (PDF) and 6$\sigma$ (peaks) detections of these statistics with AdvACT. Our simulation pipeline fully accounts for the non-Gaussianity of the lensing reconstruction noise, which is significant and cannot be neglected. Combining the power spectrum, PDF, and peak counts for AdvACT will tighten cosmological constraints in the $\Omega_m$-$\sigma_8$ plane by $\approx 30\%$, compared to using the power spectrum alone.' author: - 'Jia Liu$^{1,2}$' - 'J. Colin Hill$^{2}$' - 'Blake D. 
Sherwin$^{3}$' - 'Andrea Petri$^{4}$' - 'Vanessa Böhm$^{5}$' - 'Zoltán Haiman$^{2,6}$' bibliography: - 'paper.bib' title: 'CMB Lensing Beyond the Power Spectrum: Cosmological Constraints from the One-Point PDF and Peak Counts' --- Introduction {#sec:intro} ============ After its first detection in cross-correlation nearly a decade ago [@Smith2007; @Hirata2008] and subsequent detection in auto-correlation five years ago [@das2011; @sherwin2011], weak gravitational lensing of the cosmic microwave background (CMB) is now reaching maturity as a cosmological probe [@Hanson2013; @Das2013; @PolarBear2014a; @PolarBear2014b; @BICEPKeck2016; @Story2014; @Ade2014; @vanEngelen2014; @vanEngelen2015; @planck2015xv]. On their way to the Earth, CMB photons emitted at redshift $z=1100$ are deflected by the intervening matter, producing new correlations in maps of CMB temperature and polarization anisotropies. Estimators based on these correlations can be applied to the observed anisotropy maps to reconstruct a noisy estimate of the CMB lensing potential [@Zaldarriaga1998; @Zaldarriaga1999; @HuOkamoto2002; @Okamoto2003]. CMB lensing can probe fundamental physical quantities, such as the dark energy equation of state and neutrino masses, through its sensitivity to the geometry of the universe and the growth of structure (see Refs. [@Lewis2006; @Hanson2010] for a review). In this paper, we study the non-Gaussian information stored in CMB lensing observations. The Gaussian approximation to the density field breaks down due to nonlinear evolution on small scales at late times. Thus, non-Gaussian statistics (i.e., statistics beyond the power spectrum) are necessary to capture the full information in the density field. Such work has been previously performed (theoretically and observationally) on weak gravitational lensing of galaxies, where galaxy shapes, instead of CMB temperature/polarization patterns, are distorted (hereafter “galaxy lensing”). 
Several research groups have found independently that non-Gaussian statistics can tighten cosmological constraints when they are combined with the two-point correlation function or angular power spectrum.[^1] Such non-Gaussian statistics have also been applied in the CMB context to the Sunyaev-Zel’dovich signal, including higher-order moments [@Wilson2012; @Hill2013; @Planck2013tSZ; @Planck2015tSZ], the bispectrum [@Bhattacharya2012; @Crawford2014; @Planck2013tSZ; @Planck2015tSZ], and the one-point probability distribution function (PDF) [@Hill2014b; @Planck2013tSZ; @Planck2015tSZ]. In all cases, substantial non-Gaussian information was found, yielding improved cosmological constraints. The motivation to study non-Gaussian statistics of CMB lensing maps is three-fold. First, the CMB lensing kernel is sensitive to structures at high redshift ($z\approx2.0$, compared to $z\approx0.4$ for typical galaxy lensing samples); hence CMB lensing non-Gaussian statistics probe early nonlinearity that is beyond the reach of galaxy surveys. Second, CMB lensing does not suffer from some challenging systematics that are relevant to galaxy lensing, including intrinsic alignments of galaxies, photometric redshift uncertainties, and shape measurement biases. Therefore, a combined analysis of galaxy lensing and CMB lensing will be useful to build a tomographic outlook on nonlinear structure evolution, as well as to calibrate systematics in both galaxy and CMB lensing surveys [@Liu2016; @Baxter2016; @Schaan2016; @Singh2016; @Nicola2016]. Finally, CMB lensing measurements have recently entered a regime of sufficient sensitivity and resolution to detect the (stacked) lensing signals of halos [@Madhavacheril2014; @Baxter2016; @Planck2015cluster]. This suggests that statistics sensitive to the nonlinear growth of structure, i.e., non-Gaussian statistics, will also soon be detectable. 
We demonstrate below that this is indeed the case, taking as a reference experiment the ongoing Advanced Atacama Cosmology Telescope (AdvACT) survey [@Henderson2016]. Non-Gaussian aspects of the CMB lensing field have recently attracted attention, both as a potential signal and a source of bias in CMB lensing power spectrum estimates. Considering the lensing non-Gaussianity as a signal, a recent analytical study of the CMB lensing bispectrum by Ref. [@Namikawa2016] forecasted its detectability to be 40$\sigma$ with a CMB Stage-IV experiment. Ref. [@Bohm2016] performed the first calculation of the bias induced in CMB lensing power spectrum estimates by the lensing bispectrum, finding non-negligible biases for Stage-III and IV CMB experiments. Refs. [@Pratten2016] and [@Marozzi2016] considered CMB lensing effects arising from the breakdown of the Born approximation, with the former study finding that post-Born terms substantially alter the predicted CMB lensing bispectrum, compared to the contributions from nonlinear structure formation alone. We emphasize that the $N$-body ray-tracing simulations used in this work naturally capture such effects — we do not use the Born approximation. However, we consider only the lensing potential $\phi$ or convergence $\kappa$ here (related by $\kappa = -\nabla^2 \phi/2$), leaving a treatment of the curl potential or image rotation for future work (Ref. [@Pratten2016] has demonstrated that the curl potential possesses non-trivial higher-order statistics). In a follow-up paper, the simulations described here are used to more precisely characterize CMB lensing power spectrum biases arising from the bispectrum and higher-order correlations [@Sherwin2016]. We consider the non-Gaussianity in the CMB lensing field as a potential signal. We use a suite of 46 $N$-body ray-tracing simulations to investigate two non-Gaussian statistics applied to CMB lensing convergence maps — the one-point PDF and peak counts. 
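The convergence–potential relation $\kappa = -\nabla^2 \phi/2$ quoted above is straightforward to verify on a periodic flat-sky grid, where the Laplacian becomes multiplication by $-\ell^2$ in Fourier space, so $\hat\kappa(\ellB) = \ell^2 \hat\phi(\ellB)/2$. The following sketch is our own illustration (the grid size and test wavenumbers are arbitrary), not part of the simulation pipeline:

```python
import numpy as np

n, L = 64, 2.0 * np.pi           # grid points and (arbitrary) box size in radians
x = np.arange(n) * (L / n)
X, Y = np.meshgrid(x, x, indexing="ij")

kx, ky = 3, 5                     # integer wavenumbers, so the mode is periodic
phi = np.cos(kx * X + ky * Y)     # toy lensing potential: a single plane wave

# kappa = -laplacian(phi)/2 via FFT: multiply by |l|^2 / 2 in Fourier space
lx = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
LX, LY = np.meshgrid(lx, lx, indexing="ij")
kappa = np.real(np.fft.ifft2(0.5 * (LX**2 + LY**2) * np.fft.fft2(phi)))

# Analytic answer for a plane wave: kappa = (kx^2 + ky^2)/2 * phi
kappa_exact = 0.5 * (kx**2 + ky**2) * phi
assert np.max(np.abs(kappa - kappa_exact)) < 1e-8
```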
We examine the deviation of the convergence PDF and peak counts from those of Gaussian random fields. We then quantify the power of these statistics to constrain cosmological models, compared with using the power spectrum alone.

The paper is structured as follows. We first introduce CMB lensing in Sec. \[sec:formalism\]. We then describe our simulation pipeline in Sec. \[sec:sim\] and analysis procedures in Sec. \[sec:analysis\]. We show our results for the power spectrum, PDF, peak counts, and the derived cosmological constraints in Sec. \[sec:results\]. We conclude in Sec. \[sec:conclude\].

CMB lensing formalism {#sec:formalism}
=====================

To lowest order, the lensing convergence ($\kappa$) is a weighted projection of the three-dimensional matter overdensity $\delta=\delta\rho/\bar{\rho}$ along the line of sight, $$\label{eq.kappadef} \kappa(\thetaB) = \int_0^{\infty} dz W(z) \delta(\chi(z)\thetaB, z),$$ where $\chi(z)$ is the comoving distance and the kernel $W(z)$ indicates the lensing strength at redshift $z$ for sources with a redshift distribution $p(z_s)=dn(z_s)/dz_s$. For CMB lensing, there is only one source plane at the last scattering surface $z_\star=1100$; therefore, $p(z_s)=\delta_D(z_s-z_\star)$, where $\delta_D$ is the Dirac delta function. For a flat universe, the CMB lensing kernel is $$\begin{aligned} W^{{\kappa_{\rm cmb}}}(z) &=& \frac{3}{2}\Omega_{m}H_0^2 \frac{(1+z)}{H(z)} \frac{\chi(z)}{c} \nonumber\\ &\times& \frac{\chi(z_\star)-\chi(z)}{\chi(z_\star)},\end{aligned}$$ where $\Omega_{m}$ is the matter density as a fraction of the critical density at $z=0$, $H(z)$ is the Hubble parameter at redshift $z$, with present-day value $H_0$, and $c$ is the speed of light. $W^{{\kappa_{\rm cmb}}}(z)$ peaks at $z\approx2$ for canonical cosmological parameters ($\Omega_{m}\approx0.3$ and $H_0\approx70$ km/s/Mpc, [@planck2015xiii]). Note that Eq.
(\[eq.kappadef\]) assumes the Born approximation, but our simulation approach described below does not — we implement full ray-tracing to calculate $\kappa$. Simulations {#sec:sim} =========== Our simulation procedure includes five main steps: (1) the design (parameter sampling) of cosmological models, (2) $N$-body simulations with Gadget-2,[^2] (3) ray-tracing from $z=0$ to $z=1100$ to obtain (noiseless) convergence maps using the Python code LensTools [@Petri2016],[^3] (4) lensing simulated CMB temperature maps by the ray-traced convergence field, and (5) reconstructing (noisy) convergence maps from the CMB temperature maps after including noise and beam effects. Simulation design ----------------- We use an irregular grid to sample parameters in the $\Omega_m$-$\sigma_8$ plane, within the range of $\Omega_m \in [0.15, 0.7]$ and $\sigma_8 \in [0.5, 1.0]$, where $\sigma_8$ is the rms amplitude of linear density fluctuations on a scale of 8 Mpc/$h$ at $z=0$. An optimized irregular grid has a smaller average distance between neighboring points than a regular grid, and no parameters are duplicated. Hence, it samples the parameter space more efficiently. The procedure to optimize our sampling is described in detail in Ref. [@Petri2015]. The 46 cosmological models sampled are shown in Fig. \[fig:design\]. Other cosmological parameters are held fixed, with $H_0=72$ km/s/Mpc, dark energy equation of state $w=-1$, spectral index $n_s=0.96$, and baryon density $\Omega_b=0.046$. The design can be improved in the future by posterior sampling, where we first run only a few models to generate a low-resolution probability plane, and then sample more densely in the high-probability region. We select the model that is closest to the standard concordance values of the cosmological parameters (e.g., [@planck2015xiii]) as our fiducial model, with $\Omega_m=0.296$ and $\sigma_8=0.786$. 
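The statement in Sec. \[sec:formalism\] that $W^{\kappa_{\rm cmb}}(z)$ peaks near $z\approx2$ (which also motivates the snapshot redshift range used below) can be reproduced in a few lines. This is our own sketch under an assumed flat $\Lambda$CDM background close to the fiducial model; it is not part of the simulation pipeline:

```python
import numpy as np

# Assumed flat LCDM background, close to the paper's fiducial model
Om, H0, c = 0.296, 72.0, 2.998e5          # H0 in km/s/Mpc, c in km/s
z = np.linspace(0.0, 1100.0, 220001)      # fine grid out to last scattering
H = H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

# Comoving distance chi(z) = c * int_0^z dz'/H(z'), by cumulative trapezoid rule
dz = z[1] - z[0]
f = c / H
chi = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dz)))
chi_star = chi[-1]

# CMB lensing kernel of Sec. 2; dimensionless, integrated against delta in dz
W = 1.5 * Om * H0**2 * (1.0 + z) / H * (chi / c) * (chi_star - chi) / chi_star

z_peak = z[np.argmax(W)]
assert 1.0 < z_peak < 3.5   # the kernel indeed peaks near z ~ 2
```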
We create two sets of realizations for this model, one for covariance matrix estimation, and another one for parameter interpolation. This fiducial model is circled in red in Fig. \[fig:design\]. ![\[fig:design\] The design of cosmological parameters used in our simulations (46 models in total). The fiducial cosmology ($\Omega_m=0.296, \sigma_8=0.786$) is circled in red. The models for which AdvACT-like lensing reconstruction is performed are circled in blue. Other cosmological parameters are fixed at $H_0=72$ km/s/Mpc, $w=-1$, $n_s=0.96$, and $\Omega_b=0.046$.](plot/plot_design.pdf){width="48.00000%"} $N$-body simulation and ray-tracing {#sec:nbody} ----------------------------------- We use the public code Gadget-2 to run $N$-body simulations with $N_{\rm particles}=1024^3$ and box size = 600 Mpc/$h$ (corresponding to a mass resolution of $1.4\times10^{10} M_\odot/h$). To initialize each simulation, we first obtain the linear matter power spectrum with the Einstein-Boltzmann code CAMB.[^4] The power spectrum is then fed into the initial condition generator N-GenIC, which generates initial snapshots (the input of Gadget-2) of particle positions at $z=100$. The $N$-body simulation is then run from $z=100$ to $z=0$, and we record snapshots at every 144 Mpc$/h$ in comoving distance between $z\approx45$ and $z=0$. The choice of $z\approx45$ is determined by requiring that the redshift range covers 99% of the $W^{\kappa_{cmb}}D(z)$ kernel, where we use the linear growth factor $D(z)\sim 1/(1+z)$. We then use the Python code LensTools [@Petri2016] to generate CMB lensing convergence maps. We first slice the simulation boxes to create potential planes (3 planes per box, 200 Mpc/$h$ in thickness), where particle density is converted into gravitational potential using the Poisson equation. We track the trajectories of 4096$^2$ light rays from $z=0$ to $z=1100$, where the deflection angle and convergence are calculated at each potential plane. 
This procedure automatically captures so-called “post-Born” effects, as we never assume that the deflection angle is small or that the light rays follow unperturbed geodesics.[^5] Finally, we create 1,000 convergence map realizations for each cosmology by randomly rotating/shifting the potential planes [@Petri2016b]. For the fiducial cosmology only, we generate 10,000 realizations for the purpose of estimating the covariance matrix. The convergence maps are 2048$^2$ pixels and 12.25 deg$^2$ in size, with square pixels of side length 0.1025 arcmin. The maps generated at this step correspond to the physical lensing convergence field only, i.e., they have no noise from CMB lensing reconstruction. Therefore, they are labeled as “noiseless” in the following sections and figures. ![\[fig:theory\_ps\] Comparison of the CMB lensing convergence power spectrum from the HaloFit model and that from our simulation (1024$^3$ particles, box size 600 Mpc/$h$, map size 12.25 deg$^2$), for our fiducial cosmology. We also show the prediction from linear theory. Error bars are the standard deviation of 10,000 realizations.](plot/plot_theory_comparison.pdf){width="48.00000%"} We test the power spectra from our simulated maps against standard theoretical predictions. Fig. \[fig:theory\_ps\] shows the power spectrum from our simulated maps versus that from the HaloFit model [@Smith2003; @Takahashi2012] for our fiducial cosmology. We also show the linear-theory prediction, which deviates from the nonlinear HaloFit result at $\ell \gtrsim 700$. The simulation error bars are estimated using the standard deviation of 10,000 realizations. 
The simulated and (nonlinear) theoretical results are consistent within the error bars for multipoles $\ell<2,000$, which is sufficient for this work, as current and near-future CMB lensing surveys are limited to roughly this $\ell$ range due to their beam size and noise level (the filtering applied in our analysis below effectively removes all information on smaller angular scales). We find similar consistency between theory and simulation for the other 45 simulated models. We test the impact of particle resolution using a smaller box of 300 Mpc/$h$, while keeping the same number of particles (i.e. 8 times higher resolution), and obtain excellent agreement at scales up to $\ell=3,000$. The lack of power on large angular scales is due to the limited size of our convergence maps, while the missing power on small scales is due to our particle resolution. On very small scales ($\ell \gtrsim 5 \times 10^4$), excess power due to finite-pixelization shot noise arises, but this effect is negligible on the scales considered in our analysis. CMB lensing reconstruction {#sec:recon} -------------------------- ![image](plot/plot_maps.pdf){width="\textwidth"} In order to obtain CMB lensing convergence maps with realistic noise properties, we generate lensed CMB temperature maps and reconstruct noisy estimates of the convergence field. First, we generate Gaussian random field CMB temperature maps based on a $\Lambda$CDM concordance model temperature power spectrum computed with CAMB. We compute deflection field maps from the ray-traced convergence maps described in the previous sub-section, after applying a filter that removes power in the convergence maps above $ \ell \approx 4,000$.[^6] These deflection maps are then used to lens the simulated primary CMB temperature maps. The lensing simulation procedure is described in detail in Ref. [@Louis2013]. 
After obtaining the lensed temperature maps, we apply instrumental effects consistent with specifications for the ongoing AdvACT survey [@Henderson2016]. In particular, the maps are smoothed with a FWHM $=1.4$ arcmin beam, and Gaussian white noise of amplitude 6$\mu$K-arcmin is then added. We subsequently perform lensing reconstruction on these beam-convolved, noisy temperature maps using the quadratic estimator of Ref. [@HuOkamoto2002], but with the replacement of unlensed with lensed CMB temperature power spectra in the filters, which gives an unbiased reconstruction to higher order [@Hanson2010]. The final result is a noisy estimate of the CMB lensing convergence field, with 1,000 realizations for each cosmological model (10,000 for the fiducial model). We consider only temperature-based reconstruction in this work, leaving polarization estimators for future consideration. The temperature estimator is still expected to contribute more significantly than the polarization to the signal-to-noise for Stage-III CMB experiments like AdvACT, but polarization will dominate for Stage-IV (via $EB$ reconstruction). For the AdvACT-like experiment considered here, including polarization would increase the predicted signal-to-noise on the lensing power spectrum by $\approx 35$%. More importantly, polarization reconstruction allows the lensing field to be mapped out to smaller scales than temperature reconstruction [@HuOkamoto2002], and is more immune to foreground-related biases at high-$\ell$ [@vanEngelen2014b]. Thus, it could prove extremely useful for higher-order CMB lensing statistics, which are sourced by non-Gaussian structure on small scales. Clearly these points are worthy of future analysis, but we restrict this work to temperature reconstruction for simplicity. In addition to the fiducial model, we select the nearest eight points in the sampled parameter space (points circled in blue in Fig. \[fig:design\]) for the reconstruction analysis. 
We determine this selection by first reconstructing the nearest models in parameter space, and then broadening the sampled points until the interpolation is stable and the forecasted contours (see Sec. \[sec:constraints\]) are converged for AdvACT-level noise. At this noise level, the other points in model space are sufficiently distant to contribute negligibly to the forecasted contours. In total, nine models are used to derive parameter constraints from the reconstructed, noisy maps. For completeness, we perform a similar convergence test using forecasted constraints from the noiseless maps, finding excellent agreement between contours derived using all 46 models and using only these nine models. In Fig. \[fig:sample\_maps\], we show an example of a convergence map from the fiducial cosmology before (“noiseless”) and after (“noisy”) reconstruction. Prominent structures seen in the noiseless maps remain obvious in the reconstructed, noisy maps. Gaussian random field --------------------- We also reconstruct a set of Gaussian random fields (GRF) in the fiducial model. We generate a set of GRFs using the average power spectrum of the noiseless $\kappa$ maps. We then lens simulated CMB maps using these GRFs, following the same procedure as outlined above, and subsequently perform lensing reconstruction, just as for the reconstructed $N$-body $\kappa$ maps. These noisy GRF-only reconstructions allow us to examine the effect of reconstruction (in particular the non-Gaussianity of the reconstruction noise itself), as well as to determine the level of non-Gaussianity in the noisy $\kappa$ maps. Interpolation ------------- ![\[fig:interp\] Fractional differences between interpolated and “true” results for the fiducial power spectrum (top), PDF (middle), and peak counts (bottom). 
Here, we have built the interpolator using results for the other 45 cosmologies, and then compared the interpolated prediction at the fiducial parameter values to the actual simulated results for the fiducial cosmology. The error bars are scaled by $1/\sqrt{N_{\rm sims}}$, where the number of simulations $N_{\rm sims}=1,000$. The agreement for all three statistics is excellent.](plot/plot_interp.pdf){width="48.00000%"} To build a model at points where we do not have simulations, we interpolate from the simulated points in parameter space using the Clough-Tocher interpolation scheme [@alfeld1984; @farin1986], which triangulates the input points and then minimizes the curvature of the interpolating surface; the interpolated points are guaranteed to be continuously differentiable. In Fig. \[fig:interp\], we show a test of the interpolation using the noiseless $\kappa$ maps: we build the interpolator using all of the simulated cosmologies except for the fiducial model (i.e., 45 cosmologies), and then compare the interpolated results at the fiducial parameter values with the true, simulated results for that cosmology. The agreement for all three statistics is excellent, with deviations $\lesssim$ few percent (and well within the statistical precision). Finally, to check the robustness of the interpolation scheme, we also run our analysis using linear interpolation, and obtain consistent results.[^7] Analysis {#sec:analysis} ======== In this section, we describe the analysis of the simulated CMB lensing maps, including the computation of the power spectrum, peak counts, and PDF, and the likelihood estimation for cosmological parameters. These procedures are applied in the same way to the noiseless and noisy (reconstructed) maps. 
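The Clough–Tocher scheme described in the previous subsection is available in SciPy as `scipy.interpolate.CloughTocher2DInterpolator`. A minimal sketch with made-up scattered samples (our own illustration, not the paper's interpolation code) exploits the fact that the piecewise-cubic, $C^1$ interpolant reproduces linear data exactly inside the convex hull of the input points:

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(size=(60, 2))             # scattered (Omega_m, sigma_8)-like samples
vals = 2.0 * pts[:, 0] + 3.0 * pts[:, 1]    # a linear toy "observable"

interp = CloughTocher2DInterpolator(pts, vals)

# A C1 piecewise-cubic interpolant reproduces linear data exactly
# (up to round-off) inside the convex hull of the sample points.
q = np.array([[0.5, 0.5], [0.4, 0.6]])
expected = 2.0 * q[:, 0] + 3.0 * q[:, 1]
assert np.allclose(interp(q), expected, atol=1e-8)
```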
Power spectrum, PDF, and peak counts
------------------------------------

To compute the power spectrum, we first estimate the two-dimensional (2D) power spectrum of CMB lensing maps ($M_{\kappa}$) using $$\begin{aligned} \label{eq: ps2d} C^{\kappa \kappa}(\ellB) = \hat M_{\kappa}(\ellB)^*\hat M_{\kappa}(\ellB) \,,\end{aligned}$$ where $\ellB$ is the 2D multipole with components $\ell_1$ and $\ell_2$, $\hat M_{\kappa}$ is the Fourier transform of $M_{\kappa}$, and the asterisk denotes complex conjugation. We then average over all the pixels within each $|\ellB|\in[\ell-\Delta\ell, \ell+\Delta\ell)$ bin, for 20 log-spaced bins in the range of $100<\ell<2,000$, to obtain the one-dimensional power spectrum. The one-point PDF is the number of pixels with values between \[$\kappa-\Delta\kappa$, $\kappa+\Delta\kappa$) as a function of $\kappa$. We use 50 linear bins with edges listed in Table \[tab: bins\], and normalize the resulting PDF such that its integral is unity. The PDF is a simple observable (a histogram of the data), but captures the amplitude of all (zero-lag) higher-order moments in the map. Thus, it provides a potentially powerful characterization of the non-Gaussian information. Peaks are defined as local maxima in a $\kappa$ map. In a pixelized map, they are pixels with values higher than those of the surrounding 8 (square) pixels. Similar to cluster counts, peak counts are sensitive to the most nonlinear structures in the Universe. For galaxy lensing, they have been found to be associated with halos along the line of sight, both in simulations [@Yang2011] and in observations [@LiuHaiman2016]. We record peaks on smoothed $\kappa$ maps, in 25 linearly spaced bins with edges listed in Table \[tab: bins\].
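The binned power spectrum of Eq. (\[eq: ps2d\]) and the 8-neighbor peak finder are simple to implement. The following is a minimal flat-sky numpy sketch of our own (not the analysis code); the Fourier normalization convention is an assumption, and the PDF itself is just `np.histogram` with the bin edges of Table \[tab: bins\].

```python
import numpy as np

def azimuthally_binned_power(kappa, side_deg, lmin=100, lmax=2000, n_bins=20):
    """Estimate C(ell) from a square flat-sky map: |FFT|^2 averaged in
    log-spaced annuli of |ell| (Eq. ps2d followed by azimuthal binning)."""
    n = kappa.shape[0]
    side_rad = np.deg2rad(side_deg)
    # approximate the continuous Fourier transform, then divide by the map area
    fk = np.fft.fft2(kappa) * (side_rad / n) ** 2
    p2d = np.abs(fk) ** 2 / side_rad ** 2
    freq = 2.0 * np.pi * np.fft.fftfreq(n, d=side_rad / n)  # flat-sky multipoles
    ell = np.sqrt(freq[:, None] ** 2 + freq[None, :] ** 2)
    edges = np.logspace(np.log10(lmin), np.log10(lmax), n_bins + 1)
    cl = np.array([p2d[(ell >= lo) & (ell < hi)].mean()
                   for lo, hi in zip(edges[:-1], edges[1:])])
    return np.sqrt(edges[:-1] * edges[1:]), cl  # geometric bin centers, spectrum

def count_peaks(kappa, bin_edges):
    """Histogram of local maxima: pixels strictly above all 8 neighbours."""
    c = kappa[1:-1, 1:-1]
    is_peak = np.ones_like(c, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            is_peak &= c > kappa[1 + di:kappa.shape[0] - 1 + di,
                                 1 + dj:kappa.shape[1] - 1 + dj]
    return np.histogram(c[is_peak], bins=bin_edges)[0]
```

As a sanity check of the multipole convention, a single cosine mode injected into a map lands in the expected $|\ell|$ annulus, and an isolated spike registers as exactly one peak.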
  ----------------------- ------------------ -----------------------
  Smoothing scale         PDF bin edges      Peak counts bin edges
  (arcmin)                (50 linear bins)   (25 linear bins)
  0.5 (noiseless)         \[-0.50, +0.50\]   \[-0.18, +0.36\]
  1.0 (noiseless)         \[-0.22, +0.22\]   \[-0.15, +0.30\]
  2.0 (noiseless)         \[-0.18, +0.18\]   \[-0.12, +0.24\]
  5.0 (noiseless)         \[-0.10, +0.10\]   \[-0.09, +0.18\]
  8.0 (noiseless)         \[-0.08, +0.08\]   \[-0.06, +0.12\]
  1.0, 5.0, 8.0 (noisy)   \[-0.12, +0.12\]   \[-0.06, +0.14\]
  ----------------------- ------------------ -----------------------

  : \[tab: bins\] PDF and peak counts bin edges for each smoothing scale (the full-width-half-maximum of the Gaussian smoothing kernel applied to the maps).

Cosmological constraints
------------------------

We estimate cosmological parameter confidence level (C.L.) contours assuming a constant (cosmology-independent) covariance and Gaussian likelihood, $$\begin{aligned} P (\DB | \pB) = \frac{1}{2\pi|\CB|^{1/2}} \exp\left[-\frac{1}{2}(\DB-\muB)^T\CB^{-1}(\DB-\muB)\right],\end{aligned}$$ where $\DB$ is the data array, $\pB$ is the input parameter array, $\muB=\muB(\pB)$ is the interpolated model, and $\CB$ is the covariance matrix estimated using the fiducial cosmology, with determinant $|\CB|$. The correction factor for an unbiased inverse covariance estimator [@dietrich2010] is negligible in our case, with $(N_{\rm sims}-N_{\rm bins}-2)/(N_{\rm sims}-1) = 0.99$ for $N_{\rm sims} =10,000$ and $N_{\rm bins}=95$. We leave an investigation of the impact of cosmology-dependent covariance matrices and a non-Gaussian likelihood for future work. Due to the limited size of our simulated maps, we must rescale the final error contour by the ratio ($r_{\rm sky}$) of the simulated map size (12.25 deg$^2$) to the survey coverage (20,000 deg$^2$ for AdvACT). Two methods allow us to achieve this — rescaling the covariance matrix by $r_{\rm sky}$ before computing the likelihood plane, or rescaling the final C.L. contour by $r_{\rm sky}$.
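A minimal sketch of this likelihood evaluation, with the covariance rescaled by $r_{\rm sky}$ before the likelihood is computed (the first of the two methods just mentioned), might look as follows; the function and its defaults are ours, not the pipeline's.

```python
import numpy as np

def log_likelihood(data, model, cov, r_sky=12.25 / 20000.0):
    """Gaussian log-likelihood; the covariance estimated from the small
    simulated maps is rescaled by r_sky = (map area) / (survey area),
    shrinking the errors to the full survey coverage."""
    cov = np.asarray(cov) * r_sky
    d = np.asarray(data) - np.asarray(model)
    chi2 = d @ np.linalg.solve(cov, d)       # (D - mu)^T C^-1 (D - mu)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (chi2 + logdet + d.size * np.log(2.0 * np.pi))
```

Solving the linear system instead of inverting the covariance explicitly is numerically safer for nearly degenerate bins.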
These two methods yield consistent results. In our final analysis, we choose the former method. Results {#sec:results} ======= Non-Gaussianity in noiseless maps {#sec:non-gauss} --------------------------------- ![image](plot/plot_noiseless_PDF.pdf){width="48.00000%"} ![image](plot/plot_noiseless_PDF_diff.pdf){width="48.00000%"} ![image](plot/plot_noiseless_peaks.pdf){width="48.00000%"} ![image](plot/plot_noiseless_peaks_diff.pdf){width="48.00000%"} We show the PDF of noiseless $N$-body $\kappa$ maps (PDF$^\kappa$) for the fiducial cosmology in Fig. \[fig:noiseless\_PDF\], as well as that of GRF $\kappa$ maps (PDF$^{\rm GRF}$) generated from a power spectrum matching that of the $N$-body-derived maps. To better demonstrate the level of non-Gaussianity, we also show the fractional difference of PDF$^\kappa$ from PDF$^{\rm GRF}$. The error bars are scaled to AdvACT sky coverage (20,000 deg$^2$), though note that no noise is present here. The departure of PDF$^\kappa$ from the Gaussian case is significant for all smoothing scales examined (FWHM = 0.5–8.0 arcmin), with increasing significance towards smaller smoothing scales, as expected. The excess in high $\kappa$ bins is expected as the result of nonlinear gravitational evolution, echoed by the deficit in low $\kappa$ bins. We show the comparison of the peak counts of $N$-body $\kappa$ maps (${\rm N}^\kappa_{\rm peaks}$) versus that of GRFs (${\rm N}^{\rm GRF}_{\rm peaks}$) in Fig. \[fig:noiseless\_pk\]. The difference between ${\rm N}^\kappa_{\rm peaks}$ and ${\rm N}^{\rm GRF}_{\rm peaks}$ is less significant than the PDF, because the number of peaks is much smaller than the number of pixels — hence, the peak counts have larger Poisson noise. A similar trend of excess (deficit) of high (low) peaks is also seen in $\kappa$ peaks, when compared to the GRF peaks. 
Covariance matrix {#sec:covariance} ----------------- ![\[fig:corr\_mat\] Correlation coefficients determined from the full noiseless (top) and noisy (bottom) covariance matrices. Bins 1-20 are for the power spectrum (labeled “PS”); bins 21-70 are for the PDF; and bins 71-95 are for peak counts.](plot/corr_mat.pdf "fig:"){width="48.00000%"} ![\[fig:corr\_mat\] Correlation coefficients determined from the full noiseless (top) and noisy (bottom) covariance matrices. Bins 1-20 are for the power spectrum (labeled “PS”); bins 21-70 are for the PDF; and bins 71-95 are for peak counts.](plot/corr_mat_noisy.pdf "fig:"){width="48.00000%"} Fig. \[fig:corr\_mat\] shows the correlation coefficients of the total covariance matrix for both the noiseless and noisy maps, $$\begin{aligned} \rhoB_{ij} = \frac{\CB_{ij}}{\sqrt{\CB_{ii}\CB_{jj}}}\end{aligned}$$ where $i$ and $j$ denote the bin number, with the first 20 bins for the power spectrum, the next 50 bins for the PDF, and the last 25 bins for peak counts. In the noiseless case, the power spectrum shows little covariance in both its own off-diagonal terms ($<10\%$) and cross-covariance with the PDF and peaks ($<20\%$), hinting that the PDF and peaks contain independent information that is beyond the power spectrum. In contrast, the PDF and peak statistics show higher correlation in both self-covariance (i.e., the covariance within the sub-matrix for that statistic only) and cross-covariance, with strength almost comparable to the diagonal components. They both show strong correlation between nearby $\kappa$ bins (especially in the moderate-$|\kappa|$ regions), which arises from contributions due to common structures amongst the bins (e.g., galaxy clusters). Both statistics show anti-correlation between positive and negative $\kappa$ bins. The anti-correlation may be due to mass conservation — e.g., large amounts of mass falling into halos would result in large voids in surrounding regions. 
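The correlation coefficients follow directly from the sample covariance of the concatenated data vectors; a minimal numpy helper (our own, assuming realizations stacked as rows):

```python
import numpy as np

def correlation_matrix(samples):
    """rho_ij = C_ij / sqrt(C_ii C_jj) for realizations of the concatenated
    data vector (power spectrum + PDF + peak counts), shape (n_sims, n_bins)."""
    C = np.cov(samples, rowvar=False)
    sigma = np.sqrt(np.diag(C))
    return C / np.outer(sigma, sigma)
```

By construction the diagonal is unity and perfectly (anti-)correlated bins give $\pm 1$.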
In the noisy case, the off-diagonal terms are generally smaller than in the noiseless case. Moreover, the anti-correlation seen previously between the far positive and negative $\kappa$ tails in the PDF is now a weak positive correlation — we attribute this difference to the complex non-Gaussianity of the reconstruction noise. Interestingly, the self-covariance of the peak counts is significantly reduced compared to the noiseless case, while the self-covariance of the PDF persists to a reasonable degree. Effect of reconstruction noise {#sec:recon_noise} ------------------------------ ![\[fig:recon\] We demonstrate the effect of reconstruction noise on the power spectrum (top), the PDF (middle), and peak counts (bottom) by using Gaussian random field $\kappa$ maps (rather than $N$-body-derived maps) as input to the reconstruction pipeline. The noiseless (solid curves) and noisy/reconstructed (dashed curves) statistics are shown. All maps used here have been smoothed with a Gaussian kernel of FWHM $= 8$ arcmin.](plot/plot_reconstruction.pdf){width="48.00000%"} To disentangle the effect of reconstruction noise from that of nonlinear structure growth, we compare the three statistics before (noiseless) and after (noisy) reconstruction, using only the GRF $\kappa$ fields. Fig. \[fig:recon\] shows the power spectra, PDFs, and peak counts for both the noiseless (solid curves) and noisy (dashed curves) GRFs, all smoothed with a FWHM $= 8$ arcmin Gaussian window. The reconstructed power spectrum has significant noise on small scales, as expected (this is dominated by the usual “$N^{(0)}$” noise bias). The post-reconstruction PDF shows skewness, defined as $$\label{eq.skewdef} S=\left\langle \left( \frac {\kappa-\bar{\kappa}}{\sigma_\kappa}\right)^3 \right\rangle,$$ which is not present in the input GRFs. In other words, the reconstructed maps have a non-zero three-point function, even though the input GRF $\kappa$ maps in this case do not. 
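Eq. (\[eq.skewdef\]) is the standardized third moment of the map, which is a one-line estimator in numpy (a sketch of our own):

```python
import numpy as np

def skewness(kappa):
    """Standardized third moment of the map, S = <((k - <k>)/sigma)^3>."""
    k = np.asarray(kappa, dtype=float).ravel()
    return np.mean(((k - k.mean()) / k.std()) ** 3)
```

A symmetric distribution gives $S=0$, a long positive tail gives $S>0$, and a long negative tail gives $S<0$.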
While this may seem surprising at first, we recall that the three-point function of the reconstructed map corresponds to a six-point function of the CMB temperature map (in the quadratic estimator formalism). Even for a Gaussian random field, the six-point function contains non-zero Wick contractions (those that reduce to products of two-point functions). Propagating such terms into the three-point function of the quadratic estimator for $\kappa$, we find that they do not cancel to zero. This result is precisely analogous to the usual “$N^{(0)}$ bias” on the CMB lensing power spectrum, in which the two-point function of the (Gaussian) primary CMB temperature gives a non-zero contribution to the temperature four-point function. The result in Fig. \[fig:recon\] indicates that the similar PDF “$N^{(0)}$ bias” contains a negative skewness (in addition to non-zero kurtosis and higher moments). While it should be possible to derive this result analytically, we defer the full calculation to future work. If we filter the reconstructed $\kappa$ maps with a large smoothing kernel, the skewness in the reconstructed PDF is significantly decreased (see Fig. \[fig:skew\]). We briefly investigate the PDF of the Planck 2015 CMB lensing map [@planck2015xv] and do not see clear evidence of such skewness — we attribute this to the low effective resolution of the Planck map (FWHM $\sim$ few degrees). Finally, we note that a non-zero three-point function of the reconstruction noise could potentially alter the forecasted $\kappa$ bispectrum results of Ref. [@Namikawa2016] (where the reconstruction noise was taken to be Gaussian). The non-Gaussian properties of the small-scale reconstruction noise were noted in Ref. [@HuOkamoto2002], who pointed out that the quadratic estimator at high-$\ell$ is constructed from progressively fewer arcminute-scale CMB fluctuations. 
Similarly, the $\kappa$ peak count distribution also displays skewness after reconstruction, although it is less dramatic than that seen in the PDF. The peak of the distribution shifts to a higher $\kappa$ value due to the additional noise in the reconstructed maps. We note that the shape of the peak count distribution becomes somewhat rough when large smoothing kernels are applied to the maps, due to the small number of peaks present in this situation (e.g., $\approx 29$ peaks in a 12.25 deg$^2$ map with FWHM = 8 arcmin Gaussian window). Non-Gaussianity in reconstructed maps {#sec:non-gauss_recon} ------------------------------------- ![image](plot/plot_noisy_PDF_morebins.pdf){width="48.00000%"} ![image](plot/plot_noisy_PDF_filtered_morebins.pdf){width="48.00000%"} ![image](plot/plot_noisy_peaks_morebins.pdf){width="48.00000%"} ![image](plot/plot_noisy_peaks_filtered_morebins.pdf){width="48.00000%"} We show the PDF and peak counts of the reconstructed $\kappa$ maps in Figs. \[fig:noisyPDF\] and \[fig:noisypk\], respectively. The left panels of these figures show the results using maps with an 8 arcmin Gaussian smoothing window. We further consider a Wiener filter, which is often used to filter out noise based on some known information in a signal (i.e., the noiseless power spectrum in our case). The right panels show the Wiener-filtered results, where we inverse-variance weight each pixel in Fourier space, i.e., each Fourier mode is weighted by the ratio of the noiseless power spectrum to the noisy power spectrum (c.f. Fig. \[fig:recon\]), $$\begin{aligned} f^{\rm Wiener} (\ell) = \frac{C_\ell^{\rm noiseless}}{C_\ell^{\rm noisy}} \,.\end{aligned}$$ Compared to the noiseless results shown in Figs. \[fig:noiseless\_PDF\] and \[fig:noiseless\_pk\], the differences between the PDF and peaks from the $N$-body-derived $\kappa$ maps and those from the GRF-derived $\kappa$ maps persist, but with less significance. 
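Applying the Wiener weight mode-by-mode in Fourier space can be sketched as follows; this is a simplified flat-sky version of our own, and interpolating the binned 1D spectra onto the 2D $\ell$ grid is our choice, not necessarily the pipeline's.

```python
import numpy as np

def wiener_filter(kappa, side_deg, ell_bins, cl_noiseless, cl_noisy):
    """Weight each Fourier mode by C_ell^noiseless / C_ell^noisy; modes
    outside the binned ell range are zeroed."""
    n = kappa.shape[0]
    side_rad = np.deg2rad(side_deg)
    freq = 2.0 * np.pi * np.fft.fftfreq(n, d=side_rad / n)
    ell = np.sqrt(freq[:, None] ** 2 + freq[None, :] ** 2)
    weight = np.interp(ell, ell_bins, cl_noiseless / cl_noisy,
                       left=0.0, right=0.0)
    return np.real(np.fft.ifft2(np.fft.fft2(kappa) * weight))
```

With unit weights the map passes through unchanged, which is a convenient check of the Fourier round trip.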
For the Wiener-filtered maps, the deviations of the $N$-body-derived $\kappa$ statistics from the GRF case are 9$\sigma$ (PDF) and 6$\sigma$ (peaks), where we derived the significances using the simulated covariance from the $N$-body maps [^8]. These deviations capture the influence of both nonlinear evolution and post-Born effects. ![\[fig:skew\] Top panel: the skewness of the noiseless (triangles) and reconstructed, noisy (diamonds: $N$-body $\kappa$ maps; circles: GRF) PDFs. Bottom panel: the fractional difference between the skewness of the reconstructed $N$-body $\kappa$ and the reconstructed GRF. The error bars are for our map size (12.25 deg$^2$), and are only shown in the top panel for clarity.](plot/plot_skewness3.pdf){width="48.00000%"} While the differences between the $N$-body and GRF cases in Figs. \[fig:noisyPDF\] and \[fig:noisypk\] are clear, understanding their detailed structure is more complex. First, note that the GRF cases exhibit the skewness discussed in Sec. \[sec:recon\_noise\], which arises from the reconstruction noise itself. We show the skewness of the reconstructed PDF (for both the $N$-body and GRF cases) compared with that of the noiseless ($N$-body) PDF for various smoothing scales in Fig. \[fig:skew\]. The noiseless $N$-body maps are positively skewed, as physically expected. The reconstructed, noisy maps are negatively skewed, for both the $N$-body and GRF cases. However, the reconstructed $N$-body results are less negatively skewed than the reconstructed GRF results (bottom panel of Fig. \[fig:skew\]), presumably because the $N$-body PDF (and peaks) contain contributions from the physical skewness, which is positive (see Figs. \[fig:noiseless\_PDF\] and \[fig:noiseless\_pk\]). However, the physical skewness is not large enough to overcome the negative “$N^{(0)}$”-type skewness coming from the reconstruction noise. We attribute the somewhat-outlying point at FWHM $=8$ arcmin in the bottom panel of Fig. 
\[fig:skew\] to a noise fluctuation, as the number of pixels at this smoothing scale is quite low (the deviation is consistent with zero). The decrease in $|S|$ between the FWHM $=2$ arcmin and 1 arcmin cases in the top panel of Fig. \[fig:skew\] for the noisy maps is due to the large increase in $\sigma_{\kappa}$ between these smoothing scales, as the noise is blowing up on small scales. The denominator of Eq. (\[eq.skewdef\]) thus increases dramatically, compared to the numerator. Comparisons between the reconstructed PDF in the $N$-body case and GRF case are further complicated by the fact that higher-order “biases” arise due to the reconstruction. For example, the skewness of the reconstructed $N$-body $\kappa$ receives contributions from many other terms besides the physical skewness and the “$N^{(0)}$ bias” described above — there will also be Wick contractions involving combinations of two- and four-point functions of the CMB temperature and $\kappa$ (and perhaps an additional bias coming from a different contraction of the three-point function of $\kappa$, analogous to the “$N^{(1)}$” bias for the power spectrum [@Hanson2011]). So the overall “bias” on the reconstructed skewness will differ from that in the simple GRF case. This likely explains why we do not see an excess of positive $\kappa$ values over the GRF case in the PDFs shown in Fig. \[fig:noisyPDF\]. While this excess is clearly present in the noiseless case (Fig. \[fig:noiseless\_PDF\]), and it matches physical intuition there, the picture in the reconstructed case is not simple, because there is no guarantee that the reconstruction biases in the $N$-body and GRF cases are exactly the same. Thus, a comparison of the reconstructed $N$-body and GRF PDFs contains a mixture of the difference in the biases and the physical difference that we expect to see. Similar statements hold for comparisons of the peak counts. 
Clearly, a full accounting of all such individual biases would be quite involved, but the key point here is that all these effects are fully present in our end-to-end simulation pipeline. While an analytic understanding would be helpful, it is not necessary for the forecasts we present below. Cosmological constraints {#sec:constraints} ------------------------ Before we proceed to present the cosmological constraints from non-Gaussian statistics, it is necessary to do a sanity check by comparing the forecasted contour from our simulated power spectra to that from an analytic Fisher estimate, $$\begin{aligned} \FB_{\alpha \beta}=\frac{1}{2} {\rm Tr} \left\{\CB^{-1}_{\rm Gauss} \left[\left(\frac {\partial C_\ell}{\partial p_\alpha} \right) \left(\frac {\partial C_\ell}{\partial p_\beta}\right)^T+ \left(\alpha\leftrightarrow\beta \right) \right]\right\},\end{aligned}$$ where $\left\{ \alpha,\beta \right\} = \left\{ \Omega_m,\sigma_8 \right\}$ and the trace is over $\ell$ bins. $\CB_{\rm Gauss}$ is the Gaussian covariance matrix, with off-diagonal terms set to zero, and diagonal terms equal to the Gaussian variance, $$\begin{aligned} \sigma^2_\ell=\frac{2(C_\ell+N_\ell)^2}{f_{\rm sky}(2\ell+1)\Delta\ell}\end{aligned}$$ We compute the theoretical power spectrum $C_\ell$ using the HaloFit model [@Smith2003; @Takahashi2012], with fractional parameter variations of $+1$% to numerically obtain $\partial C_\ell / \partial p$. $N_\ell$ is the reconstruction noise power spectrum, originating from primordial CMB fluctuations and instrumental/atmospheric noise (note that we only consider white noise here). The sky fraction $f_{\rm sky}=0.485$ corresponds to the 20,000 deg$^2$ coverage expected for AdvACT. $(F^{-1}_{\alpha\alpha})^{\frac{1}{2}}$ is the marginalized error on parameter $\alpha$. Both theoretical and simulated contours use the power spectrum within the $\ell$ range of \[100, 2,000\]. The comparison is shown in Fig. \[fig:contour\_fisher\]. 
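For a diagonal Gaussian covariance, the Fisher matrix above reduces to a weighted sum over multipole bins, $F_{\alpha\beta} = \sum_\ell (\partial C_\ell/\partial p_\alpha)(\partial C_\ell/\partial p_\beta)/\sigma_\ell^2$. A minimal sketch of our own (the numerical inputs in the test are placeholders, not the survey configuration):

```python
import numpy as np

def fisher_matrix(cl, nl, dcl_dp, f_sky, ell, delta_ell):
    """F_ab = sum_l (dC_l/dp_a)(dC_l/dp_b) / sigma_l^2 with the Gaussian
    variance sigma_l^2 = 2 (C_l + N_l)^2 / (f_sky (2 l + 1) Delta_l)."""
    var = 2.0 * (cl + nl) ** 2 / (f_sky * (2.0 * ell + 1.0) * delta_ell)
    derivs = np.asarray(dcl_dp)              # shape (n_params, n_ell)
    return derivs @ np.diag(1.0 / var) @ derivs.T

def marginalized_errors(F):
    """(F^{-1})_aa^{1/2}: marginalized 1-sigma errors on each parameter."""
    return np.sqrt(np.diag(np.linalg.inv(F)))
```

The derivatives $\partial C_\ell/\partial p$ would come from finite differences of the theory spectrum under small parameter variations, as described in the text.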
The contour from full $N$-body simulations shows good agreement with the analytical Fisher contour. This result indicates that approximations made in current analytical CMB lensing power spectrum forecasts are accurate, in particular the neglect of non-Gaussian covariances from nonlinear growth. A comparison of the analytic and reconstructed power spectra will be presented in Ref. [@Sherwin2016]. ![\[fig:contour\_fisher\] 68% C.L. contours from an AdvACT-like CMB lensing power spectrum measurement. The excellent agreement between the simulated and analytic results confirms that non-Gaussian covariances arising from nonlinear growth and reconstruction noise do not strongly bias current analytic CMB lensing power spectrum forecasts (up to $\ell = 2,000$).](plot/plot_contour_fisher.pdf){width="48.00000%"} Fig. \[fig:contour\_noiseless\] shows contours derived using noiseless maps for the PDF and peak count statistics, compared with that from the noiseless power spectrum. We compare three different smoothing scales (1.0, 5.0, 8.0 arcmin), and find that smaller smoothing scales have stronger constraining power. However, even with the smallest smoothing scale (1.0 arcmin), the PDF contour is still significantly larger than that of the power spectrum. Peak counts using 1.0 arcmin smoothing show constraining power almost equivalent to that of the power spectrum. However, we note that 1.0 arcmin smoothing is not a fair comparison to the power spectrum with a cutoff at $\ell<2,000$, because in reality, the beam size and instrument noise are likely to smear out signals smaller than a few arcmin in scale (see below). At first, it may seem surprising that the PDF is not at least as constraining as the power spectrum in Fig. \[fig:contour\_noiseless\], since the PDF contains the information in the variance. However, the variance only captures an overall amplitude of the two-point function, whereas the power spectrum contains scale-dependent information.[^9] We illustrate this in Fig.
\[fig:cell\_diff\], where we compare the fiducial power spectrum to that with a 1% increase in $\Omega_m$ or $\sigma_8$ (while keeping other parameters fixed). While $\sigma_8$ essentially re-scales the power spectrum by a factor $\sigma_8^2$, apart from a steeper dependence at high-$\ell$ due to nonlinear growth, $\Omega_m$ has a strong shape dependence. This is related to the change in the scale of matter-radiation equality [@planck2015xv]. Thus, for a noiseless measurement, the shape of the power spectrum contains significant additional information about these parameters, which is not captured by a simple change in the overall amplitude of the two-point function. This is the primary reason that the power spectrum is much more constraining than the PDF in Fig. \[fig:contour\_noiseless\]. ![image](plot/plot_contour_noiseless_PDF_clough.pdf){width="48.00000%"} ![image](plot/plot_contour_noiseless_Peaks_clough.pdf){width="48.00000%"} ![\[fig:cell\_diff\] Fractional difference of the CMB lensing power spectrum after a 1% increase in $\Omega_m$ (thick solid line) or $\sigma_8$ (thin solid line), compared to the fiducial power spectrum. Other parameters are fixed at their fiducial values.](plot/plot_Cell_diff.pdf){width="48.00000%"} ![image](plot/plot_contour_noisy_PDF_clough.pdf){width="48.00000%"} ![image](plot/plot_contour_noisy_Peaks_clough.pdf){width="48.00000%"} ![\[fig:contour\_comb\] 68% C.L. contours derived using two combinations of the power spectrum, PDF, and peak counts, compared to using the power spectrum alone. Reconstruction noise corresponding to an AdvACT-like survey is included. The contours are scaled to AdvACT sky coverage of 20,000 deg$^2$.](plot/plot_contour_noisy_comb_clough.pdf){width="48.00000%"} Fig. \[fig:contour\_noisy\] shows contours derived using the reconstructed, noisy $\kappa$ maps. We show results for three different filters — Gaussian windows of 1.0 and 5.0 arcmin and the Wiener filter. 
The 1.0 arcmin contour is the worst of the three, as noise dominates at this scale. The 5.0 arcmin-smoothed and Wiener-filtered contours show similar constraining power. Using the PDF or peak counts alone, we do not achieve better constraints than using the power spectrum alone, but the parameter degeneracy directions for the statistics are slightly different. This is likely due to the fact that the PDF and peak counts probe non-linear structure, and thus they have a different dependence on the combination $\sigma_8(\Omega_m)^\gamma$ than the power spectrum does, where $\gamma$ specifies the degeneracy direction.

  Combination        $\Delta \Omega_m$   $\Delta \sigma_8 $
  ------------------ ------------------- --------------------
  PS only            0.0065              0.0044
  PDF + Peaks        0.0076              0.0035
  PS + PDF + Peaks   0.0045              0.0030

  : \[tab: constraints\] Marginalized constraints on $\Omega_m$ and $\sigma_8$ for an AdvACT-like survey from combinations of the power spectrum (PS), PDF, and peak counts, as shown in Fig. \[fig:contour\_comb\].

The error contour derived using all three statistics is shown in Fig. \[fig:contour\_comb\], where we use the 5.0 arcmin Gaussian smoothed maps. The one-dimensional marginalized errors are listed in Table \[tab: constraints\]. The combined contour shows moderate improvement ($\approx 30\%$ smaller error contour area) compared to the power spectrum alone. The improvement is due to the slightly different parameter degeneracy directions for the statistics, which break the $\sigma_8$-$\Omega_m$ degeneracy somewhat more effectively when combined. It is worth noting that we have not included information from external probes that constrain $\Omega_m$ (e.g., baryon acoustic oscillations), which can further break the $\Omega_m$-$\sigma_8$ degeneracy.

Conclusion {#sec:conclude}
==========

In this paper, we use $N$-body ray-tracing simulations to explore the additional information in CMB lensing maps beyond the traditional power spectrum.
In particular, we investigate the one-point PDF and peak counts (local maxima in the convergence map). We also apply realistic reconstruction procedures that take into account primordial CMB fluctuations and instrumental noise for an AdvACT-like survey, with sky coverage of 20,000 deg$^2$, noise level 6 $\mu$K-arcmin, and $1.4$ arcmin beam. Our main findings are: 1. We find significant deviations of the PDF and peak counts of $N$-body-derived $\kappa$ maps from those of Gaussian random field $\kappa$ maps, both in the noiseless and noisy reconstructed cases (see Figs. \[fig:noiseless\_PDF\], \[fig:noiseless\_pk\], \[fig:noisyPDF\], and \[fig:noisypk\]). For AdvACT, we forecast the detection of non-Gaussianity to be $\approx$ 9$\sigma$ (PDF) and 6$\sigma$ (peak counts), after accounting for the non-Gaussianity of the reconstruction noise itself. The non-Gaussianity of the noise has been neglected in previous estimates, but we show that it is non-negligible (Fig. \[fig:recon\]). 2. We confirm that current analytic forecasts for CMB lensing power spectrum constraints are accurate when confronted with constraints derived from our $N$-body pipeline that include the full non-Gaussian covariance (Fig. \[fig:contour\_fisher\]). 3. An improvement of $\approx 30\%$ in the forecasted $\Omega_m$-$\sigma_8$ error contour is seen when the power spectrum is combined with PDF and peak counts (assuming AdvACT-level noise), compared to using the power spectrum alone. The covariance between the power spectrum and the other two non-Gaussian statistics is relatively small (with cross-covariance $< 20\%$ of the diagonal components), meaning the latter is complementary to the power spectrum. 4. For noiseless $\kappa$ maps (i.e., ignoring primordial CMB fluctuations and instrumental/atmospheric noise), a smaller smoothing kernel can help extract the most information from the PDF and peak counts (Fig. \[fig:contour\_noiseless\]). 
For example, peak counts of 1.0 arcmin Gaussian smoothed maps alone can provide constraints as tight as those from the power spectrum. 5. We find non-zero skewness in the PDF and peak counts of reconstructed GRFs, which is absent from the input noiseless GRFs by definition. This skewness is the result of the quadratic estimator used for CMB lensing reconstruction from the temperature or polarization maps. Future forecasts for non-Gaussian CMB lensing statistics should include these effects, as we have here, or else the expected signal-to-noise could be overestimated. In this work, we have only considered temperature-based reconstruction estimators, but in the near future polarization-based estimators will have comparable (and, eventually, higher) signal-to-noise. Moreover, the polarization estimators allow the lensing field to be mapped out to smaller scales, which suggests that they could be even more useful for non-Gaussian statistics. In summary, there is rich information in CMB lensing maps that is not captured by two-point statistics, especially on small scales where nonlinear evolution is significant. In order to extract this information from future data from ongoing CMB Stage-III and near-future Stage-IV surveys, such as AdvACT, SPT-3G [@Benson2014], Simons Observatory[^10], and CMB-S4 [@Abazajian2015], non-Gaussian statistics must be studied and modeled carefully. We have shown that non-Gaussian statistics will already contain useful information for Stage-III surveys, which suggests that their role in Stage-IV analyses will be even more important. The payoff of these efforts could be significant, such as a quicker route to a neutrino mass detection. We thank Nick Battaglia, Francois Bouchet, Simone Ferraro, Antony Lewis, Mark Neyrinck, Emmanuel Schaan, and Marcel Schmittfull for useful discussions. We acknowledge helpful comments from an anonymous referee. JL is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1602663.
This work is partially supported by a Junior Fellowship from the Simons Foundation to JCH and a Simons Fellowship to ZH. BDS is supported by a Fellowship from the Miller Institute for Basic Research in Science at the University of California, Berkeley. This work is partially supported by NSF grant AST-1210877 (to ZH) and by a ROADS award at Columbia University. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF grant ACI-1053575. Computations were performed on the GPC supercomputer at the SciNet HPC consortium. SciNet is funded by the Canada Foundation for Innovation under the auspices of Compute Canada, the Government of Ontario, the Ontario Research Fund — Research Excellence, and the Univ. of Toronto. [^1]: For example, higher order moments [@Bernardeau1997; @Hui1999; @vanWaerbeke2001; @Takada2002; @Zaldarriaga2003; @Kilbinger2005; @Petri2015], three-point functions [@Takada2003; @Vafaei2010], bispectra [@Takada2004; @DZ05; @Sefusatti2006; @Berge2010], peak counts , Minkowski functionals [@Kratochvil2012; @Shirasakiyoshida2014; @Petri2013; @Petri2015], and Gaussianized power spectrum [@Neyrinck2009; @Neyrinck2014; @Yu2012]. [^2]: <http://wwwmpa.mpa-garching.mpg.de/gadget/> [^3]: <https://pypi.python.org/pypi/lenstools/> [^4]: <http://camb.info/> [^5]: While the number of potential planes could be a limiting factor in our sensitivity to these effects, we note that our procedure uses $\approx 40$-70 planes for each ray-tracing calculation (depending on the cosmology), which closely matches the typical number of lensing deflections experienced by a CMB photon. [^6]: We find that this filter is necessary for numerical stability (and also because our simulated $\kappa$ maps do not recover all structure on these small scales, as seen in Fig. \[fig:theory\_ps\]), but our results are unchanged for moderate perturbations to the filter scale. 
[^7]: Due to our limited number of models, linear interpolation is slightly more vulnerable to sampling artifacts than the Clough-Tocher method, because the linear method only utilizes the nearest points in parameter space. The Clough-Tocher method also uses the derivative information. Therefore, we choose Clough-Tocher for our analysis. [^8]: We note that the signal-to-noise ratios predicted here are comparable to the $\approx 7\sigma$ bispectrum prediction that would be obtained by rescaling the SPT-3G result from Table I of Ref. [@Pratten2016] to the AdvACT sky coverage (which is a slight overestimate given AdvACT’s higher noise level). The higher significance for the PDF found here could be due to several reasons: (i) additional contributions to the signal-to-noise for the PDF from higher-order polyspectra beyond the bispectrum; (ii) inaccuracy of the nonlinear fitting formula used in Ref. [@Pratten2016] on small scales, as compared to the N-body methods used here; (iii) reduced cancellation between the nonlinear growth and post-Born effects in higher-order polyspectra (for the bispectrum, these contributions cancel to a large extent, reducing the signal-to-noise [@Pratten2016]). [^9]: Note that measuring the PDF or peak counts for different smoothing scales can recover additional scale-dependent information as well. [^10]: <http://www.simonsobservatory.org/>
--- abstract: 'We continue our study of Cartan schemes and their Weyl groupoids. The results in this paper provide an algorithm to determine connected simply connected Cartan schemes of rank three, where the real roots form a finite irreducible root system. The algorithm terminates: Up to equivalence there are exactly 55 such Cartan schemes, and the number of corresponding real roots varies between $6$ and $37$. We identify those Weyl groupoids which appear in the classification of Nichols algebras of diagonal type.' address: - 'Michael Cuntz, Fachbereich Mathematik, Universität Kaiserslautern, Postfach 3049, D-67653 Kaiserslautern, Germany' - 'István Heckenberger, Philipps-Universität Marburg, Fachbereich Mathematik und Informatik, Hans-Meerwein-Straße, D-35032 Marburg, Germany' author: - 'M. Cuntz' - 'I. Heckenberger' bibliography: - 'quantum.bib' title: Finite Weyl groupoids of rank three --- Introduction ============ Root systems associated with Cartan matrices are widely studied structures in many areas of mathematics, see [@b-BourLie4-6] for the fundaments. The origins of the theory of root systems go back at least to the study of Lie groups by Lie, Killing and Cartan. The symmetry of the root system is commonly known as its Weyl group. Root systems associated with a family of Cartan matrices appeared first in connection with Lie superalgebras [@a-Kac77 Prop.2.5.6] and with Nichols algebras [@a-Heck06a], [@a-Heck08a]. The corresponding symmetry is not a group but a groupoid, and is called the Weyl groupoid of the root system. Weyl groupoids of root systems properly generalize Weyl groups. The nice properties of this more general structure have been the main motivation to develop an axiomatic approach to the theory, see [@a-HeckYam08], [@a-CH09a]. In particular, Weyl groupoids are generated by reflections and Coxeter relations, and they satisfy a Matsumoto type theorem [@a-HeckYam08]. 
To see more clearly the extent of generality it would be desirable to have a classification of finite Weyl groupoids.[^1] However, already the appearance of a large family of examples of Lie superalgebras and Nichols algebras of diagonal type indicated that a classification of finite Weyl groupoids is probably much more complicated than the classification of finite Weyl groups. Additionally, many of the usual classification tools are not available in this context because of the lack of the adjoint action and a positive definite bilinear form. In previous work, see [@a-CH09b] and [@p-CH09a], we have been able to determine all finite Weyl groupoids of rank two. The result of this classification is surprisingly nice: We found a close relationship to the theory of continued fractions and to cluster algebras of type $A$. The structure of finite rank two Weyl groupoids and the associated root systems has a natural characterization in terms of triangulations of convex polygons by non-intersecting diagonals. In particular, there are infinitely many such groupoids. At first view there is no reason to assume that the situation for finite Weyl groupoids of rank three would be much different from the rank two case. In this paper we give some theoretical indications which strengthen the opposite point of view. For example in Theorem \[cartan\_6\] we show that the entries of the Cartan matrices in a finite Weyl groupoid cannot be smaller than $-7$. Recall that for Weyl groupoids there is no lower bound for the possible entries of generalized Cartan matrices. Our main achievement in this paper is to provide an algorithm to classify finite Weyl groupoids of rank three. Our algorithm terminates within a short time, and produces a finite list of examples. In the appendix we list the root systems characterizing the Weyl groupoids of the classification: There are $55$ of them which correspond to pairwise non-isomorphic Weyl groupoids. 
The number of positive roots in these root systems varies between $6$ and $37$. Among our root systems are the usual root systems of type $A_3$, $B_3$, and $C_3$, but for most of the other examples we do not yet have an explanation. It is remarkable that the number $37$ has a particular meaning for simplicial arrangements in the real projective plane. An arrangement is the complex generated by a family of straight lines not forming a pencil. The vertices of the complex are the intersection points of the lines, the edges are the segments of the lines between two vertices, and the faces are the connected components of the complement of the set of lines generating the arrangement. An arrangement is called simplicial if all faces are triangles. Simplicial arrangements have been introduced in [@a-Melchi41]. The classification of simplicial arrangements in the real projective plane is an open problem. The largest known exceptional example is generated by $37$ lines. Grünbaum conjectures that the list given in [@a-Gruenb09] is complete. In our appendix we provide some data on our root systems which can be used to compare Grünbaum’s list with Weyl groupoids. There is an astonishing analogy between the two lists, but more work has to be done to explain the precise relationship. This would be desirable in particular since our classification of finite Weyl groupoids of rank three does not give any evidence for the range of solutions besides the explicit computer calculation. In order to ensure the termination of our algorithm, besides Theorem \[cartan\_6\] we use a weak convexity property of certain affine hyperplanes, see Theorem \[convex\_diff2\]: We can show that any positive root in an affine hyperplane “next to the origin” is either simple or can be written as the sum of a simple root and another positive root. 
Our algorithm finally becomes practicable by the use of Proposition \[pr:suminR\], which can be interpreted as another weak convexity property for affine hyperplanes. It is hard to say which of these theorems are the most valuable because avoiding any of them makes the algorithm impracticable (unless one has some replacement). The paper is organized as follows. We start with two sections proving the necessary theorems to formulate the algorithm: The results which do not require that the rank is three are in Section \[gen\_res\], the obstructions for rank three in Section \[rk3\_obst\]. We then describe the algorithm in the next section. Finally we summarize the resulting data and make some observations in the last section. **Acknowledgement.** We would like to thank B. M[ü]{}hlherr for pointing out to us the importance of the number $37$ for simplicial arrangements in the real projective plane. Cartan schemes and Weyl groupoids {#gen_res} ================================= We mainly follow the notation in [@a-CH09a; @a-CH09b]. The fundaments of the general theory have been developed in [@a-HeckYam08] using a somewhat different terminology. Let us start by recalling the main definitions. Let $I$ be a non-empty finite set and $\{{\alpha }_i\,|\,i\in I\}$ the standard basis of ${\mathbb{Z}}^I$. By [@b-Kac90 §1.1] a generalized Cartan matrix ${C}=({c}_{ij})_{i,j\in I}$ is a matrix in ${\mathbb{Z}}^{I\times I}$ such that 1. ${c}_{ii}=2$ and ${c}_{jk}\le 0$ for all $i,j,k\in I$ with $j\not=k$, 2. if $i,j\in I$ and ${c}_{ij}=0$, then ${c}_{ji}=0$. Let $A$ be a non-empty set, ${\rho }_i : A \to A$ a map for all $i\in I$, and ${C}^a=({c}^a_{jk})_{j,k \in I}$ a generalized Cartan matrix in ${\mathbb{Z}}^{I \times I}$ for all $a\in A$. The quadruple $${\mathcal{C}}= {\mathcal{C}}(I,A,({\rho }_i)_{i \in I}, ({C}^a)_{a \in A})$$ is called a *Cartan scheme* if 1. ${\rho }_i^2 = \id$ for all $i \in I$, 2. ${c}^a_{ij} = {c}^{{\rho }_i(a)}_{ij}$ for all $a\in A$ and $i,j\in I$. 
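As a minimal sanity check of the axioms (a standard observation, not spelled out above), a Cartan scheme with a single object is the same datum as a single generalized Cartan matrix:

```latex
% Take A = {a} and rho_i = id for all i in I.  Then (C1) and (C2) hold
% trivially, and the quadruple reduces to one generalized Cartan matrix:
\mathcal{C} = \mathcal{C}\bigl(I,\{a\},(\mathrm{id})_{i\in I},(C^a)\bigr),
\qquad
C^a = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}
\quad (\text{type } A_2,\ I=\{1,2\}).
```

In this degenerate case the Weyl groupoid defined below has a single object, and its morphisms form the ordinary Weyl group of $C^a$.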
Let ${\mathcal{C}}= {\mathcal{C}}(I,A,({\rho }_i)_{i \in I}, ({C}^a)_{a \in A})$ be a Cartan scheme. For all $i \in I$ and $a \in A$ define ${\sigma }_i^a \in \operatorname{Aut}({\mathbb{Z}}^I)$ by $$\begin{aligned} {\sigma }_i^a ({\alpha }_j) = {\alpha }_j - {c}_{ij}^a {\alpha }_i \qquad \text{for all $j \in I$.} \label{eq:sia} \end{aligned}$$ The *Weyl groupoid of* ${\mathcal{C}}$ is the category ${\mathcal{W}}({\mathcal{C}})$ such that ${\mathrm{Ob}}({\mathcal{W}}({\mathcal{C}}))=A$ and the morphisms are compositions of maps ${\sigma }_i^a$ with $i\in I$ and $a\in A$, where ${\sigma }_i^a$ is considered as an element in $\operatorname{Hom}(a,{\rho }_i(a))$. The category ${\mathcal{W}}({\mathcal{C}})$ is a groupoid in the sense that all morphisms are isomorphisms. The set of morphisms of ${\mathcal{W}}({\mathcal{C}})$ is denoted by $\operatorname{Hom}({\mathcal{W}}({\mathcal{C}}))$, and we use the notation $$\Homsfrom{a}=\mathop{\cup }_{b\in A}\operatorname{Hom}(a,b) \quad \text{(disjoint union)}.$$ For notational convenience we will often neglect upper indices referring to elements of $A$ if they are uniquely determined by the context. For example, the morphism ${\sigma }_{i_1}^{{\rho }_{i_2}\cdots {\rho }_{i_k}(a)} \cdots \s_{i_{k-1}}^{{\rho }_{i_k}(a)}{\sigma }_{i_k}^a\in \operatorname{Hom}(a,b)$, where $k\in {\mathbb{N}}$, $i_1,\dots,i_k\in I$, and $b={\rho }_{i_1}\cdots {\rho }_{i_k}(a)$, will be denoted by ${\sigma }_{i_1}\cdots {\sigma }_{i_k}^a$ or by ${\mathrm{id}}_b{\sigma }_{i_1}\cdots \s_{i_k}$. The cardinality of $I$ is termed the *rank of* ${\mathcal{W}}({\mathcal{C}})$. A Cartan scheme is called *connected* if its Weyl groupoid is connected, that is, if for all $a,b\in A$ there exists $w\in \operatorname{Hom}(a,b)$. The Cartan scheme is called *simply connected*, if $\operatorname{Hom}(a,a)=\{{\mathrm{id}}_a\}$ for all $a\in A$. Let ${\mathcal{C}}$ be a Cartan scheme. 
For all $a\in A$ let $${(R\re)^{a}}=\{ {\mathrm{id}}_a {\sigma }_{i_1}\cdots \s_{i_k}({\alpha }_j)\,|\, k\in {\mathbb{N}}_0,\,i_1,\dots,i_k,j\in I\}\subset {\mathbb{Z}}^I.$$ The elements of the set ${(R\re)^{a}}$ are called *real roots* (at $a$). The pair $({\mathcal{C}},({(R\re)^{a}})_{a\in A})$ is denoted by ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$. A real root ${\alpha }\in {(R\re)^{a}}$, where $a\in A$, is called positive (resp. negative) if ${\alpha }\in {\mathbb{N}}_0^I$ (resp. ${\alpha }\in -{\mathbb{N}}_0^I$). In contrast to real roots associated to a single generalized Cartan matrix, ${(R\re)^{a}}$ may contain elements which are neither positive nor negative. A good general theory, which is relevant for example for the study of Nichols algebras, can be obtained if ${(R\re)^{a}}$ satisfies additional properties. Let ${\mathcal{C}}={\mathcal{C}}(I,A,({\rho }_i)_{i\in I},({C}^a)_{a\in A})$ be a Cartan scheme. For all $a\in A$ let $R^a\subset {\mathbb{Z}}^I$, and define $m_{i,j}^a= |R^a \cap (\ndN_0 {\alpha }_i + \ndN_0 {\alpha }_j)|$ for all $i,j\in I$ and $a\in A$. We say that $${\mathcal{R}}= {\mathcal{R}}({\mathcal{C}}, (R^a)_{a\in A})$$ is a *root system of type* ${\mathcal{C}}$, if it satisfies the following axioms. 1. $R^a=R^a_+\cup - R^a_+$, where $R^a_+=R^a\cap \ndN_0^I$, for all $a\in A$. 2. $R^a\cap \ndZ{\alpha }_i=\{{\alpha }_i,-{\alpha }_i\}$ for all $i\in I$, $a\in A$. 3. ${\sigma }_i^a(R^a) = R^{{\rho }_i(a)}$ for all $i\in I$, $a\in A$. 4. If $i,j\in I$ and $a\in A$ such that $i\not=j$ and $m_{i,j}^a$ is finite, then $({\rho }_i{\rho }_j)^{m_{i,j}^a}(a)=a$. The axioms (R2) and (R3) are always fulfilled for ${\mathcal{R}}{^\mathrm{re}}$. The root system ${\mathcal{R}}$ is called *finite* if for all $a\in A$ the set $R^a$ is finite. 
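To make the definitions concrete, consider the one-object Cartan scheme of type $A_2$, i.e. $I=\{1,2\}$, $A=\{a\}$, $c^a_{12}=c^a_{21}=-1$ (a standard computation, included here only for illustration). By the formula \eqref{eq:sia} for ${\sigma}_i^a$:

```latex
% sigma_1(alpha_1) = -alpha_1,   sigma_1(alpha_2) = alpha_2 + alpha_1,
% sigma_2(alpha_2) = -alpha_2,   sigma_2(alpha_1) = alpha_1 + alpha_2,
% and longer words produce no further vectors, so
(R^{\mathrm{re}})^{a} = \{\pm\alpha_1,\ \pm\alpha_2,\ \pm(\alpha_1+\alpha_2)\},
\qquad
R^a_+ = \{\alpha_1,\ \alpha_2,\ \alpha_1+\alpha_2\}.
```

Here (R1)–(R4) hold with $m_{1,2}^a = |R^a\cap({\mathbb{N}}_0\alpha_1+{\mathbb{N}}_0\alpha_2)| = 3$, and $({\rho}_1{\rho}_2)^3(a)=a$ holds trivially since $A$ has one element.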
By [@a-CH09a Prop.2.12], if ${\mathcal{R}}$ is a finite root system of type ${\mathcal{C}}$, then ${\mathcal{R}}={\mathcal{R}}{^\mathrm{re}}$, and hence ${\mathcal{R}}{^\mathrm{re}}$ is a root system of type ${\mathcal{C}}$ in that case. In [@a-CH09a Def.4.3] the concept of an *irreducible* root system of type ${\mathcal{C}}$ was defined. By [@a-CH09a Prop.4.6], if ${\mathcal{C}}$ is a Cartan scheme and ${\mathcal{R}}$ is a finite root system of type ${\mathcal{C}}$, then ${\mathcal{R}}$ is irreducible if and only if for all $a\in A$ the generalized Cartan matrix $C^a$ is indecomposable. If ${\mathcal{C}}$ is also connected, then it suffices to require that there exists $a\in A$ such that $C^a$ is indecomposable. Let ${\mathcal{C}}={\mathcal{C}}(I,A,({\rho }_i)_{i\in I},({C}^a)_{a\in A})$ be a Cartan scheme. Let $\Gamma $ be a nondirected graph, such that the vertices of $\Gamma $ correspond to the elements of $A$. Assume that for all $i\in I$ and $a\in A$ with ${\rho }_i(a)\not=a$ there is precisely one edge between the vertices $a$ and ${\rho }_i(a)$ with label $i$, and all edges of $\Gamma $ are given in this way. The graph $\Gamma $ is called the *object change diagram* of ${\mathcal{C}}$. ![The object change diagram of a Cartan scheme of rank three (nr. 15 in Table 1)[]{data-label="fig:14posroots"}](wg14){width="6cm"} In the rest of this section let $\cC={\mathcal{C}}(I,A,({\rho }_i)_{i\in I}, (C^a)_{a\in A})$ be a Cartan scheme such that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system. For brevity we will write $R^a$ instead of ${(R\re)^{a}}$ for all $a\in A$. We say that a subgroup $H\subset {\mathbb{Z}}^I$ is a *hyperplane* if ${\mathbb{Z}}^I/H\cong {\mathbb{Z}}$. Then ${\mathrm{rk}}\,H=\#I-1$ is the rank of $H$. Sometimes we will identify ${\mathbb{Z}}^I$ with its image under the canonical embedding ${\mathbb{Z}}^I\to {\mathbb{Q}}\otimes _{\mathbb{Z}}{\mathbb{Z}}^I\cong {\mathbb{Q}}^I$. 
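The freeness condition in the definition of a hyperplane is not automatic for subgroups of rank $\#I-1$; a small example of our own, in ${\mathbb{Z}}^3$:

```latex
% The quotient by H is free, so H is a hyperplane; the quotient by H'
% has torsion (the Smith normal form of the matrix with columns
% (1,1,0), (1,-1,0) is diag(1,2)), so H' is not.
H=\langle(1,0,0),(0,1,0)\rangle:\quad {\mathbb{Z}}^3/H\cong {\mathbb{Z}};
\qquad
H'=\langle(1,1,0),(1,-1,0)\rangle:\quad {\mathbb{Z}}^3/H'\cong {\mathbb{Z}}\oplus{\mathbb{Z}}/2{\mathbb{Z}}.
```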
\[le:hyperplane\] Let $a\in A$ and let $H\subset {\mathbb{Z}}^I$ be a hyperplane. Suppose that $H$ contains ${\mathrm{rk}}\,H$ linearly independent elements of $R^a$. Let $\mathfrak{n}_H$ be a normal vector of $H$ in ${\mathbb{Q}}^I$ with respect to a scalar product $(\cdot ,\cdot )$ on ${\mathbb{Q}}^I$. If $(\mathfrak{n}_H,{\alpha })\ge 0$ for all ${\alpha }\in R^a_+$, then $H$ contains ${\mathrm{rk}}\,H$ simple roots, and all roots contained in $H$ are linear combinations of these simple roots. The assumptions imply that any positive root in $H$ is a linear combination of simple roots contained in $H$. Since $R^a=R^a_+\cup -R^a_+$, this implies the claim. Let $a\in A$ and let $H\subset {\mathbb{Z}}^I$ be a hyperplane. Suppose that $H$ contains ${\mathrm{rk}}\,H$ linearly independent elements of $R^a$. Then there exist $b\in A$ and $w\in \operatorname{Hom}(a,b)$ such that $w(H)$ contains ${\mathrm{rk}}\,H$ simple roots. \[le:hyperplane2\] Let $(\cdot ,\cdot )$ be a scalar product on ${\mathbb{Q}}^I$. Choose a normal vector $\mathfrak{n}_H$ of $H$ in ${\mathbb{Q}}^I$ with respect to $(\cdot ,\cdot )$. Let $m=\# \{{\alpha }\in R^a_+\,|\,(\mathfrak{n}_H,{\alpha })<0\}$. Since ${\mathcal{R}}\re ({\mathcal{C}})$ is finite, $m$ is a nonnegative integer. We proceed by induction on $m$. If $m=0$, then $H$ contains ${\mathrm{rk}}\,H$ simple roots by Lemma \[le:hyperplane\]. Otherwise let $j\in I$ with $(\mathfrak{n}_H,{\alpha }_j)<0$. Let $a'={\rho }_j(a)$ and $H'=\s_j^a(H)$. Then $\s_j^a(\mathfrak{n}_H)$ is a normal vector of $H'$ with respect to the scalar product $(\cdot ,\cdot )'= (\s_j^{{\rho }_j(a)}(\cdot ),\s_j^{{\rho }_j(a)}(\cdot ))$. Since $\s_j^a:R^a_+\setminus \{{\alpha }_j\}\to R^{a'}_+\setminus \{{\alpha }_j\}$ is a bijection and $\s_j^a({\alpha }_j)=-{\alpha }_j$, we conclude that $$\begin{aligned} \# \{\beta \in R^{a'}_+\,|\,(\s^a_j(\mathfrak{n}_H),\beta )'<0\}= \# \{{\alpha }\in R^a_+\,|\,(\mathfrak{n}_H,{\alpha })<0\}-1. 
\end{aligned}$$ By induction hypothesis there exist $b\in A$ and $w'\in \operatorname{Hom}(a',b)$ such that $w'(H')$ contains ${\mathrm{rk}}\,H'={\mathrm{rk}}\,H$ simple roots. Then the claim of the lemma holds for $w=w'\s_j^a$. The following “volume” functions will be useful for our analysis. Let $k\in {\mathbb{N}}$ with $k\le \#I$. By the Smith normal form there is a unique left ${\mathrm{GL}}({\mathbb{Z}}^I)$-invariant right ${\mathrm{GL}}({\mathbb{Z}}^k)$-invariant function ${\mathrm{Vol}}_k:({\mathbb{Z}}^I)^k\to {\mathbb{Z}}$ such that $$\begin{aligned} {\mathrm{Vol}}_k(a_1{\alpha }_1,\dots,a_k{\alpha }_k)=|a_1\cdots a_k| \quad \text{for all $a_1,\dots,a_k\in {\mathbb{Z}}$,}\end{aligned}$$ where $|\cdot |$ denotes absolute value. In particular, if $k=1$ and $\beta \in {\mathbb{Z}}^I\setminus \{0\}$, then ${\mathrm{Vol}}_1(\beta )$ is the largest integer $v$ such that $\beta =v\beta '$ for some $\beta '\in {\mathbb{Z}}^I$. Further, if $k=\#I$ and $\beta _1,\dots,\beta _k\in {\mathbb{Z}}^I$, then ${\mathrm{Vol}}_k(\beta _1,\dots,\beta _k)$ is the absolute value of the determinant of the matrix with columns $\beta _1,\dots,\beta _k$. Let $a\in A$, $k\in \{1,2,\dots,\#I\}$, and let $\beta _1,\dots,\beta _k\in R^a$ be linearly independent elements. We write $V^a(\beta _1,\dots,\beta _k)$ for the unique maximal subgroup $V\subseteq {\mathbb{Z}}^I$ of rank $k$ which contains $\beta _1,\dots,\beta _k$. Then ${\mathbb{Z}}^I/V^a(\beta _1,\dots,\beta _k)$ is free. In particular, $V^a(\beta _1,\dots,\beta _{\#I})={\mathbb{Z}}^I$ for all $a\in A$ and any linearly independent subset $\{\beta _1,\dots,\beta _{\#I}\}$ of $R^a$. \[de:base\] Let $W\subseteq {\mathbb{Z}}^I$ be a cofree subgroup (that is, ${\mathbb{Z}}^I/W$ is free) of rank $k$. We say that $\{\beta _1,\dots,\beta _k\}$ is a *base for $W$ at $a$*, if $\beta _i\in W$ for all $i\in \{1,\dots,k\}$ and $W\cap R^a\subseteq \sum _{i=1}^k{\mathbb{N}}_0\beta _i\cup -\sum _{i=1}^k{\mathbb{N}}_0\beta _i$. 
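By the uniqueness statement above, ${\mathrm{Vol}}_k(\beta_1,\dots,\beta_k)$ equals the gcd of all $k\times k$ minors of the matrix with columns $\beta_1,\dots,\beta_k$ (the $k$-th determinantal divisor in the Smith normal form), since that function satisfies the same invariances and normalization. A short computational sketch, with function names of our own choosing:

```python
from itertools import combinations
from math import gcd

def det(m):
    """Determinant of a small integer matrix, by Laplace expansion
    along the first row (adequate for the ranks occurring here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def vol(betas):
    """Vol_k(beta_1, ..., beta_k): gcd of all k x k minors of the
    (#I) x k integer matrix whose columns are the beta_i."""
    k, n = len(betas), len(betas[0])
    rows = [[b[i] for b in betas] for i in range(n)]  # n x k matrix
    g = 0
    for idx in combinations(range(n), k):
        g = gcd(g, abs(det([rows[i] for i in idx])))
    return g
```

For $k=1$ this returns the gcd of the coordinates of $\beta$ (the largest $v$ with $\beta=v\beta'$), and for $k=\#I$ it returns $|\det|$, matching the two special cases noted above.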
Now we discuss the relationship of linearly independent roots in a root system. Recall that ${\mathcal{C}}$ is a Cartan scheme such that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system of type ${\mathcal{C}}$. \[th:genposk\] Let $a\in A$, $k\in \{1,\dots,\#I\}$, and let $\beta _1,\dots,\beta _k\in R^a$ be linearly independent roots. Then there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, and a permutation $\tau $ of $I$ such that $$w(\beta _i)\in {\mathrm{span}}_{\mathbb{Z}}\{{\alpha }_{\tau (1)}, \ldots ,{\alpha }_{\tau (i)}\} \cap R^b_+$$ for all $i\in \{1,\dots,k\}$. Let $r=\#I$. Since $R^a$ contains $r$ simple roots, any linearly independent subset of $R^a$ can be enlarged to a linearly independent subset of $r$ elements. Hence it suffices to prove the theorem for $k=r$. We proceed by induction on $r$. If $r=1$, then the claim holds. Assume that $r>1$. Lemma \[le:hyperplane2\] with $H=V^a(\beta _1,\dots,\beta _{r-1})$ tells that there exist $d\in A$ and $v\in \operatorname{Hom}(a,d)$ such that $v(H)$ is spanned by simple roots. By multiplying $v$ from the left with the longest element of ${\mathcal{W}}({\mathcal{C}})$ in the case that $v(\beta _r)\in -{\mathbb{N}}_0^I$, we may even assume that $v(\beta _r)\in {\mathbb{N}}_0^I$. Now let $J$ be the subset of $I$ such that $\#J=r-1$ and ${\alpha }_i\in v(H)$ for all $i\in J$. Consider the restriction ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})|_{J}$ of ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ to the index set $J$, see [@a-CH09a Def. 4.1]. Since $v(\beta _i)\in H$ for all $i\in \{1,\dots,r-1\}$, induction hypothesis provides us with $b\in A$, $u\in \operatorname{Hom}(d,b)$, and a permutation $\tau '$ of $J$ such that $u$ is a product of simple reflections $\s_i$, where $i\in J$, and $$uv(\beta _n)\in {\mathrm{span}}_{\mathbb{Z}}\{{\alpha }_{\tau '(j_1)}, \ldots ,{\alpha }_{\tau '(j_n)}\} \cap R^b_+$$ for all $n\in \{1,2,\dots,r-1\}$, where $J=\{j_1,\dots,j_{r-1}\}$. 
Since $v(\beta _r)\notin v(H)$ and $v(\beta _r)\in {\mathbb{N}}_0^I$, the $i$-th entry of $v(\beta _r)$, where $i\in I\setminus J$, is positive. This entry does not change if we apply $u$. Therefore $uv(\beta _r)\in {\mathbb{N}}_0^I$, and hence the theorem holds with $w=uv\in \operatorname{Hom}(a,b)$ and with $\tau $ the unique permutation with $\tau (n)=\tau '(j_n)$ for all $n\in \{1,\dots,r-1\}$. \[simple\_rkk\] Let $a\in A$, $k\in \{1,\dots,\#I\}$, and let $\beta _1,\dots,\beta _k\in R^a$ be linearly independent elements. Then $\{\beta _1,\dots,\beta _k\}$ is a base for $V^a(\beta _1,\dots,\beta _k)$ at $a$ if and only if there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, and a permutation $\tau $ of $I$ such that $w(\beta _i)={\alpha }_{\tau (i)}$ for all $i\in \{1,\dots,k\}$. In this case ${\mathrm{Vol}}_k(\beta _1,\dots,\beta _k)=1$. The if part of the claim holds by definition of a base and by the axioms for root systems. Let $b,w$ and $\tau $ be as in Theorem \[th:genposk\]. Let $i\in \{1,\dots,k\}$. The elements $w(\beta _1),\dots,w(\beta _i)$ are linearly independent and are contained in $V^b({\alpha }_{\tau (1)}, \dots , {\alpha }_{\tau (i)})$. Thus ${\alpha }_{\tau (i)}$ is a rational linear combination of $w(\beta _1),\dots,w(\beta _i)$. Now by assumption, $\{w(\beta _1),\dots,w(\beta _k)\}$ is a base for $V^b(w(\beta _1),\dots,w(\beta _k))$ at $b$. Hence ${\alpha }_{\tau (i)}$ is a linear combination of the positive roots $w(\beta _1),\dots,w(\beta _i)$ with nonnegative integer coefficients. This is possible only if $\{w(\beta _1),\dots,w(\beta _i)\}$ contains ${\alpha }_{\tau (i)}$. By induction on $i$ we obtain that ${\alpha }_{\tau (i)}=w(\beta _i)$. In the special case $k=\#I$ the above corollary tells that the bases of $\ndZ ^I$ at an object $a\in A$ are precisely those subsets which can be obtained as the image, up to a permutation, of the standard basis of ${\mathbb{Z}}^I$ under the action of an element of ${\mathcal{W}}({\mathcal{C}})$. 
In [@p-CH09a] the notion of an ${\mathcal{F}}$-sequence was given, and it was used to explain the structure of root systems of rank two. Consider on ${\mathbb{N}}_0^2$ the total ordering $\le _{\mathbb{Q}}$, where $(a_1,a_2)\le _{\mathbb{Q}}(b_1,b_2)$ if and only if $a_1 b_2\le a_2 b_1$. A finite sequence $(v_1,\dots ,v_n)$ of vectors in ${\mathbb{N}}_0^2$ is an ${\mathcal{F}}$-sequence if and only if $v_1<_{\mathbb{Q}}v_2 <_{\mathbb{Q}}\cdots <_{\mathbb{Q}}v_n$ and one of the following holds. - $n=2$, $v_1=(0,1)$, and $v_2=(1,0)$. - $n>2$ and there exists $i\in \{2,3,\dots,n-1\}$ such that $v_i=v_{i-1}+v_{i+1}$ and $(v_1,\dots,v_{i-1},v_{i+1},\dots,v_n)$ is an ${\mathcal{F}}$-sequence. In particular, any ${\mathcal{F}}$-sequence of length $\ge 3$ contains $(1,1)$. \[pr:R=Fseq\] [@p-CH09a Prop.3.7] Let ${\mathcal{C}}$ be a Cartan scheme of rank two. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system. Then for any $a\in A$ the set $R^a_+$ ordered by $\le _{\mathbb{Q}}$ is an ${\mathcal{F}}$-sequence. \[pr:sumoftwo\] [@p-CH09a Cor. 3.8] Let ${\mathcal{C}}$ be a Cartan scheme of rank two. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system. Let $a\in A$ and let $\beta \in R^a_+$. Then either $\beta $ is simple or it is the sum of two positive roots. \[co:r2conv\] Let $a\in A$, $n\in {\mathbb{N}}$, and let ${\alpha },\beta \in R^a$ such that $\beta -n{\alpha }\in R^a$. Assume that $\{{\alpha },\beta -n{\alpha }\}$ is a base for $V^a({\alpha },\beta )$ at $a$. Then $\beta -k{\alpha }\in R^a$ for all $k\in \{1,2,\dots ,n\}$. By Corollary \[simple\_rkk\] there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, and $i,j\in I$ such that $w({\alpha })={\alpha }_i$, $w(\beta -n{\alpha })={\alpha }_j$. Then $n{\alpha }_i+{\alpha }_j=w(\beta )\in R^b_+$. Hence $(n-k){\alpha }_i+{\alpha }_j\in R^b$ for all $k\in \{1,2,\dots,n\}$ by Proposition \[pr:sumoftwo\] and (R2). This yields the claim of the corollary. 
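The recursive definition of an ${\mathcal{F}}$-sequence is straightforward to check by machine. A sketch (function names are ours), tried on the positive roots of a rank-two root system of type $B_2$ written in coordinates with respect to $(\alpha_1,\alpha_2)$ and sorted by $\le_{\mathbb{Q}}$, in accordance with Proposition \[pr:R=Fseq\]:

```python
def lt_q(v, w):
    """Strict version of the total ordering on N_0^2 used in the text:
    (a1, a2) <_Q (b1, b2)  iff  a1*b2 < a2*b1."""
    return v[0] * w[1] < v[1] * w[0]

def is_f_sequence(vs):
    """Recursive check of the definition of an F-sequence."""
    vs = list(vs)
    # the sequence must be strictly increasing with respect to <_Q
    if any(not lt_q(vs[i], vs[i + 1]) for i in range(len(vs) - 1)):
        return False
    if len(vs) == 2:
        return vs == [(0, 1), (1, 0)]
    # look for an inner vector that is the sum of its two neighbours
    for i in range(1, len(vs) - 1):
        if vs[i] == (vs[i-1][0] + vs[i+1][0], vs[i-1][1] + vs[i+1][1]):
            if is_f_sequence(vs[:i] + vs[i + 1:]):
                return True
    return False

# positive roots of type B_2, sorted by <=_Q
b2 = [(0, 1), (1, 2), (1, 1), (1, 0)]
```

Here `is_f_sequence(b2)` returns `True`, while a sequence such as `[(0, 1), (2, 1), (1, 0)]` is rejected; note that, as stated above, every accepted sequence of length $\ge 3$ contains $(1,1)$.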
\[co:cij\] Let $a\in A$, $k\in {\mathbb{Z}}$, and $i,j\in I$ such that $i\not=j$. Then ${\alpha }_j+k{\alpha }_i\in R^a$ if and only if $0\le k\le -c^a_{i j}$, Axiom (R1) tells that ${\alpha }_j+k{\alpha }_i\notin R^a$ if $k<0$. Since $c^{{\rho }_i(a)}_{i j}=c^a_{i j}$ by (C2), Axiom (R3) gives that $\al _j-c^a_{i j}{\alpha }_i=\sigma _i^{{\rho }_i(a)}({\alpha }_j)\in R^a$ and that ${\alpha }_j+k{\alpha }_i\notin R^a$ if $k>-c^a_{i j}$. Finally, if $0<k<-c^a_{i j}$ then ${\alpha }_j+k{\alpha }_i\in R^a$ by Corollary \[co:r2conv\] for ${\alpha }={\alpha }_i$, $\beta ={\alpha }_j-c^a_{i j}{\alpha }_i$, and $n=-c^a_{i j}$. Proposition \[pr:sumoftwo\] implies another important fact. \[root\_is\_sum\] Let ${\mathcal{C}}$ be a Cartan scheme. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system of type ${\mathcal{C}}$. Let $a\in A$ and ${\alpha }\in R^a_+$. Then either ${\alpha }$ is simple, or it is the sum of two positive roots. Assume that ${\alpha }$ is not simple. Let $i\in I$, $b\in A$, and $w\in \operatorname{Hom}(b,a)$ such that ${\alpha }=w({\alpha }_i)$. Then $\ell(w)>0$. We may assume that for all $j\in I$, $b'\in A$, and $w'\in \operatorname{Hom}(b',a)$ with $w'(\alpha_j)={\alpha }$ we have $\ell(w')\ge\ell(w)$. Since $w(\alpha_i)\in {\mathbb{N}}_0^I$, we obtain that $\ell(w\s_i)>\ell(w)$ [@a-HeckYam08 Cor. 3]. Therefore, there is a $j\in I\setminus \{i\}$ with $\ell(w\s_j)<\ell(w)$. Let $w=w_1w_2$ such that $\ell(w)=\ell(w_1)+\ell(w_2)$, $\ell(w_1)$ minimal and $w_2=\ldots \s_i\s_j\s_i\s_j^b$. Assume that $w_2=\s_i\cdots \s_i\s_j^b$ — the case $w_2=\s_j\cdots \s_i \s_j^b$ can be treated similarly. The length of $w_1$ is minimal, thus $\ell(w_1\s_j)>\ell(w_1)$, and $\ell(w)=\ell(w_1)+\ell(w_2)$ yields that $\ell(w_1\s_i)>\ell(w_1)$. Using once more [@a-HeckYam08 Cor. 
3] we conclude that $$\begin{aligned} \label{eq:twopos} w_1(\alpha_i)\in {\mathbb{N}}_0^I,\quad w_1(\alpha_j)\in {\mathbb{N}}_0^I.\end{aligned}$$ Let $\beta=w_2(\alpha_i)$. Then $\beta \in \NN_0 \alpha_i+\NN_0 \alpha_j$, since $\ell (w_2\s_i)>\ell (w_2)$. Moreover, $\beta $ is not simple: indeed, $\alpha=w(\alpha_i)=w_1(\beta)$ and $\ell(w_1)<\ell(w)$, so if $\beta $ were simple this would contradict the minimality of $\ell(w)$. By Proposition \[pr:sumoftwo\] we conclude that $\beta$ is the sum of two positive roots $\beta_1$, $\beta_2\in {\mathbb{N}}_0{\alpha }_i+{\mathbb{N}}_0{\alpha }_j$. It remains to check that $w_1(\beta_1)$, $w_1(\beta_2)$ are positive. But this follows from \[eq:twopos\]. Obstructions for Weyl groupoids of rank three {#rk3_obst} ============================================= In this section we analyze the structure of finite Weyl groupoids of rank three. Let ${\mathcal{C}}$ be a Cartan scheme of rank three, and assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type ${\mathcal{C}}$. In this case a hyperplane in ${\mathbb{Z}}^I$ is the same as a cofree subgroup of rank two, which will be called a *plane* in the sequel. For simplicity we will take $I=\{1,2,3\}$, and we write $R^a$ for the set of real roots at $a\in A$. Recall the definition of the functions ${\mathrm{Vol}}_k$, where $k\in \{1,2,3\}$, from the previous section. As noted, for three elements ${\alpha },\beta,\gamma\in{\mathbb{Z}}^3$ we have ${\mathrm{Vol}}_3(\alpha,\beta,\gamma )=1$ if and only if $\{\alpha,\beta,\gamma\}$ is a basis of ${\mathbb{Z}}^3$. Also, we will heavily use the notion of a base, see Definition \[de:base\]. \[rootmultiple\] Let $a\in A$ and $\alpha,\beta \in R^a$. Assume that ${\alpha }\not=\pm \beta $ and that $\{{\alpha },\beta\}$ is not a base for $V^a({\alpha },\beta )$ at $a$. 
Then there exist $k,l\in {\mathbb{N}}$ and $\delta \in R^a$ such that $\beta -k{\alpha }=l\delta $ and $\{{\alpha },\delta \}$ is a base for $V^a({\alpha },\beta )$ at $a$. The claim without the relation $k>0$ is a special case of Theorem \[th:genposk\]. The relation $\beta \not=\delta $ follows from the assumption that $\{{\alpha },\beta \}$ is not a base for $V^a({\alpha },\beta )$ at $a$. Let $a\in A$ and $\alpha,\beta \in R^a$ such that ${\alpha }\not=\pm \beta $. Then $\{{\alpha },\beta \}$ is a base for $V^a(\al, \beta )$ if and only if ${\mathrm{Vol}}_2({\alpha },\beta )=1$ and ${\alpha }-\beta \notin R^a$. \[le:base2\] Assume first that $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$. By Corollary \[simple\_rkk\] we may assume that ${\alpha }$ and $\beta $ are simple roots. Therefore ${\mathrm{Vol}}_2({\alpha },\beta )=1$ and ${\alpha }-\beta \notin R^a$. Conversely, assume that ${\mathrm{Vol}}_2({\alpha },\beta )=1$, ${\alpha }-\beta \notin R^a$, and that $\{{\alpha },\beta \}$ is not a base for $V^a({\alpha },\beta )$ at $a$. Let $k,l,\delta $ as in Lemma \[rootmultiple\]. Then $$1={\mathrm{Vol}}_2({\alpha },\beta )={\mathrm{Vol}}_2({\alpha },\beta -k{\alpha })=l{\mathrm{Vol}}_2({\alpha },\delta ).$$ Hence $l=1$, and $\{{\alpha },\delta \}=\{{\alpha },\beta -k{\alpha }\}$ is a base for $V^a({\alpha },\beta )$ at $a$. Then $\beta -{\alpha }\in R^a$ by Corollary \[co:r2conv\] and since $k>0$. This gives the desired contradiction to the assumption ${\alpha }-\beta \notin R^a$. Recall that a semigroup ordering $<$ on a commutative semigroup $(S,+)$ is a total ordering such that for all $s,t,u\in S$ with $s<t$ the relations $s+u<t+u$ hold. For example, the lexicographic ordering on ${\mathbb{Z}}^I$ induced by any total ordering on $I$ is a semigroup ordering. \[posrootssemigroup\] Let $a\in A$, and let $V\subset {\mathbb{Z}}^I$ be a plane containing at least two positive roots of $R^a$. 
Let $<$ be a semigroup ordering on ${\mathbb{Z}}^I$ such that $0<\gamma $ for all $\gamma \in R^a_+$, and let ${\alpha },\beta $ denote the two smallest elements in $V\cap R^a_+$ with respect to $<$. Then $\{{\alpha },\beta \}$ is a base for $V$ at $a$. Let ${\alpha }$ be the smallest element of $V\cap R^a_+$ with respect to $<$, and let $\beta $ be the smallest element of $V\cap (R^a_+\setminus \{{\alpha }\})$. Then $V=V^a(\al, \beta )$ by (R2). By Lemma \[rootmultiple\] there exists $\delta \in V\cap R^a$ such that $\{{\alpha },\delta \}$ is a base for $V$ at $a$. First suppose that $\delta <0$. Let $m\in {\mathbb{N}}_0$ be the smallest integer with $\delta +(m+1){\alpha }\notin R^a$. Then $\delta +n{\alpha }<0$ for all $n\in {\mathbb{N}}_0$ with $n\le m$. Indeed, this holds for $n=0$ by assumption. By induction on $n$ we obtain from $\delta +n{\alpha }<0$ and the choice of ${\alpha }$ that $\delta +n{\alpha }<-{\alpha }$, since $\delta $ and ${\alpha }$ are not collinear. Hence $\delta +(n+1){\alpha }<0$. We conclude that $-(\delta +m{\alpha })>0$. Moreover, $\{{\alpha },-(\delta +m{\alpha })\}$ is a base for $V$ at $a$ by Lemma \[le:base2\] and the choice of $m$. Therefore, by replacing $\{{\alpha },\delta \}$ by $\{{\alpha },-(\delta +m{\alpha })\}$, we may assume that $\delta >0$. Since $\beta >0$, we conclude that $\beta =k{\alpha }+l\delta $ for some $k,l\in {\mathbb{N}}_0$. Since $\beta $ is not a multiple of ${\alpha }$, this implies that $\beta =\delta $ or $\beta >\delta $. Then the choice of $\beta $ and the positivity of $\delta $ yield that $\delta =\beta $, that is, $\{{\alpha },\beta \}$ is a base for $V$ at $a$. \[le:badroots\] Let $k\in {\mathbb{N}}_{\ge 2}$, $a\in A$, ${\alpha }\in R^a_+$, and $\beta \in {\mathbb{Z}}^I$ such that ${\alpha }$ and $\beta $ are not collinear and ${\alpha }+k\beta \in R^a$. Assume that ${\mathrm{Vol}}_2({\alpha },\beta )=1$ and that $(-{\mathbb{N}}{\alpha }+{\mathbb{Z}}\beta ) \cap {\mathbb{N}}_0^I=\emptyset $. 
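As a quick illustration of the criterion in Lemma \[le:base2\], consider the rank-two situation of type $A_2$ with $R^a_+=\{\alpha_1,\alpha_2,\alpha_1+\alpha_2\}$ (our example, not taken from the text):

```latex
% {alpha_1, alpha_2}:  Vol_2(alpha_1, alpha_2) = 1  and  alpha_1 - alpha_2
%   is not a root, so this pair is a base.
% {alpha_1, alpha_1 + alpha_2}:  Vol_2 = |det(1 1; 0 1)| = 1,  but
\alpha_1-(\alpha_1+\alpha_2)=-\alpha_2\in R^a,
% so the second condition fails and the pair is not a base; indeed
% alpha_2 = (alpha_1 + alpha_2) - alpha_1 lies in V^a(alpha_1, alpha_2)
% \cap R^a but is not an N_0-combination of the pair up to an overall sign.
```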
Then $\beta \in R^a$ and ${\alpha }+l\beta \in R^a$ for all $l\in \{1,2,\dots,k\}$. We prove the claim indirectly. Assume that $\beta \notin R^a$. By Lemma \[posrootssemigroup\] there exists a base $\{\gamma _1,\gamma _2\}$ for $V^a({\alpha },\beta )$ at $a$ such that $\gamma _1,\gamma _2\in R^a_+$. The assumptions of the lemma imply that there exist $m_1,l_1\in {\mathbb{N}}_0$ and $m_2,l_2\in {\mathbb{Z}}$ such that $\gamma _1=m_1{\alpha }+m_2\beta $, $\gamma _2=l_1{\alpha }+l_2\beta $. Since $\beta \notin R^a$, we obtain that $m_1\ge 1$ and $m_2\ge 1$. Therefore relations ${\alpha },{\alpha }+k\beta \in R^a_+$ imply that $\{{\alpha },{\alpha }+k\beta \}=\{\gamma _1,\gamma _2\}$. The latter is a contradiction to ${\mathrm{Vol}}_2(\gamma _1,\gamma _2)=1$ and ${\mathrm{Vol}}_2({\alpha },{\alpha }+k\beta )=k>1$. Thus $\beta \in R^a$. By Lemma \[rootmultiple\] we obtain that $\{\beta ,{\alpha }-m\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$ for some $m\in {\mathbb{N}}_0$. Then Corollary \[co:r2conv\] and the assumption that ${\alpha }+k\beta \in R^a$ imply the last claim of the lemma. We say that a subset $S$ of ${\mathbb{Z}}^3$ is *convex*, if any rational convex linear combination of elements of $S$ is either in $S$ or not in ${\mathbb{Z}}^3$. We start with a simple example. \[le:square\] Let $a\in A$. Assume that $c^a_{12}=0$. \(1) Let $k_1,k_2\in {\mathbb{Z}}$. Then ${\alpha }_3+k_1{\alpha }_1+k_2{\alpha }_2\in R^a$ if and only if $0\le k_1\le -c^a_{13}$ and $0\le k_2\le -c^a_{23}$. \(2) Let $\gamma \in ({\alpha }_3+{\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2)\cap R^a$. Then $\gamma -{\alpha }_1\in R^a$ or $\gamma +{\alpha }_1\in R^a$. Similarly $\gamma -{\alpha }_2\in R^a$ or $\gamma +{\alpha }_2\in R^a$. \(1) The assumption $c^a_{12}=0$ implies that $c^{{\rho }_1(a)}_{23}=c^a_{23}$, see [@a-CH09a Lemma4.5]. 
Applying ${\sigma }_1^{{\rho }_1(a)}$, ${\sigma }_2^{{\rho }_2(a)}$, and ${\sigma }_1{\sigma }_2^{{\rho }_2{\rho }_1(a)}$ to ${\alpha }_3$ we conclude that ${\alpha }_3-c^a_{13}{\alpha }_1$, ${\alpha }_3-c^a_{23}{\alpha }_2$, ${\alpha }_3-c^a_{13}\al _1-c^a_{23}{\alpha }_2\in R^a_+$. Thus Lemma \[le:badroots\] implies that ${\alpha }_3+m_1{\alpha }_1+m_2{\alpha }_2\in R^a$ for all $m_1,m_2\in {\mathbb{Z}}$ with $0\le m_1\le -c^a_{13}$ and $0\le m_2\le -c^a_{23}$. Further, (R1) gives that ${\alpha }_3+k_1{\alpha }_1+k_2{\alpha }_2\notin R^a$ if $k_1<0$ or $k_2<0$. Applying again the simple reflections ${\sigma }_1$ and ${\sigma }_2$, a similar argument proves the remaining part of the claim. Observe that the proof does not use the fact that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is irreducible. \(2) Since $c^a_{12}=0$, the irreducibility of ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ yields that $c^a_{13},c^a_{23}<0$ by [@a-CH09a Def.4.5, Prop.4.6]. Hence the claim follows from (1). \[pr:suminR\] Let $a\in A$ and let $\gamma _1,\gamma _2,\gamma _3\in R^a$. Assume that ${\mathrm{Vol}}_3(\gamma _1,\gamma _2,\gamma _3)=1$ and that $\gamma _1-\gamma _2,\gamma _1-\gamma _3\notin R^a$. Then $\gamma _1+\gamma _2\in R^a$ or $\gamma _1+\gamma _3\in R^a$. Since $\gamma _1-\gamma _2\notin R^a$ and ${\mathrm{Vol}}_3(\gamma _1,\gamma _2,\gamma _3)=1$, Theorem \[th:genposk\] and Lemma \[le:base2\] imply that there exists $b\in A$, $w\in \operatorname{Hom}(a,b)$ and $i_1,i_2,i_3\in I$ such that $w(\gamma _1)={\alpha }_{i_1}$, $w(\gamma _2)={\alpha }_{i_2}$, and $w(\gamma _3)={\alpha }_{i_3}+k_1{\alpha }_{i_1}+k_2{\alpha }_{i_2}$ for some $k_1,k_2\in {\mathbb{N}}_0$. Assume that $\gamma _1+\gamma _2\notin R^a$. Then $c^b_{i_1i_2}=0$. Since $\gamma _3-\gamma _1\notin R^a$, Lemma \[le:square\](2) with $\gamma =w(\gamma _3)$ gives that $\gamma _3+\gamma _1\in R^a$. This proves the claim. 
\[le:root\_diffs1\] Assume that $R^a\cap ({\mathbb{N}}_0{\alpha }_1+{\mathbb{N}}_0{\alpha }_2)$ contains at most $4$ positive roots. \(1) The set $S_3:=({\alpha }_3+{\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2)\cap R^a$ is convex. \(2) Let $\gamma \in S_3$. Then $\gamma ={\alpha }_3$ or $\gamma -{\alpha }_1\in R^a$ or $\gamma -{\alpha }_2\in R^a$. Consider the roots of the form $w^{-1}({\alpha }_3)\in R^a$, where $w\in \Homsfrom{a}$ is a product of reflections ${\sigma }_1^b$, ${\sigma }_2^b$ with $b\in A$. All of these roots belong to $S_3$. Using Lemma \[le:badroots\] the claims of the lemma can be checked case by case, similarly to the proof of Lemma \[le:square\]. The lemma can be proven by elementary calculations, since all nonsimple positive roots in $({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2)\cap R^a$ are of the form, say, ${\alpha }_1+k{\alpha }_2$ with $k\in {\mathbb{N}}$. We will see in Theorem \[th:class\] that the classification of connected Cartan schemes of rank three admitting a finite irreducible root system has only finitely many solutions. Thus it is possible to check the claim of the lemma for any such Cartan scheme. Using computer calculations one obtains that the lemma holds without any restriction on the (finite) cardinality of $R^a\cap ({\mathbb{N}}_0{\alpha }_1+{\mathbb{N}}_0{\alpha }_2)$. \[le:root\_diffs2\] Let ${\alpha },\beta ,\gamma \in R^a$ be such that ${\mathrm{Vol}}_3({\alpha },\beta ,\gamma )=1$. Assume that ${\alpha }-\beta $, $\beta -\gamma $, ${\alpha }-\gamma \notin R^a$ and that $\{{\alpha },\beta ,\gamma \}$ is not a base for ${\mathbb{Z}}^I$ at $a$. Then the following hold. \(1) There exist $w\in \Homsfrom{a}$ and $n_1,n_2\in {\mathbb{N}}$ such that $w({\alpha })$, $w(\beta )$, and $w(\gamma -n_1{\alpha }-n_2\beta )$ are simple roots.
\(2) None of the vectors ${\alpha }-k\beta $, ${\alpha }-k\gamma $, $\beta -k{\alpha }$, $\beta -k\gamma $, $\gamma -k{\alpha }$, $\gamma -k\beta $, where $k\in {\mathbb{N}}$, is contained in $R^a$. \(3) ${\alpha }+\beta $, ${\alpha }+\gamma $, $\beta +\gamma \in R^a$. \(4) One of the sets $\{{\alpha }+2\beta ,\beta +2\gamma ,\gamma +2{\alpha }\}$ and $\{2{\alpha }+\beta ,2\beta +\gamma ,2\gamma +{\alpha }\}$ is contained in $R^a$, the other one has trivial intersection with $R^a$. \(5) None of the vectors $\gamma -{\alpha }-k\beta $, $\gamma -k{\alpha }-\beta $, $\beta -\gamma -k{\alpha }$, $\beta -k\gamma -{\alpha }$, ${\alpha }-\beta -k\gamma $, ${\alpha }-k\beta -\gamma $, where $k\in {\mathbb{N}}_0$, is contained in $R^a$. \(6) Assume that ${\alpha }+2\beta \in R^a$. Let $k\in {\mathbb{N}}$ such that ${\alpha }+k\beta \in R^a$, ${\alpha }+(k+1)\beta \notin R^a$. Let ${\alpha }'={\alpha }+k\beta $, $\beta '=-\beta $, $\gamma '=\gamma +\beta $. Then ${\mathrm{Vol}}_3({\alpha }',\beta ',\gamma ')=1$, $\{{\alpha }',\beta ',\gamma '\}$ is not a base for ${\mathbb{Z}}^I$ at $a$, and none of ${\alpha }'-\beta '$, ${\alpha }'-\gamma '$, $\beta '-\gamma '$ is contained in $R^a$. \(7) None of the vectors ${\alpha }+3\beta $, $\beta +3\gamma $, $\gamma +3{\alpha }$, $3{\alpha }+\beta $, $3\beta +\gamma $, $3\gamma +{\alpha }$ is contained in $R^a$. In particular, $k=2$ holds in (6). \(1) By Theorem \[th:genposk\] there exist $m_1,m_2,n_1,n_2,n_3\in \ndN _0$, $i_1,i_2,i_3\in I$, and $w\in \Homsfrom{a}$, such that $w({\alpha })={\alpha }_{i_1}$, $w(\beta )=m_1{\alpha }_{i_1}+m_2{\alpha }_{i_2}$, and $w(\gamma )=n_1{\alpha }_{i_1}+n_2{\alpha }_{i_2}+n_3{\alpha }_{i_3}$. Since $\det w\in \{\pm 1\}$ and ${\mathrm{Vol}}_3({\alpha },\beta ,\gamma )=1$, this implies that $m_2=n_3=1$. Further, $\beta -{\alpha }\notin R^a$, and hence $w(\beta )={\alpha }_{i_2}$ by Corollary \[co:cij\]. 
Since $\{{\alpha },\beta ,\gamma \}$ is not a base for ${\mathbb{Z}}^I$ at $a$, we conclude that $w(\gamma )\not={\alpha }_{i_3}$. Then Corollary \[co:cij\] and the assumptions $\gamma -{\alpha }$, $\gamma -\beta \notin R^a$ imply that $w(\gamma )\notin {\alpha }_{i_3}+{\mathbb{N}}_0{\alpha }_{i_1}$ and $w(\gamma )\notin {\alpha }_{i_3}+{\mathbb{N}}_0{\alpha }_{i_2}$. Thus the claim is proven. \(2) By (1), $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$. Thus ${\alpha }-k\beta \notin R^a$ for all $k\in {\mathbb{N}}$. The remaining claims follow by symmetry. \(3) Suppose that ${\alpha }+\beta \notin R^a$. By (1) there exist $b\in A$, $w\in \operatorname{Hom}(a,b)$, $i_1,i_2,i_3\in I$ and $n_1,n_2\in {\mathbb{N}}$ such that $w({\alpha })={\alpha }_{i_1}$, $w(\beta )={\alpha }_{i_2}$, and $w(\gamma )={\alpha }_{i_3}+n_1{\alpha }_{i_1}+n_2{\alpha }_{i_2}\in R^b_+$. By Theorem \[root\_is\_sum\] there exist $n'_1,n'_2\in {\mathbb{N}}_0$ such that $n'_1\le n_1$, $n'_2\le n_2$, $n'_1+n'_2<n_1+n_2$, and $${\alpha }_{i_3}+n'_1{\alpha }_{i_1}+n'_2{\alpha }_{i_2}\in R^b_+,\quad (n_1-n'_1){\alpha }_{i_1}+(n_2-n'_2){\alpha }_{i_2}\in R^b_+.$$ Since ${\alpha }+\beta \notin R^a$, Proposition \[pr:R=Fseq\] yields that $R^b_+\cap {\mathrm{span}}_{\mathbb{Z}}\{{\alpha }_{i_1},{\alpha }_{i_2}\}=\{{\alpha }_{i_1},{\alpha }_{i_2}\}$. Thus $\gamma -{\alpha }\in R^a$ or $\gamma -\beta \in R^a$. This is a contradiction to the assumption of the lemma. Hence ${\alpha }+\beta \in R^a$. By symmetry we obtain that ${\alpha }+\gamma $, $\beta +\gamma \in R^a$. \(4) Suppose that ${\alpha }+2\beta $, $2{\alpha }+\beta \notin R^a$. By (1) the set $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$, and ${\alpha }+\beta \in R^a$ by (3). Then Proposition \[pr:R=Fseq\] implies that $R^a\cap {\mathrm{span}}_{\mathbb{Z}}\{{\alpha },\beta \}=\{\pm {\alpha },\pm \beta ,\pm ({\alpha }+\beta )\}$. 
Thus (1) and Lemma \[le:root\_diffs1\](2) give that $\gamma -{\alpha }\in R^a$ or $\gamma -\beta \in R^a$, a contradiction to the initial assumption of the lemma. Hence by symmetry each of the sets $\{{\alpha }+2\beta ,2\al +\beta \}$, $\{{\alpha }+2\gamma ,2{\alpha }+\gamma \}$, $\{\beta +2\gamma ,2\beta +\gamma \}$ contains at least one element of $R^a$. Assume now that $\gamma +2{\alpha }$, $\gamma +2\beta \in R^a$. By changing the object via (1) we may assume that ${\alpha }$, $\beta $, and $\gamma -n_1{\alpha }-n_2\beta $ are simple roots for some $n_1,n_2\in {\mathbb{N}}$. Then Lemma \[le:badroots\] applies to $\gamma +2{\alpha }\in R^a_+$ and $\beta -{\alpha }$, and shows that $\beta -{\alpha }\in R^a$, a contradiction. By the previous two paragraphs we conclude that if $\gamma +2{\alpha }\in R^a$, then $\gamma +2\beta \notin R^a$, and hence $\beta +2\gamma \in R^a$. Similarly, we also obtain that ${\alpha }+2\beta \in R^a$. By symmetry this implies (4). \(5) By symmetry it suffices to prove that $\gamma -({\alpha }+k\beta )\notin R^a$ for all $k\in {\mathbb{N}}_0$. For $k=0$ the claim holds by assumption. First we prove that $\gamma -({\alpha }+2\beta )\notin R^a$. By (3) we know that $\gamma +{\alpha }$, ${\alpha }+\beta \in R^a$, and $\gamma -\beta \notin R^a$ by assumption. Since ${\mathrm{Vol}}_2(\gamma +{\alpha },{\alpha }+\beta )=1$, Lemma \[le:base2\] gives that $\{\gamma +{\alpha }, {\alpha }+\beta \}$ is a base for $V^a(\gamma +{\alpha },{\alpha }+\beta )$ at $a$. Since $\gamma -({\alpha }+2\beta )=(\gamma +{\alpha }) -2({\alpha }+\beta )$, we conclude that $\gamma -({\alpha }+2\beta )\notin R^a$. Now let $k\in {\mathbb{N}}$. Assume that $\gamma -({\alpha }+k\beta )\in R^a$ and that $k$ is minimal with this property. Let ${\alpha }'=-{\alpha }$, $\beta '=-\beta $, $\gamma '=\gamma -({\alpha }+k\beta )$. Then ${\alpha }',\beta ',\gamma '\in R^a$ with ${\mathrm{Vol}}_3({\alpha }',\beta ',\gamma ')=1$.
Moreover, ${\alpha }'-\beta '\notin R^a$ by assumption, ${\alpha }'-\gamma '=-(\gamma -k\beta )\notin R^a$ by (2), and $\beta '-\gamma '=-(\gamma -{\alpha }-(k-1)\beta )\notin R^a$ by the minimality of $k$. Further, $\{{\alpha }',\beta ',\gamma '\}$ is not a base for $R^a$, since $\gamma =\gamma '-{\alpha }'-k\beta '$. Hence Claim (3) holds for $\al ',\beta ',\gamma '$. In particular, $$\gamma '+\beta '=\gamma -({\alpha }+(k+1)\beta )\in R^a.$$ This and the previous paragraph imply that $k\ge 3$. We distinguish two cases depending on the parity of $k$. First assume that $k$ is even. Let ${\alpha }'=\gamma +{\alpha }$ and $\beta '=-({\alpha }+k/2\beta )$. Then ${\mathrm{Vol}}_2({\alpha }',\beta ')=1$ and ${\alpha }'+2\beta '=\gamma -({\alpha }+k\beta )\in R^a$. Lemma \[le:badroots\] applied to ${\alpha }',\beta '$ gives that $\gamma -k/2\beta ={\alpha }'+\beta '\in R^a$, which contradicts (2). Finally, the case of odd $k$ can be excluded similarly by considering $V^a(\gamma +{\alpha },\gamma -({\alpha }+(k+1)\beta ))$. \(6) We get ${\mathrm{Vol}}_3({\alpha }',\beta ',\gamma ')=1$ since ${\mathrm{Vol}}_3({\alpha },\beta ,\gamma )=1$ and ${\mathrm{Vol}}_3$ is invariant under the right action of ${\mathrm{GL}}({\mathbb{Z}}^3)$. Further, $\beta '-\gamma '=-(2\beta +\gamma )\notin R^a$ by (4), and ${\alpha }'-\gamma '\notin R^a$ by (5). Finally, $({\alpha }',\beta ',\gamma ')$ is not a base for ${\mathbb{Z}}^I$ at $a$, since $R^a\ni \gamma -n_1{\alpha }-n_2\beta =\gamma '-n_1{\alpha }'+(1+n_2-kn_1)\beta '$, where $n_1,n_2\in {\mathbb{N}}$ are as in (1). \(7) We prove that $\gamma +3{\alpha }\notin R^a$. The rest follows by symmetry. If $2{\alpha }+\beta \in R^a$, then $\gamma +2{\alpha }\notin R^a$ by (4), and hence $\gamma +3{\alpha }\notin R^a$. Otherwise ${\alpha }+2\beta ,\gamma +2{\alpha }\in R^a$ by (4). Let $k$, ${\alpha }'$, $\beta '$, $\gamma '$ be as in (6). Then (6) and (3) give that $R^a\ni \gamma '+{\alpha }'=\gamma +{\alpha }+(k+1)\beta $. 
Since $\gamma +{\alpha }\in R^a$, Lemma \[le:badroots\] implies that $\gamma +{\alpha }+2\beta \in R^a$. Let $w$ be as in (1). If $\gamma +3{\alpha }\in R^a$, then Lemma \[le:badroots\] for the vectors $w(\gamma +{\alpha }+2\beta )$ and $w({\alpha }-\beta )$ implies that $w({\alpha }-\beta )\in R^a$, a contradiction. Thus $\gamma +3{\alpha }\notin R^a$. Recall that $\cC$ is a Cartan scheme of rank three and ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC$. \[root\_diffs\] Let $a\in A$ and ${\alpha },\beta,\gamma\in R^a$. If ${\mathrm{Vol}}_3({\alpha },\beta,\gamma)=1$ and none of $\alpha-\beta$, $\alpha-\gamma$, $\beta-\gamma$ are contained in $R^a$, then $\{{\alpha },\beta,\gamma \}$ is a base for ${\mathbb{Z}}^I$ at $a$. Assume to the contrary that $\{{\alpha },\beta ,\gamma \}$ is not a base for ${\mathbb{Z}}^I$ at $a$. Exchanging ${\alpha }$ and $\beta $ if necessary, by Lemma \[le:root\_diffs2\](4) we may assume that ${\alpha }+2\beta \in R^a$. By Lemma \[le:root\_diffs2\](6),(7) the triple $({\alpha }+2\beta ,-\beta , \gamma +\beta )$ satisfies the assumptions of Lemma \[le:root\_diffs2\], and $({\alpha }+2\beta )+2(-\beta )={\alpha }\in R^a$. Hence $2{\alpha }+3\beta =2({\alpha }+2\beta )+(-\beta )\notin R^a$ by Lemma \[le:root\_diffs2\](4). Thus $V^a({\alpha },\beta )\cap R^a=\{\pm {\alpha }, \pm ({\alpha }+\beta ),\pm ({\alpha }+2\beta ), \pm \beta \}$ by Proposition \[pr:R=Fseq\], and hence, using Lemma \[le:root\_diffs2\](1), we obtain from Lemma \[le:root\_diffs1\](2) that $\gamma -{\alpha }\in R^a$ or $\gamma -\beta \in R^a$. This is a contradiction to our initial assumption, and hence $\{{\alpha },\beta ,\gamma \}$ is a base for ${\mathbb{Z}}^I$ at $a$. \[convex\_diff2\] Let $a\in A$ and $\gamma _1,\gamma _2,{\alpha }\in R^a$. Assume that $\{\gamma _1,\gamma _2\}$ is a base for $V^a(\gamma _1,\gamma _2)$ at $a$ and that ${\mathrm{Vol}}_3(\gamma _1,\gamma _2,{\alpha })=1$. 
Then either $\{\gamma _1,\gamma _2,{\alpha }\}$ is a base for ${\mathbb{Z}}^I$ at $a$, or one of ${\alpha }-\gamma _1$, $\al-\gamma _2$ is contained in $R^a$. For the proof of Theorem \[th:class\] we need a bound for the entries of the Cartan matrices of ${\mathcal{C}}$. To get this bound we use the following. \[le:someroots\] Let $a\in A$. \(1) At most one of $c^a_{12}$, $c^a_{13}$, $c^a_{23}$ is zero. \(2) ${\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$. \(3) Let $k\in {\mathbb{Z}}$. Then $k{\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$ if and only if $k_1\le k\le k_2$, where $$\begin{aligned} k_1= \begin{cases} 0 & \text{if $c^a_{23}<0$,}\\ 1 & \text{if $c^a_{23}=0$,} \end{cases} \quad k_2= \begin{cases} -c^a_{12}-c^a_{13} & \text{if $c^{{\rho }_1(a)}_{23}<0$,}\\ -c^a_{12}-c^a_{13}-1 & \text{if $c^{{\rho }_1(a)}_{23}=0$.} \end{cases} \end{aligned}$$ \(4) We have $2{\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$ if and only if either $c^a_{12}+c^a_{13}\le -3$ or $c^a_{12}+c^a_{13}=-2$, $c^{\rfl _1(a)}_{23}<0$. \(5) Assume that $$\begin{aligned} \#(R^a_+\cap ({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2))\ge 5. \label{eq:Rbig} \end{aligned}$$ Then there exist $k\in {\mathbb{N}}_0$ such that $k{\alpha }_1+2{\alpha }_2+{\alpha }_3\in R^a$. Let $k_0$ be the smallest among all such $k$. Then $k_0$ is given by the following. 
$$\begin{aligned} \begin{cases} 0 & \text{if $c^a_{23}\le -2$,}\\ 1 & \text{if $-1\le c^a_{23}\le 0$, $c^a_{21}+c^a_{23}\le -2$, $c^{{\rho }_2(a)}_{13}<0$,}\\ 1 & \text{if $-1\le c^a_{23}\le 0$, $c^a_{21}+c^a_{23}\le -3$, $c^{{\rho }_2(a)}_{13}=0$,}\\ 2 & \text{if $c^a_{21}=c^a_{23}=-1$, $c^{{\rho }_2(a)}_{13}=0$,}\\ 2 & \text{if $c^a_{21}=-1$, $c^a_{23}=0$, $c^{{\rho }_2(a)}_{13}\le -2$,}\\ 3 & \text{if $c^a_{21}=-1$, $c^a_{23}=0$, $c^{{\rho }_2(a)}_{13}=-1$, $c^{{\rho }_2(a)}_{12}\le -3$,}\\ 3 & \text{if $c^a_{21}=-1$, $c^a_{23}=0$, $c^{{\rho }_2(a)}_{13}=-1$, $c^{{\rho }_2(a)}_{12}=-2$, $c^{{\rho }_1{\rho }_2(a)}_{23}<0$,}\\ 4 & \text{otherwise.} \end{cases} \end{aligned}$$ Further, if $c^a_{13}=0$ then $k_0\le 2$. We may assume that ${\mathcal{C}}$ is connected. Then, since ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is irreducible, Claim (1) holds by [@a-CH09a Def.4.5, Prop.4.6]. \(2) The claim is invariant under permutation of $I$. Thus by (1) we may assume that $c^a_{23}\not=0$. Hence ${\alpha }_2+{\alpha }_3\in R^a$. Assume first that $c^a_{13}=0$. Then $c^{{\rho }_1(a)}_{13}=0$ by (C2), $c^{{\rho }_1(a)}_{23}\not=0$ by (1), and ${\alpha }_2+{\alpha }_3\in R^{{\rho }_1(a)}_+$. Hence $\s^{{\rho }_1(a)}_1({\alpha }_2+{\alpha }_3)=-c^a_{12}{\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$. Therefore (2) holds by Lemma \[le:badroots\] for ${\alpha }={\alpha }_2+{\alpha }_3$ and $\beta ={\alpha }_1$. Assume now that $c^a_{13}\not=0$. By symmetry and the previous paragraph we may also assume that $c^a_{12},c^a_{23}\not=0$. Let $b={\rho }_1(a)$. If $c^b_{23}=0$ then ${\alpha }_1+{\alpha }_2+{\alpha }_3\in R^b$ by the previous paragraph. Then $$R^a\ni \s^b_1({\alpha }_1+{\alpha }_2+{\alpha }_3) =(-c^a_{12}-c^a_{13}-1){\alpha }_1+{\alpha }_2+{\alpha }_3,$$ and the coefficient of ${\alpha }_1$ is positive. Further, ${\alpha }_2+{\alpha }_3\in R^a$, and hence (2) holds in this case by Lemma \[le:badroots\]. 
Finally, if $c^b_{23}\not=0$, then ${\alpha }_2+\al _3\in R^b_+$, and hence $(-c^a_{12}-c^a_{13}){\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$. Since $-c^a_{12}-c^a_{13}>0$, (2) follows again from Lemma \[le:badroots\]. \(3) If $c^a_{23}<0$, then ${\alpha }_2+{\alpha }_3\in R^a$ and $-k{\alpha }_1+{\alpha }_2+{\alpha }_3\notin R^a$ for all $k\in {\mathbb{N}}$. If $c^a_{23}=0$, then ${\alpha }_1+{\alpha }_2+{\alpha }_3\in R^a$ by (2), and $-k{\alpha }_1+{\alpha }_2+{\alpha }_3\notin R^a$ for all $k\in {\mathbb{N}}_0$. Applying the same argument to $R^{{\rho }_1(a)}$ and using the reflection $\s^{{\rho }_1(a)}_1$ and Lemma \[le:badroots\] gives the claim. \(4) This follows immediately from (3). \(5) The first case follows from Corollary \[co:cij\] and the second and third cases are obtained from (4) by interchanging the elements $1$ and $2$ of $I$. We also obtain that if $k_0$ exists then $k_0\ge 2$ in all other cases. By \[eq:Rbig\] and Proposition \[pr:R=Fseq\] we conclude that ${\alpha }_1+{\alpha }_2\in R^a$. Then $c^a_{21}<0$ by Corollary \[co:cij\], and hence we are left with calculating $k_0$ if $-1\le c^a_{23}\le 0$, $c^a_{21}+c^a_{23}=-2$, $c^{{\rho }_2(a)}_{13}=0$, or $c^a_{21}=-1$, $c^a_{23}=0$. By (1), if $c^{{\rho }_2(a)}_{13}=0$ then $c^{{\rho }_2(a)}_{23}\not=0$, and hence $c^a_{23}<0$ by (C2). Thus we have to consider the elements $k{\alpha }_1+2{\alpha }_2+{\alpha }_3$, where $k\ge 2$, under the assumption that $$\begin{aligned} c^a_{21}=c^a_{23}=-1, \, c^{{\rho }_2(a)}_{13}=0 \quad \text{or}\quad c^a_{21}=-1,\, c^a_{23}=0. \label{eq:ccond1} \end{aligned}$$ Since $c^a_{21}=-1$, Condition \[eq:Rbig\] gives that $$c^{{\rho }_2(a)}_{12}\le -2,$$ see [@a-CH09a Lemma 4.8]. Further, the first set of equations in \[eq:ccond1\] implies that $c^{{\rho }_1{\rho }_2(a)}_{13}=0$, and hence $c^{{\rho }_1{\rho }_2(a)}_{23}<0$ by (1). Since ${\sigma }_2^a(2{\alpha }_1+2{\alpha }_2+{\alpha }_3)=2{\alpha }_1+{\alpha }_3-c^a_{23}\al _2$, the first set of equations in \[eq:ccond1\] and (4) imply that $k_0=2$.
Similarly, Corollary \[co:cij\] shows that $k_0=2$ under the second set of conditions in \[eq:ccond1\] if and only if $c^{{\rho }_2(a)}_{13}\le -2$. It remains to consider the situation for $$\begin{aligned} c^a_{21}=-1,\,c^a_{23}=0,\,c^{{\rho }_2(a)}_{13}=-1. \label{eq:ccond2} \end{aligned}$$ Indeed, the equation $c^a_{23}=0$ implies that $c^{{\rho }_2(a)}_{23}=0$ by (C2), and hence $c^{{\rho }_2(a)}_{13}<0$ by (1). Assuming \[eq:ccond2\] we obtain that ${\sigma }_2^a(3{\alpha }_1+2{\alpha }_2+{\alpha }_3)=3{\alpha }_1+{\alpha }_2+{\alpha }_3$, and hence (3) implies that $k_0=3$ if and only if the corresponding conditions in (5) are valid. The rest follows by looking at $\sigma _1 \sigma _2^a(4{\alpha }_1+2{\alpha }_2+\al _3)$ and is left to the reader. The last claim holds since $c^a_{13}=0$ implies that $c^a_{23}\not=0$ by (1). The assumption $\#(R^a_+\cap ({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2))\ge 5$ is needed to exclude the case $c^a_{21}=-1$, $c^{{\rho }_2(a)}_{12}=-2$, $c^{{\rho }_1\rfl _2(a)}_{21}=-1$, where $R^a_+\cap ({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2)=\{{\alpha }_2,{\alpha }_1+{\alpha }_2, 2{\alpha }_1+{\alpha }_2,{\alpha }_1\}$, by using Proposition \[pr:R=Fseq\] and Corollary \[co:cij\], see also the proof of [@a-CH09a Lemma 4.8]. \[cartan\_6\] Let $\cC$ be a Cartan scheme of rank three. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC$. Then all entries of the Cartan matrices of $\cC$ are greater than or equal to $-7$. It can be assumed that ${\mathcal{C}}$ is connected. We prove the theorem indirectly. To do so we may assume that there exists $a\in A$ such that $c^a_{12}\le -8$. Then Proposition \[pr:R=Fseq\] implies that $\# (R^a_+\cap ({\mathbb{Z}}{\alpha }_1+{\mathbb{Z}}{\alpha }_2))\ge 5$. By Lemma \[le:someroots\] there exists $k_0\in \{0,1,2,3,4\}$ such that ${\alpha }:=k_0{\alpha }_1+2{\alpha }_2+{\alpha }_3\in R^a_+$ and ${\alpha }-{\alpha }_1\notin R^a$.
By Lemma \[le:base2\] and the choice of $k_0$ the set $\{{\alpha },{\alpha }_1\}$ is a base for $V^a({\alpha },{\alpha }_1)$ at $a$. Corollary \[simple\_rkk\] implies that there exists a root $\gamma \in R^a$ such that $\{{\alpha },{\alpha }_1,\gamma \}$ is a base for ${\mathbb{Z}}^I$ at $a$. Let $d\in A$, $w\in \operatorname{Hom}(a,d)$, and $i_1,i_2,i_3\in I$ be such that $w({\alpha })={\alpha }_{i_1}$, $w({\alpha }_1)={\alpha }_{i_2}$, $w(\gamma )={\alpha }_{i_3}$. Let $b={\rho }_1(a)$. Again by Lemma \[le:someroots\] there exists $k_1\in \{0,1,2,3,4\}$ such that $\beta :=k_1{\alpha }_1+2{\alpha }_2+{\alpha }_3\in R^b_+$. Thus $$R^a_+\ni \s_1^b(\beta )=(-k_1-2c^a_{12}-c^a_{13}){\alpha }_1+2{\alpha }_2+\al _3.$$ Further, $$-k_1-2c^a_{12}-c^a_{13}-k_0>-c^a_{12}$$ since $k_0\le 2$ if $c^a_{13}=0$. Hence ${\alpha }_{i_1}+(1-c^a_{12}){\alpha }_{i_2} \in R^d$, that is, $c^d_{i_2 i_1}<c^a_{1 2}\le -8$. Iterating this argument, we conclude that there exists no lower bound for the entries of the Cartan matrices of ${\mathcal{C}}$, which is a contradiction to the finiteness of ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$. This proves the theorem. The bound in the theorem is not sharp. After completing the classification one can check that the entries of the Cartan matrices of ${\mathcal{C}}$ are always at least $-6$. The entry $-6$ appears for example in the Cartan scheme corresponding to the root system with number $53$, see Corollary \[co:cij\]. \[Euler\_char\] Let ${\mathcal{C}}$ be an irreducible connected simply connected Cartan scheme of rank three. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite root system of type ${\mathcal{C}}$. Let $e$ be the number of vertices, $k$ the number of edges, and $f$ the number of ($2$-dimensional) faces of the object change diagram of ${\mathcal{C}}$. Then $e-k+f=2$. Vertices of the object change diagram correspond to elements of $A$.
Since ${\mathcal{C}}$ is connected and ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is finite, the set $A$ is finite. Consider the equivalence relation on $I\times A$, where $(i,a)$ is equivalent to $(j,b)$ for some $i,j\in I$ and $a,b\in A$, if and only if $i=j$ and $b\in \{a,{\rho }_i(a)\}$. (This is also known as the pushout of $I\times A$ along the bijections ${\mathrm{id}}:I\times A\to I\times A$ and ${\rho }:I\times A\to I\times A$, $(i,a)\mapsto (i,{\rho }_i(a))$.) Since ${\mathcal{C}}$ is simply connected, ${\rho }_i(a)\not=a$ for all $i\in I$ and $a\in A$. Then edges of the object change diagram correspond to equivalence classes in $I\times A$. Faces of the object change diagram can be defined as equivalence classes of triples $(i,j,a)\in I\times I\times A\setminus \{(i,i,a)\,|\,i\in I,a\in A\}$, where $(i,j,a)$ and $(i',j',b)$ are equivalent for some $i,j,i',j'\in I$ and $a,b\in A$ if and only if $\{i,j\}=\{i',j'\}$ and $b\in \{({\rho }_j{\rho }_i)^m(a), {\rho }_i({\rho }_j{\rho }_i)^m(a)\,|\,m\in {\mathbb{N}}_0\}$. Since ${\mathcal{C}}$ is simply connected, (R4) implies that the face corresponding to a triple $(i,j,a)$ is a polygon with $2m_{i,j}^a$ vertices. For each face choose a triangulation by non-intersecting diagonals. Let $d$ be the total number of diagonals arising this way. Now consider the following two-dimensional simplicial complex $C$: The $0$-simplices are the objects. The $1$-simplices are the edges and the chosen diagonals of the faces of the object change diagram. The $2$-simplices are the $f+d$ triangles. Clearly, each edge is contained in precisely two triangles. By [@b-tomDieck91 Ch.III, (3.3), 2,3] the geometric realization $X$ of $C$ is a closed $2$-dimensional surface without boundary. The space $X$ is connected and compact. Any two morphisms of ${\mathcal{W}}({\mathcal{C}})$ with same source and target are equal because ${\mathcal{C}}$ is simply connected. By [@a-CH09a Thm.2.6] this equality follows from the Coxeter relations. 
In terms of the object change diagram, a Coxeter relation means that, for the corresponding face and any of its vertices, the two paths along the sides of the face towards the opposite vertex yield the same morphism. Hence diagonals can be interpreted as representatives of paths in a face between two vertices, and then all loops in $C$ become homotopic to the trivial loop. Hence $X$ is simply connected and therefore homeomorphic to a two-dimensional sphere by [@b-tomDieck91 Ch.III, Satz 6.9]. Its Euler characteristic is $2=e-(k+d)+(f+d)=e-k+f$. \[re:planesandfaces\] Assume that ${\mathcal{C}}$ is connected and simply connected, and let $a\in A$. Then any pair of opposite $2$-dimensional faces of the object change diagram can be interpreted as a plane in ${\mathbb{Z}}^I$ containing at least two positive roots ${\alpha },\beta \in R^a_+$. Indeed, let $b\in A$ and $i_1,i_2\in I$ with $i_1\not=i_2$. Since ${\mathcal{C}}$ is connected and simply connected, there exists a unique $w\in \operatorname{Hom}(a,b)$. Then $V^a(w^{-1}({\alpha }_{i_1}),w^{-1}({\alpha }_{i_2}))$ is a plane in ${\mathbb{Z}}^I$ containing at least two positive roots. One can easily check that this plane is independent of the choice of the representative of the face determined by $(i_1,i_2,b)\in I\times I\times A$. Further, let $w_0\in \operatorname{Hom}(b,d)$, where $d\in A$, be the longest element in ${\mathrm{Hom}(b,{\mathcal{W}}({\mathcal{C}}))}$. Let $j_1,j_2\in I$ be such that $w_0({\alpha }_{i_n})=-{\alpha }_{j_n}$ for $n=1,2$. Then $(j_1,j_2,d)$ determines the plane $$V^a( (w_0w)^{-1}({\alpha }_{j_1}),(w_0w)^{-1}({\alpha }_{j_2}))= V^a(w^{-1}({\alpha }_{i_1}),w^{-1}({\alpha }_{i_2})).$$ In this way we have attached to any pair of ($2$-dimensional) opposite faces of the object change diagram a plane containing at least two positive roots.
![The object change diagram of the last root system of rank three[]{data-label="fig:37posroots"}](wg37){width="9cm"} Let $<$ be a semigroup ordering on ${\mathbb{Z}}^I$ such that $0<\gamma $ for all $\gamma \in R^a_+$. Let ${\alpha },\beta \in R^a_+$ with ${\alpha }\not=\beta $, and assume that ${\alpha }$ and $\beta $ are the smallest elements in $R^a_+\cap V^a({\alpha },\beta )$ with respect to $<$. Then $\{{\alpha },\beta \}$ is a base for $V^a({\alpha },\beta )$ at $a$ by Lemma \[posrootssemigroup\]. By Corollary \[simple\_rkk\] there exists $b\in A$ and $w\in \operatorname{Hom}(a,b)$ such that $w({\alpha }),w(\beta )\in R^b_+$ are simple roots. Hence any plane in ${\mathbb{Z}}^I$ containing at least two elements of $R^a_+$ can be obtained by the construction in the previous paragraph. It remains to show that different pairs of opposite faces give rise to different planes. This follows from the fact that for any $b\in A$ and $i_1,i_2\in I$ with $i_1\not=i_2$ the conditions $$d\in A,\ u\in \operatorname{Hom}(b,d),\ j_1,j_2\in I,\ u({\alpha }_{i_1})={\alpha }_{j_1},\ u({\alpha }_{i_2})={\alpha }_{j_2}$$ have precisely two solutions: $u={\mathrm{id}}_b$ on the one side, and $u=w_0w_{i_1i_2}$ on the other side, where $w_{i_1i_2}=\cdots \s_{i_1}\s_{i_2}\s_{i_1}{\mathrm{id}}_b\in {\mathrm{Hom}(b,{\mathcal{W}}({\mathcal{C}}))}$ is the longest product of reflections ${\sigma }_{i_1}$, ${\sigma }_{i_2}$, and $w_0$ is an appropriate longest element of ${\mathcal{W}}({\mathcal{C}})$. The latter follows from the fact that $u$ has to map the base $\{\al _{i_1},{\alpha }_{i_2},{\alpha }_{i_3}\}$ for ${\mathbb{Z}}^I$ at $b$, where $I=\{i_1,i_2,i_3\}$, to another base, and any base consisting of two simple roots can be extended precisely in two ways to a base of ${\mathbb{Z}}^I$: by adding the third simple root or by adding a uniquely determined negative root. 
It follows from the construction and from [@a-CH09b Lemma 6.4] that the faces corresponding to a plane $V^a({\alpha },\beta )$, where ${\alpha },\beta \in R^a_+$ with ${\alpha }\not=\beta $, have as many edges as the cardinality of $V^a({\alpha },\beta )\cap R^a$ (or twice the cardinality of $V^a({\alpha },\beta )\cap R^a_+$). \[sum\_rank2\] Let $\cC$ be a connected simply connected Cartan scheme of rank three. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC$. Let $a\in A$ and let $M$ be the set of planes containing at least two elements of $R^a_+$. Then $$\sum_{V\in M} \#(V\cap R^a_+) = 3(\# M-1).$$ Let $e,k,f$ be as in Proposition \[Euler\_char\]. Then $\#M=f/2$ by Remark \[re:planesandfaces\]. For any vertex $b\in A$ there are three edges starting at $b$, and any edge is bounded by two vertices. Hence counting vertex-edge incidences in two ways one obtains that $3e=2k$. Proposition \[Euler\_char\] gives that $e-k+f=2$. Hence $2k = 3e = 3(2-f+k)$, that is, $k=3f-6$. Any plane $V$ corresponds to a pair of opposite faces, each of which is a polygon consisting of $2\# (V\cap R^a_+)$ edges, see Remark \[re:planesandfaces\]. Summing the numbers of edges over all faces of the object change diagram (that is, twice over all planes), each edge is counted twice. Hence $$2 \sum_{V\in M} 2\#(V\cap R^a_+) = 2k = 2(3f-6),$$ and dividing by $4$ and using $\#M=f/2$ yields the formula claimed in the theorem.
Let $a$ be any object and assume that $\#(V\cap R^a_+)>2$ for all $V\in M$. Then $\sum_{V\in M} \#(V\cap R^a_+) \ge 3\# M$, contradicting Thm. \[sum\_rank2\]. Hence for all objects $a$ there exists a plane $V$ with $\#(V\cap R^a_+)=2$. Now consider the object change diagram and count the number of faces: let $2q_i$ be the number of faces with $2i$ edges. Then Thm. \[sum\_rank2\] translates to $$\label{thm_trans} \sum_{i\ge 2} i q_i = -3+3\sum_{i\ge 2} q_i.$$ Assume that there exists no object adjacent to both a square and a hexagon. Since ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is irreducible, no two squares have a common edge, see Lemma \[le:someroots\](1). Look at the edges ending in vertices of squares, and count each edge once for both polygons adjacent to it. One checks that there are at least twice as many edges adjacent to a polygon with at least $8$ vertices as edges of squares. This gives that $$\sum_{i\ge 4} 2i \cdot 2q_i \ge 2\cdot 4\cdot 2q_2.$$ By Equation \[thm\_trans\] we then have $-3+3\sum_{i\ge 2}q_i\ge 4q_2+2q_2+3q_3$, that is, $q_2 < \sum_{i\ge 4}q_i$. But then, on average, each face has more than $6$ edges, which contradicts Thm. \[sum\_rank2\]. Hence there is an object $a$ such that there exist $\alpha,\beta,\gamma\in R^a_+$ as above satisfying Equation \[square\_hexagon\]. We have $\alpha+\gamma, \beta +\gamma , \alpha+\beta+\gamma\in R^a_+$ by Lemma \[le:someroots\](1),(2) and Corollary \[co:cij\].

The classification {#sec:class}
==================

In this section we explain the classification of connected simply connected Cartan schemes of rank three such that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system. We formulate the main result in Theorem \[th:class\]. The proof of Theorem \[th:class\] is performed using computer calculations based on results of the previous sections. Our algorithm described below is sufficiently powerful: the implementation in C terminates within a few hours on a standard computer.
Without any one of these theorems, the calculations would take at least several weeks. \(1) Let $\cC$ be a connected Cartan scheme of rank three with $I=\{1,2,3\}$. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC$. Then there exists an object $a\in A$ and a linear map $\tau \in \operatorname{Aut}({\mathbb{Z}}^I)$ such that $\tau ({\alpha }_i)\in \{{\alpha }_1,{\alpha }_2,{\alpha }_3\}$ for all $i\in I$ and $\tau (R^a_+)$ is one of the sets listed in Appendix \[ap:rs\]. Moreover, $\tau (R^a_+)$ with this property is uniquely determined. \(2) Let $R$ be one of the $55$ subsets of ${\mathbb{Z}}^3$ appearing in Appendix \[ap:rs\]. There exists up to equivalence a unique connected simply connected Cartan scheme ${\mathcal{C}}(I,A,({\rho }_i)_{i\in I},(C^a)_{a\in A})$ such that $R\cup -R$ is the set of real roots $R^a$ in an object $a\in A$. Moreover ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type $\cC $. \[th:class\] Let $<$ be the lexicographic ordering on ${\mathbb{Z}}^3$ such that $\al_3<\al_2<\al_1$. Then ${\alpha }>0$ for any ${\alpha }\in {\mathbb{N}}_0^3\setminus \{0\}$. Let ${\mathcal{C}}$ be a connected Cartan scheme with $I=\{1,2,3\}$. Assume that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is a finite irreducible root system of type ${\mathcal{C}}$. Let $a\in A$. By Theorem \[root\_is\_sum\] we may construct $R^a_+$ inductively by starting with $R^a_+=\{{\alpha }_3,{\alpha }_2,{\alpha }_1\}$, and appending in each step a sum of a pair of positive roots which is greater than all roots in $R^a_+$ we already have. During this process, we keep track of all planes containing at least two positive roots, and the positive roots on them.
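The ordering underlying this construction is easy to make concrete. The following Python sketch (our own illustration; the authors' implementation is in C) encodes roots as coefficient vectors with respect to $(\al_1,\al_2,\al_3)$, so that tuple comparison is exactly the lexicographic ordering with $\al_3<\al_2<\al_1$:

```python
from itertools import product

# Roots as coefficient vectors (c1, c2, c3) with respect to the simple
# roots (alpha_1, alpha_2, alpha_3).  Python's tuple comparison is then
# precisely the lexicographic ordering in which alpha_3 < alpha_2 < alpha_1.
a1, a2, a3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def add(*roots):
    """Coordinatewise sum of several roots."""
    return tuple(map(sum, zip(*roots)))

assert a3 < a2 < a1

# alpha > 0 for every alpha in N_0^3 \ {0} (verified on a finite box):
zero = (0, 0, 0)
assert all(c > zero for c in product(range(4), repeat=3) if c != zero)

# The starting set taken from Corollary [ex_square] is already listed
# in increasing order, so the inductive construction can proceed from it:
start = [a3, a2, add(a2, a3), a1, add(a1, a2), add(a1, a2, a3)]
assert start == sorted(start)
print(start[-1])  # the largest root of the starting set: (1, 1, 1)
```

The same comparison is what the search uses to decide whether a candidate sum of two positive roots may still be appended.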
Lemma \[posrootssemigroup\] implies that for any known root ${\alpha }$ and new root $\beta $ either $V^a({\alpha },\beta )$ contains no other known positive roots, or $\beta $ is not part of the unique base for $V^a({\alpha },\beta )$ at $a$ consisting of positive roots. In the first case the roots ${\alpha },\beta $ generate a new plane. It can happen that ${\mathrm{Vol}}_2({\alpha },\beta )>1$, and then $\{{\alpha },\beta \}$ is not a base for $V^a({\alpha },\beta )$ at $a$, but this does not matter for the algorithm. In the second case the known roots in $V^a({\alpha },\beta )\cap R^a_+$ together with $\beta $ have to form an $\cF$-sequence by Proposition \[pr:R=Fseq\]. Sometimes one of the theorems above (see the details below) tells us that it is not possible to add more positive roots to a plane. Then we can mark it as “finished”. Note that to obtain a finite number of root systems as output, we have to ensure that we compute only irreducible systems since there are infinitely many inequivalent reducible root systems of rank two. Hence starting with $\{{\alpha }_3,{\alpha }_2,{\alpha }_1\}$ will not work. However, by Corollary \[ex\_square\], starting with $\{{\alpha }_3,{\alpha }_2,\al_2+\al_3,{\alpha }_1,\al_1+\al_2,\al_1+\al_2+\al_3\}$ will still yield at least one root system for each desired Cartan scheme (notice that any roots one would want to add are lexicographically greater). In this section, we will call [*root system fragment*]{} (or [*rsf*]{}) the following set of data associated to a set of positive roots $R$ under construction: - normal vectors for the planes with at least two positive roots - labels of positive roots on these planes - Cartan entries corresponding to the root systems of the planes - an array of flags for finished planes - the sum $s_R$ of $\#(V\cap R)$ over all planes $V$ with at least two positive roots, see Theorem \[sum\_rank2\] - for each root $r\in R$ the list of planes it belongs to.
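In code, a root system fragment could be modelled for instance as follows (a Python sketch with names of our choosing; the actual C implementation may organize these data differently):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Root = Tuple[int, int, int]  # coefficient vector w.r.t. the simple roots

@dataclass
class Plane:
    normal: Root                                      # a normal vector of the plane
    fseq: List[Root] = field(default_factory=list)    # F-sequence of positive roots on it
    cartan: List[int] = field(default_factory=list)   # Cartan entries of its rank-two root system
    finished: bool = False                            # no further positive roots may be added

@dataclass
class RootSystemFragment:
    planes: List[Plane] = field(default_factory=list)            # planes with >= 2 positive roots
    s_R: int = 0                                                 # sum of #(V cap R), cf. Thm. [sum_rank2]
    planes_of: Dict[Root, List[int]] = field(default_factory=dict)  # root -> indices of its planes

def note_root_on_plane(rsf: RootSystemFragment, root: Root,
                       plane_index: int, pos: int) -> None:
    """Record that a new root lies on an existing plane, at position pos of its F-sequence."""
    rsf.planes[plane_index].fseq.insert(pos, root)
    rsf.planes_of.setdefault(root, []).append(plane_index)
    rsf.s_R += 1
```

Keeping $s_R$ and the per-root plane lists up to date incrementally is what makes the check of Theorem \[sum\_rank2\] cheap during the search.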
These data can be obtained directly from $R$, but the calculation is faster if we continuously update them. We divide the algorithm into three parts. The main part is Algorithm 4.4, see below. The first part updates a root system fragment with a new root and uses Theorems \[root\_diffs\] and \[cartan\_6\] to possibly refuse doing so: [**Algorithm 4.2.**]{} [**AppendRoot**]{}($\alpha$,$B$,$\tilde B$,$\hat\alpha$)\ [*Append a root to an rsf*]{}.\ [**Input:**]{} a root $\alpha$, an rsf $B$, an empty rsf $\tilde B$, a root $\hat\alpha$.\ [**Output:**]{} $\begin{cases} 0: & \mbox{if } \alpha \mbox{ may be appended, new rsf is then in } \tilde B, \\ 1: & \mbox{if } \alpha \mbox{ may not be appended}, \\ 2: & \mbox{if $\alpha \in R^a_+$ implies the existence of $\beta \in R^a_+$}\\ & \mbox{with $\hat\alpha<\beta <\alpha $.} \end{cases}$\ [**1.**]{} Let $r$ be the number of planes containing at least two elements of $R$. For documentation purposes let $V_1,\dots,V_r$ denote these planes. For any $i\in \{1,\dots,r\}$ let $v_i$ be a normal vector for $V_i$, and let $R_i$ be the $\cF$-sequence of $V_i\cap R$. Set $i \leftarrow 1$, $g \leftarrow 1$, $c\leftarrow [\:]$, $p \leftarrow [\:]$, $d\leftarrow\{\:\}$. During the algorithm $c$ will be an ordered subset of $\{1,\dots,r\}$, $p$ a corresponding list of “positions”, and $d$ a subset of $R$. \[A1\_2\] If $i\le r$ and $g\ne 0$, then compute the scalar product $g:=(\alpha , v_i)$. (Then $g=\det ({\alpha },\gamma _1,\gamma _2) =\pm {\mathrm{Vol}}_3({\alpha },\gamma _1,\gamma _2)$, where $\{\gamma _1,\gamma _2\}$ is the basis of $V_i$ consisting of positive roots.) Otherwise go to Step \[A1\_6\]. \[A1\_3\] If $g=0$ then do the following: If $V_i$ is not finished yet, then check if ${\alpha }$ extends $R_i$ to a new ${\mathcal{F}}$-sequence. If yes, add the roots of $R_i$ to $d$, append $i$ to $c$, append the position of the insertion of ${\alpha }$ in $R_i$ to $p$, let $g \leftarrow 1$, and go to Step 5.
If $g^2=1$, then use Corollary \[convex\_diff2\]: Let $\gamma_1$ and $\gamma_2$ be the beginning and the end of the $\cF $-sequence $R_i$, respectively. (Then $\{\gamma _1,\gamma _2\}$ is a base for $V_i$ at $a$). Let $\delta_1 \leftarrow \alpha - \gamma_1$, $\delta_2 \leftarrow \alpha - \gamma_2$. If $\delta_1,\delta _2\notin R$, then return $1$ if $\delta _1,\delta _2 \le \hat {\alpha }$ and return $2$ otherwise. Set $i \leftarrow i+1$ and go to Step \[A1\_2\]. \[A1\_6\] If there is no objection to appending $\alpha$ so far, i.e. $g \ne 0$, then copy $B$ to $\tilde B$ and include $\alpha$ into $\tilde B$: use $c,p$ to extend existing ${\mathcal{F}}$-sequences, and use (the complement of) $d$ to create new planes. Finally, apply Theorem \[cartan\_6\]: If there is a Cartan entry less than $-7$ then return 1, else return 0. If $g=0$ then return 2. The second part looks for small roots which we must include in any case. The function is based on Proposition \[pr:suminR\]. This is a strong restriction during the process. [**Algorithm 4.3.**]{} [**RequiredRoot**]{}($R$,$B$,$\hat\alpha$)\ [*Find a smallest required root under the assumption that all roots $\le \hat {\alpha }$ are known*]{}.\ [**Input:**]{} $R$ a set of roots, $B$ an rsf for $R$, $\hat \alpha $ a root.\ [**Output:**]{} $\begin{cases} 0 & \mbox{if we cannot determine such a root}, \\ 1,\varepsilon & \mbox{if we have found a small missing root $\varepsilon $ with $\varepsilon >\hat \alpha $},\\ 2 & \mbox{if the given configuration is impossible}. \end{cases}$\ [**1.**]{} Initialize the return value $f \leftarrow 0$. \[A2\_0\] We use the same notation as in Algo. 4.2, step 1. For all $\gamma _1$ in $R$ and all $(j,k)\in \{1,\dots,r\}\times \{1,\dots,r\}$ such that $j\not=k$, $\gamma _1\in R_j\cap R_k$, and both $R_j,R_k$ contain two elements, let $\gamma _2,\gamma _3\in R$ such that $R_j=\{\gamma _1,\gamma _2\}$, $R_k=\{\gamma _1,\gamma _3\}$.
If ${\mathrm{Vol}}_3(\gamma _1,\gamma _2,\gamma _3) = 1$, then do Steps \[A2\_a\] to \[A2\_b\]. \[A2\_a\] $\xi_2 \leftarrow \gamma_1+\gamma_2$, $\xi_3 \leftarrow \gamma_1+\gamma_3$. If $\hat\alpha \ge \xi_2$: If $\hat\alpha \ge \xi_3$ or plane $V_k$ is already finished, then return 2. If $f=0$ or $\varepsilon > \xi_3$, then $\varepsilon \leftarrow \xi_3$, $f\leftarrow 1$. Go to Step \[A2\_0\] and continue loop. If $\hat\alpha \ge \xi_3$: If plane $V_j$ is already finished, then return 2. If $f=0$ or $\varepsilon > \xi_2$, then $\varepsilon \leftarrow \xi_2$, $f\leftarrow 1$. \[A2\_b\] Go to Step \[A2\_0\] and continue loop. Return $f,\varepsilon$. Finally, we recursively add roots to a set, update the rsf and include required roots: [**Algorithm 4.4.**]{} [**CompleteRootSystem**]{}($R$,$B$,$\hat\alpha$,$u$,$\beta$)\ [*\[mainalg\]Collects potential new roots, appends them and calls itself again*]{}.\ [**Input:**]{} $R$ a set of roots, $B$ an rsf for $R$, $\hat\alpha$ a lower bound for new roots, $u$ a flag, $\beta$ a vector which is necessarily a root if $u=$ True.\ [**Output:**]{} Root systems containing $R$.\ [**1.**]{} \[A3\] Check Theorem \[sum\_rank2\]: If $s_R = 3(r-1)$, where $r$ is the number of planes containing at least two positive roots, then output $R$ (and continue). We have found a potential root system. If we have no required root yet, i.e. $u=$ False, then\ $f,\varepsilon:=$RequiredRoot$(R,B,\hat{\alpha })$. If $f=1$, then we have found a required root; we call CompleteRootSystem($R,B,\hat\alpha, True, \varepsilon$) and terminate. If $f=2$, then terminate. Potential new roots will be collected in $Y\leftarrow \{\:\}$; $\tilde B$ will be the new rsf. For all planes $V_i$ of $B$ which are not finished, do Steps \[A3\_a\] to \[A3\_b\]. \[A3\_a\] $\nu \leftarrow 0$. For $\zeta$ in the set of roots that may be added to the plane $V_i$ such that $\zeta> \hat\alpha$, do the following: - set $\nu \leftarrow \nu+1$.
- If $\zeta \notin Y$, then $Y \leftarrow Y \cup \{\zeta\}$. If moreover $u=$ False or $\beta > \zeta$, then - $y \leftarrow$ AppendRoot($\zeta,B,\tilde B,\hat\alpha$); - if $y = 0$ then CompleteRootSystem($R\cup\{\zeta\},\tilde B,\zeta , u, \beta$). - if $y = 1$ then $\nu \leftarrow \nu-1$. \[A3\_b\] If $\nu = 0$, then mark $V_i$ as finished in $\tilde B$. If $u =$ True and AppendRoot($\beta,B,\tilde B,\hat\alpha$) = 0, then call\ CompleteRootSystem($R\cup\{\beta\},\tilde B,\beta, \textrm{False}, \beta$).\ Terminate the function call. Note that we only used necessary conditions for root systems, so after the computation we still need to check which of the sets are indeed root systems. A short program in [Magma]{} confirms that Algorithm 4.4 yields only root systems, for instance by means of the following algorithm: [**Algorithm 4.5.**]{} [**RootSystemsForAllObjects**]{}($R$)\ [*Returns the root systems for all objects if $R=R^a_+$ determines a Cartan scheme ${\mathcal{C}}$ such that ${\mathcal{R}}{^\mathrm{re}}({\mathcal{C}})$ is an irreducible root system*]{}.\ [**Input:**]{} $R$ the set of positive roots at one object.\ [**Output:**]{} the set of root systems at all objects, or $\{\}$ if $R$ does not yield a Cartan scheme as desired.\ [**1.**]{} \[A4\] $N \leftarrow [R]$, $M \leftarrow \{\}$. While $|N| > 0$, do steps \[begwhile\] to \[endwhile\]. Let $F$ be the last element of $N$. Remove $F$ from $N$ and add it to $M$.\[begwhile\] Let $C$ be the Cartan matrix of $F$. Compute the three simple reflections given by $C$. For each simple reflection $s$, do:\[endwhile\] - Compute $G:=\{s(v)\mid v\in F\}$. If an element of $G$ has positive and negative coefficients, then return $\{\}$. Otherwise multiply the negative roots of $G$ by $-1$. - If $G\notin M$, then append $G$ to $N$. Return $M$. We list all $55$ root systems in Appendix \[ap:rs\]. It is also interesting to summarize some of the invariants, which is done in Table 1.
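Before turning to the table, Theorem \[sum\_rank2\] can be checked directly on the smallest entry of the classification (Nr. 1 of Appendix \[ap:rs\], with six positive roots). The following Python sketch (ours, independent of the C implementation) enumerates the planes spanned by pairs of positive roots:

```python
from itertools import combinations
from math import gcd

def cross(u, v):
    """Cross product: a normal vector of the plane spanned by u and v."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def canonical(n):
    """Scale a nonzero integer normal vector to a canonical representative."""
    g = gcd(gcd(abs(n[0]), abs(n[1])), abs(n[2]))
    n = tuple(c // g for c in n)
    first = next(c for c in n if c != 0)
    return n if first > 0 else tuple(-c for c in n)

# Positive roots of Nr. 1, as coefficient vectors w.r.t. the simple roots.
roots = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (1, 1, 1)]

planes = {}  # canonical normal vector -> set of positive roots on that plane
for u, v in combinations(roots, 2):
    n = cross(u, v)
    if n != (0, 0, 0):  # u and v are linearly independent, so they span a plane
        planes.setdefault(canonical(n), set()).update((u, v))

sizes = sorted(len(s) for s in planes.values())
print(len(planes), sizes)                 # 7 planes of sizes [2, 2, 2, 3, 3, 3, 3]
print(sum(sizes), 3 * (len(planes) - 1))  # both sums equal 18
```

The multiplicities $2^3, 3^4$ agree with the “planes” column of Table 1 for Nr. 1, and $\sum_{V\in M} \#(V\cap R^a_+) = 18 = 3(\#M-1)$, as the theorem predicts.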
Let ${\mathcal O}=\{R^a \mid a \in A\}$ denote the set of different root systems. By identifying objects with the same root system one obtains a quotient Cartan scheme of the simply connected Cartan scheme of the classification. In the fifth column we give the automorphism group of one (equivalently, any) object of this quotient. The last column gives the multiplicities of planes; for example $3^7$ means that there are $7$ different planes containing precisely $3$ positive roots. Nr. $|R_+^a|$ $|{\mathcal O}|$ $|A|$ $\operatorname{Hom}(a)$ planes ------ ----------- ------------------ ------- --------------------------- -------------------------------------------------------- $1$ $6$ $1$ $24$ $A_3$ $2^{3}, 3^{4}, $ $2$ $7$ $4$ $32$ $A_1\times A_1\times A_1$ $2^{3}, 3^{6}, $ $3$ $8$ $5$ $40$ $B_2$ $2^{4}, 3^{6}, 4^{1}, $ $4$ $9$ $1$ $48$ $B_3$ $2^{6}, 3^{4}, 4^{3}, $ $5$ $9$ $1$ $48$ $B_3$ $2^{6}, 3^{4}, 4^{3}, $ $6$ $10$ $5$ $60$ $A_1\times A_2$ $2^{6}, 3^{7}, 4^{3}, $ $7$ $10$ $10$ $60$ $A_2$ $2^{6}, 3^{7}, 4^{3}, $ $8$ $11$ $9$ $72$ $A_1\times A_1\times A_1$ $2^{7}, 3^{8}, 4^{4}, $ $9$ $12$ $21$ $84$ $A_1\times A_1$ $2^{8}, 3^{10}, 4^{3}, 5^{1}, $ $10$ $12$ $14$ $84$ $A_2$ $2^{9}, 3^{7}, 4^{6}, $ $11$ $13$ $4$ $96$ $G_2\times A_1$ $2^{9}, 3^{12}, 4^{3}, 6^{1}, $ $12$ $13$ $12$ $96$ $A_1\times A_1\times A_1$ $2^{10}, 3^{10}, 4^{3}, 5^{2}, $ $13$ $13$ $2$ $96$ $B_3$ $2^{12}, 3^{4}, 4^{9}, $ $14$ $13$ $2$ $96$ $B_3$ $2^{12}, 3^{4}, 4^{9}, $ $15$ $14$ $56$ $112$ $A_1$ $2^{11}, 3^{12}, 4^{4}, 5^{2}, $ $16$ $15$ $16$ $128$ $A_1\times A_1\times A_1$ $2^{13}, 3^{12}, 4^{6}, 5^{2}, $ $17$ $16$ $36$ $144$ $A_1\times A_1$ $2^{14}, 3^{15}, 4^{6}, 5^{1}, 6^{1}, $ $18$ $16$ $24$ $144$ $A_2$ $2^{15}, 3^{13}, 4^{6}, 5^{3}, $ $19$ $17$ $10$ $160$ $B_2\times A_1$ $2^{16}, 3^{16}, 4^{7}, 6^{2}, $ $20$ $17$ $10$ $160$ $B_2\times A_1$ $2^{16}, 3^{16}, 4^{7}, 6^{2}, $ $21$ $17$ $10$ $160$ $B_2\times A_1$ $2^{18}, 3^{12}, 4^{7}, 5^{4}, $ $22$ $18$ $30$ $180$ $A_2$ $2^{18}, 3^{18}, 
4^{6}, 5^{3}, 6^{1}, $ $23$ $18$ $90$ $180$ $A_1$ $2^{19}, 3^{16}, 4^{6}, 5^{5}, $ $24$ $19$ $25$ $200$ $A_1\times A_1\times A_1$ $2^{20}, 3^{20}, 4^{6}, 5^{4}, 6^{1}, $ $25$ $19$ $8$ $192$ $G_2\times A_1$ $2^{21}, 3^{18}, 4^{6}, 6^{4}, $ $26$ $19$ $50$ $200$ $A_1\times A_1$ $2^{20}, 3^{20}, 4^{6}, 5^{4}, 6^{1}, $ $27$ $19$ $25$ $200$ $A_1\times A_1\times A_1$ $2^{20}, 3^{20}, 4^{6}, 5^{4}, 6^{1}, $ $28$ $19$ $8$ $192$ $G_2\times A_1$ $2^{24}, 3^{12}, 4^{6}, 5^{6}, 6^{1}, $ $29$ $20$ $27$ $216$ $B_2$ $2^{20}, 3^{26}, 4^{4}, 5^{4}, 8^{1}, $ $30$ $20$ $110$ $220$ $A_1$ $2^{21}, 3^{24}, 4^{6}, 5^{4}, 7^{1}, $ $31$ $20$ $110$ $220$ $A_1$ $2^{23}, 3^{20}, 4^{7}, 5^{5}, 6^{1}, $ $32$ $21$ $15$ $240$ $B_2\times A_1$ $2^{22}, 3^{28}, 4^{6}, 5^{4}, 8^{1}, $ $33$ $21$ $30$ $240$ $A_1\times A_1\times A_1$ $2^{26}, 3^{20}, 4^{9}, 5^{4}, 6^{2}, $ $34$ $21$ $5$ $240$ $B_3$ $2^{24}, 3^{24}, 4^{9}, 6^{4}, $ $35$ $22$ $44$ $264$ $A_2$ $2^{27}, 3^{25}, 4^{9}, 5^{3}, 6^{3}, $ $36$ $25$ $42$ $336$ $A_1\times A_1\times A_1$ $2^{33}, 3^{34}, 4^{12}, 5^{2}, 6^{3}, 8^{1}, $ $37$ $25$ $14$ $336$ $G_2\times A_1$ $2^{36}, 3^{30}, 4^{9}, 5^{6}, 6^{4}, $ $38$ $25$ $28$ $336$ $A_1\times A_2$ $2^{36}, 3^{30}, 4^{9}, 5^{6}, 6^{4}, $ $39$ $25$ $7$ $336$ $B_3$ $2^{36}, 3^{28}, 4^{15}, 6^{6}, $ $40$ $26$ $182$ $364$ $A_1$ $2^{35}, 3^{39}, 4^{10}, 5^{4}, 6^{3}, 8^{1}, $ $41$ $26$ $182$ $364$ $A_1$ $2^{37}, 3^{36}, 4^{9}, 5^{6}, 6^{3}, 7^{1}, $ $42$ $27$ $49$ $392$ $A_1\times A_1\times A_1$ $2^{38}, 3^{42}, 4^{9}, 5^{6}, 6^{3}, 8^{1}, $ $43$ $27$ $98$ $392$ $A_1\times A_1$ $2^{39}, 3^{40}, 4^{10}, 5^{6}, 6^{2}, 7^{2}, $ $44$ $27$ $98$ $392$ $A_1\times A_1$ $2^{39}, 3^{40}, 4^{10}, 5^{6}, 6^{2}, 7^{2}, $ $45$ $28$ $420$ $420$ $1$ $2^{41}, 3^{44}, 4^{11}, 5^{6}, 6^{2}, 7^{1}, 8^{1}, $ $46$ $28$ $210$ $420$ $A_1$ $2^{42}, 3^{42}, 4^{12}, 5^{6}, 6^{1}, 7^{3}, $ $47$ $28$ $70$ $420$ $A_2$ $2^{42}, 3^{42}, 4^{12}, 5^{6}, 6^{1}, 7^{3}, $ $48$ $29$ $56$ $448$ $A_1\times A_1\times A_1$ $2^{44}, 3^{46}, 4^{13}, 
5^{6}, 6^{2}, 8^{2}, $ $49$ $29$ $112$ $448$ $A_1\times A_1$ $2^{45}, 3^{44}, 4^{14}, 5^{6}, 6^{1}, 7^{2}, 8^{1}, $ $50$ $29$ $112$ $448$ $A_1\times A_1$ $2^{45}, 3^{44}, 4^{14}, 5^{6}, 6^{1}, 7^{2}, 8^{1}, $ $51$ $30$ $238$ $476$ $A_1$ $2^{49}, 3^{44}, 4^{17}, 5^{6}, 6^{1}, 7^{1}, 8^{2}, $ $52$ $31$ $21$ $504$ $G_2\times A_1$ $2^{54}, 3^{42}, 4^{21}, 5^{6}, 6^{1}, 8^{3}, $ $53$ $31$ $21$ $504$ $G_2\times A_1$ $2^{54}, 3^{42}, 4^{21}, 5^{6}, 6^{1}, 8^{3}, $ $54$ $34$ $102$ $612$ $A_2$ $2^{60}, 3^{63}, 4^{18}, 5^{6}, 6^{4}, 8^{3}, $ $55$ $37$ $15$ $720$ $B_3$ $2^{72}, 3^{72}, 4^{24}, 6^{10}, 8^{3}, $ [Table 1: Invariants of irreducible root systems of rank three]{} At first sight, one is tempted to look for a formula for the number of objects in the universal covering depending on the number of roots. There is an obvious one: consider the coefficients of $4/((1-x)^2(1-x^4))$. However, there are exceptions, for example nr. 29 with $20$ positive roots and $216$ objects (instead of $220$). Rank 3 Nichols algebras of diagonal type with finite irreducible arithmetic root system are classified in [@a-Heck05b Table 2]. In Table 2 we identify the Weyl groupoids of these Nichols algebras. ----------------------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- row in [@a-Heck05b Table 2] 1 2 3 4 5 6 7 8 9 Weyl groupoid 1 5 4 1 5 3 11 1 2 row in [@a-Heck05b Table 2] 10 11 12 13 14 15 16 17 18 Weyl groupoid 2 2 5 13 5 6 7 8 14 ----------------------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- Irreducible root systems of rank three {#ap:rs} ====================================== We give the roots in a multiplicative notation to save space: The word $1^x2^y3^z$ corresponds to $x\alpha_3+y\alpha_2+z\alpha_1$. Notice that we have chosen a “canonical” object for each groupoid. Write $\pi(R^a_+)$ for the set $R^a_+$ where the coordinates are permuted via $\pi\in S_3$. 
Then the set listed below is the minimum of $\{\pi(R^a_+)\mid a\in A,\:\: \pi\in S_3\}$ with respect to the lexicographical ordering on the sorted sequences of roots. Nr. $1$ with $6$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $123$\ Nr. $2$ with $7$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $23$, $123$\ Nr. $3$ with $8$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{2}23$\ Nr. $4$ with $9$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{2}23$, $1^{2}23^{2}$\ Nr. $5$ with $9$ positive roots:\ $1$, $2$, $3$, $12$, $23$, $1^{2}2$, $123$, $1^{2}23$, $1^{2}2^{2}3$\ Nr. $6$ with $10$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{2}23$, $1^{3}23$\ Nr. $7$ with $10$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $23$, $1^{2}2$, $123$, $1^{2}23$, $1^{2}2^{2}3$\ Nr. $8$ with $11$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{2}23$, $1^{3}23$, $1^{3}2^{2}3$\ Nr. $9$ with $12$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$\ Nr. $10$ with $12$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{2}23$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$\ Nr. $11$ with $13$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$\ Nr. $12$ with $13$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{4}23$, $1^{4}2^{2}3$\ Nr. $13$ with $13$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{2}23$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$\ Nr. $14$ with $13$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $13^{2}$, $1^{2}23$, $123^{2}$, $1^{2}23^{2}$, $1^{3}23^{2}$, $1^{3}2^{2}3^{2}$\ Nr. 
$15$ with $14$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$\ Nr. $16$ with $15$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$\ Nr. $17$ with $16$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$\ Nr. $18$ with $16$ positive roots:\ $1$, $2$, $3$, $12$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{4}2^{3}3$, $1^{4}2^{3}3^{2}$\ Nr. $19$ with $17$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{4}23$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$\ Nr. $20$ with $17$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{5}2^{2}3^{2}$\ Nr. $21$ with $17$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{5}2^{3}3$, $1^{5}2^{3}3^{2}$, $1^{6}2^{3}3^{2}$\ Nr. $22$ with $18$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{5}2^{3}3$, $1^{6}2^{3}3$\ Nr. $23$ with $18$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{4}2^{3}3$, $1^{4}2^{3}3^{2}$\ Nr. 
$24$ with $19$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{3}3^{2}$\ Nr. $25$ with $19$ positive roots:\ $1$, $2$, $3$, $12$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{6}2^{3}3^{2}$\ Nr. $26$ with $19$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $23$, $1^{2}2$, $12^{2}$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{4}2^{3}3$, $1^{4}2^{3}3^{2}$\ Nr. $27$ with $19$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{4}2^{3}3$, $1^{4}2^{3}3^{2}$\ Nr. $28$ with $19$ positive roots:\ $1$, $2$, $3$, $12$, $23$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{6}2^{3}3$, $1^{6}2^{4}3$\ Nr. $29$ with $20$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$\ Nr. $30$ with $20$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{3}3^{2}$\ Nr. $31$ with $20$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{6}2^{3}3^{2}$\ Nr. 
$32$ with $21$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{3}3^{2}$\ Nr. $33$ with $21$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{6}2^{3}3$, $1^{6}2^{3}3^{2}$\ Nr. $34$ with $21$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{5}2^{3}3$, $1^{5}2^{2}3^{2}$, $1^{6}2^{3}3$, $1^{6}2^{3}3^{2}$, $1^{7}2^{3}3^{2}$\ Nr. $35$ with $22$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{5}2^{2}3^{2}$, $1^{5}2^{3}3^{2}$, $1^{6}2^{3}3^{2}$\ Nr. $36$ with $25$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{8}2^{3}3^{2}$\ Nr. $37$ with $25$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\ Nr. 
$38$ with $25$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $12^{2}$, $123$, $1^{3}2$, $1^{2}23$, $12^{2}3$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{3}2^{3}3$, $1^{3}2^{2}3^{2}$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{4}2^{3}3^{2}$, $1^{5}2^{3}3^{2}$, $1^{6}2^{3}3^{2}$, $1^{7}2^{4}3^{2}$\ Nr. $39$ with $25$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{3}2^{2}$, $1^{3}23$, $1^{2}2^{2}3$, $1^{4}23$, $1^{3}2^{2}3$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{4}2^{3}3$, $1^{5}2^{3}3$, $1^{5}2^{2}3^{2}$, $1^{6}2^{3}3$, $1^{5}2^{3}3^{2}$, $1^{6}2^{3}3^{2}$, $1^{7}2^{3}3^{2}$, $1^{7}2^{4}3^{2}$\ Nr. $40$ with $26$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$\ Nr. $41$ with $26$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\ Nr. $42$ with $27$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\ Nr. 
$43$ with $27$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\ Nr. $44$ with $27$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{2}3^{2}$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\ Nr. $45$ with $28$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\ Nr. $46$ with $28$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$\ Nr. $47$ with $28$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{11}2^{4}3^{2}$\ Nr. 
$48$ with $29$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{2}3^{2}$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$\ Nr. $49$ with $29$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$\ Nr. $50$ with $29$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{11}2^{4}3^{2}$\ Nr. $51$ with $30$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{2}3^{2}$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$\ Nr. 
$52$ with $31$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}23$, $1^{5}2$, $1^{4}23$, $1^{6}2$, $1^{5}23$, $1^{4}2^{2}3$, $1^{6}23$, $1^{5}2^{2}3$, $1^{7}23$, $1^{6}2^{2}3$, $1^{7}2^{2}3$, $1^{8}2^{2}3$, $1^{9}2^{2}3$, $1^{10}2^{2}3$, $1^{9}2^{3}3$, $1^{10}2^{3}3$, $1^{11}2^{3}3$, $1^{10}2^{3}3^{2}$, $1^{11}2^{3}3^{2}$, $1^{12}2^{3}3^{2}$\ Nr. $53$ with $31$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{2}3^{2}$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{7}2^{2}3^{2}$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{3}3^{2}$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$, $1^{11}2^{4}3^{2}$\ Nr. $54$ with $34$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{3}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{8}2^{4}3$, $1^{8}2^{3}3^{2}$, $1^{9}2^{4}3$, $1^{9}2^{3}3^{2}$, $1^{9}2^{4}3^{2}$, $1^{11}2^{4}3^{2}$, $1^{11}2^{5}3^{2}$, $1^{12}2^{5}3^{2}$\ Nr. 
$55$ with $37$ positive roots:\ $1$, $2$, $3$, $12$, $13$, $1^{2}2$, $1^{2}3$, $123$, $1^{3}2$, $1^{2}23$, $1^{4}2$, $1^{3}2^{2}$, $1^{3}23$, $1^{4}23$, $1^{3}2^{2}3$, $1^{5}2^{2}$, $1^{5}23$, $1^{4}2^{2}3$, $1^{5}2^{2}3$, $1^{6}2^{2}3$, $1^{5}2^{3}3$, $1^{7}2^{2}3$, $1^{6}2^{3}3$, $1^{7}2^{3}3$, $1^{8}2^{3}3$, $1^{7}2^{3}3^{2}$, $1^{9}2^{3}3$, $1^{8}2^{4}3$, $1^{8}2^{3}3^{2}$, $1^{9}2^{4}3$, $1^{9}2^{3}3^{2}$, $1^{10}2^{4}3$, $1^{9}2^{4}3^{2}$, $1^{11}2^{4}3^{2}$, $1^{11}2^{5}3^{2}$, $1^{12}2^{5}3^{2}$, $1^{13}2^{5}3^{2}$ [^1]: In this introduction by a Weyl groupoid we will mean the Weyl groupoid of a connected Cartan scheme, and we assume that the real roots associated to the Weyl groupoid form an irreducible root system in the sense of [@a-CH09a].
--- abstract: 'Let $(R,{\mathfrak{m}})$ be a complete Noetherian local ring and let $M$ be a finite $R$–module of positive Krull dimension $n$. It is shown that any subset $T$ of ${\mbox{Assh}\,}_R(M)$ can be expressed as the set of attached primes of the top local cohomology module ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ for some ideal ${\mathfrak{a}}$ of $R$. Moreover if ${\mathfrak{a}}$ is an ideal of $R$ such that the set of attached primes of ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ is a non–empty proper subset of ${\mbox{Assh}\,}_R(M)$, then ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\cong{\mbox{H}\, }^n_{\mathfrak{b}}(M)$ for some ideal ${\mathfrak{b}}$ of $R$ with ${\mbox{dim}\,}_R (R/{\mathfrak{b}})=1$.' address: - | Mohammad T. Dibaei\ Faculty of Mathematical Sciences, Teacher Training University, Tehran, Iran, and Institute for Theoretical Physics and Mathematics (IPM), Tehran, Iran. - | Raheleh Jafari\ Faculty of Mathematical Sciences, Teacher Training University, Tehran, Iran author: - 'Mohammad T. Dibaei' - Raheleh Jafari title: | Top local cohomology modules\ with specified attached primes --- Introduction ============ Throughout $(R,{\mathfrak{m}})$ is a commutative Noetherian local ring with maximal ideal ${\mathfrak{m}}$, $M$ is a non-zero finite (i.e. finitely generated) $R$–module with positive Krull dimension $n:={\mbox{dim}\,}_R(M)$ and ${\mathfrak{a}}$ denotes an ideal of $R$. Recall that for an $R$–module $N$, a prime ideal ${\mathfrak{p}}$ of $R$ is said to be an [*attached prime*]{} of $N$, if ${\mathfrak{p}}={\mbox{Ann}\,}_R(N/K)$ for some submodule $K$ of $N$ (see [@MS]). The set of attached primes of $N$ is denoted by ${\mbox{Att}\,}_R(N)$. If $N$ is an Artinian $R$–module so that $N$ admits a reduced secondary representation $N=N_1+\cdots+N_r$ such that $N_i$ is ${\mathfrak{p}}_i$–secondary, $i=1,\ldots,r$, then ${\mbox{Att}\,}_R(N)=\{{\mathfrak{p}}_1,\ldots,{\mathfrak{p}}_r\}$ is a finite set. 
Denote by ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ the $n$th right derived functor of $$\Gamma_{\mathfrak{a}}(M)=\{x\in M|\, {\mathfrak{a}}^rx=0 \ \mbox{for some positive integer} \ r \}$$ applied to $M$. It is well-known that ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ is an Artinian module. Macdonald and Sharp, in [@MS], studied ${\mbox{H}\, }^n_{\mathfrak{m}}(M)$ and showed that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{m}}(M))= {\mbox{Assh}\,}_R(M)$, where ${\mbox{Assh}\,}_R(M):=\{{\mathfrak{p}}\in {\mbox{Ass}\,}_R(M)|\, {\mbox{dim}\,}_R(R/{\mathfrak{p}})=n\}$. It is shown in [@DY1 Theorem A] that for an arbitrary ideal ${\mathfrak{a}}$ of $R$, ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))=\{{\mathfrak{p}}\in{\mbox{Ass}\,}_R(M)|\, {\mbox{H}\, }^n_{\mathfrak{a}}(R/{\mathfrak{p}})\neq 0\}$, which is a subset of ${\mbox{Assh}\,}_R(M)$. In [@DY2], the structure of ${\mbox{H}\, }^n_{\mathfrak{a}}(M)$ is studied by the first author and Yassemi, who showed that, in case $R$ is complete, for any pair of ideals ${\mathfrak{a}}$ and ${\mathfrak{b}}$ of $R$, if ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))={\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{b}}(M))$, then ${\mbox{H}\, }^n_{\mathfrak{a}}(M) \cong {\mbox{H}\, }^n_{\mathfrak{b}}(M)$. They also raised the following question in [@DY3 Question 2.9], which is the main object of this paper. [**Question.**]{} For any subset $T$ of ${\mbox{Assh}\,}_R(M)$, is there an ideal ${\mathfrak{a}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M)) = T$? This paper provides a positive answer to this question in the case that $R$ is complete. Main Result =========== In this section we assume that $R$ is complete with respect to the ${\mathfrak{m}}$–adic topology. As mentioned above, ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{m}}(M)) = {\mbox{Assh}\,}_R(M)$ and ${\mbox{Att}\,}_R({\mbox{H}\, }^n_R(M)) = \emptyset$. 
Also ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M)) \subseteq {\mbox{Assh}\,}_R(M)$ for all ideals ${\mathfrak{a}}$ of $R$. Our aim is to show that, as ${\mathfrak{a}}$ varies over the ideals of $R$, the set ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))$ runs through all possible subsets of ${\mbox{Assh}\,}_R(M)$ (see Theorem 2.8). In the following results we always assume that $T$ is a non–empty proper subset of ${\mbox{Assh}\,}_R(M)$. In our first result we find a characterization for a subset of ${\mbox{Assh}\,}_R (M)$ to be the set of attached primes of the top local cohomology module of $M$ with respect to an ideal ${\mathfrak{a}}$. Assume that $n:={\mbox{dim}\,}_R(M)\geq 1$ and that $T$ is a proper non-empty subset of ${\mbox{Assh}\,}_R(M)$. Set ${\mbox{Assh}\,}_R(M)\setminus T=\{{\mathfrak{q}}_1,\ldots,{\mathfrak{q}}_r\}$. The following statements are equivalent. 1. There exists an ideal ${\mathfrak{a}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))=T$. 2. For each $i,\,1\leq i\leq r$, there exists $Q_i\in {\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/Q_i)=1$ such that $$\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_i \quad \mbox{and} \quad {\mathfrak{q}}_i\subseteq Q_i.$$ With $Q_i,\, 1\leq i\leq r$, as above, ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))=T$ where ${\mathfrak{a}}=\bigcap\limits_{i=1}^rQ_i$. $(i)\Rightarrow (ii)$. By [@DY1 Theorem A], ${\mbox{H}\, }^n_{\mathfrak{a}}(R/{\mathfrak{p}})\neq 0$ for all ${\mathfrak{p}}\in T$, that is, ${\mathfrak{a}}+{\mathfrak{p}}$ is ${\mathfrak{m}}$–primary for all ${\mathfrak{p}}\in T$ (by the [Lichtenbaum-Hartshorne Theorem]{}). On the other hand, for $1\leq i\leq r$, ${\mathfrak{q}}_i\notin T$, which is equivalent to saying that ${\mathfrak{a}}+{\mathfrak{q}}_i$ is not an ${\mathfrak{m}}$–primary ideal. Hence there exists a prime ideal $Q_i\in {\mbox{Supp}\,}_R(M)$ such that ${\mbox{dim}\,}_R(R/Q_i)=1$ and ${\mathfrak{a}}+{\mathfrak{q}}_i\subseteq Q_i$. 
It follows that $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_i$.\ $(ii)\Rightarrow (i)$. Set ${\mathfrak{a}}:=\bigcap\limits_{i=1}^rQ_i$. For each $i, 1\leq i\leq r$, ${\mathfrak{a}}+{\mathfrak{q}}_i\subseteq Q_i$ implies that ${\mathfrak{a}}+{\mathfrak{q}}_i$ is not ${\mathfrak{m}}$–primary and so ${\mbox{H}\, }^n_{\mathfrak{a}}(R/{\mathfrak{q}}_i)= 0$. Thus ${\mbox{Att}\,}_R{\mbox{H}\, }_{\mathfrak{a}}^n(M)\subseteq T$. Assume that ${\mathfrak{p}}\in T$ and that $Q\in {\mbox{Supp}\,}(M)$ is such that ${\mathfrak{a}}+{\mathfrak{p}}\subseteq Q$. Then $Q_i\subseteq Q$ for some $i, 1\leq i\leq r$. Since ${\mathfrak{p}}\nsubseteq Q_i$, we have $Q_i\neq Q$, so $Q={\mathfrak{m}}$. Hence ${\mathfrak{a}}+ {\mathfrak{p}}$ is an ${\mathfrak{m}}$–primary ideal. Now, by the [Lichtenbaum-Hartshorne Theorem]{} and by [@DY1 Theorem A], it follows that ${\mathfrak{p}}\in{\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))$. If ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\neq 0$ then there is an ideal ${\mathfrak{b}}$ of $R$ such that ${\mbox{dim}\,}_R(R/{\mathfrak{b}})\leq 1$ and ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\cong{\mbox{H}\, }^n_{\mathfrak{b}}(M)$. If ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))= {\mbox{Assh}\,}_R(M)$, then ${\mbox{H}\, }^n_{\mathfrak{a}}(M)= {\mbox{H}\, }^n_{\mathfrak{m}}(M)$. Otherwise $n\geq 1$ and ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))$ is a proper subset of ${\mbox{Assh}\,}_R(M)$. Set ${\mbox{Assh}\,}_R(M)\setminus {\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M)):=\{{\mathfrak{q}}_1, \cdots, {\mathfrak{q}}_r\}$. By Proposition 2.1, there are $Q_i\in {\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/Q_i)= 1, \ i=1, \cdots, r$, such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))= {\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{b}}(M))$ with ${\mathfrak{b}}= {\bigcap\limits_{i=1}^rQ_i}$. Now, by [@DY2 Theorem 1.6], we have ${\mbox{H}\, }^n_{\mathfrak{a}}(M)\cong {\mbox{H}\, }^n_{\mathfrak{b}}(M)$. 
As ${\mbox{dim}\,}(R/{\mathfrak{b}})= 1$, the proof is complete. If ${\mbox{dim}\,}_R(M)=1$ then any subset $T$ of ${\mbox{Assh}\,}_R(M)$ is equal to the set ${\mbox{Att}\,}_R({\mbox{H}\, }^1_{\mathfrak{a}}(M))$ for some ideal ${\mathfrak{a}}$ of $R$. With notations as in Proposition 2.1, we take $Q_i={\mathfrak{q}}_i$ for $i=1,\cdots, r$. By a straightforward argument one may notice that the condition “complete” is superfluous, for if $T$ is a non–empty proper subset of ${\mbox{Assh}\,}_R(M)$, then $T={\mbox{Att}\,}_R({\mbox{H}\, }^1_{\mathfrak{a}}(M))$, where ${\mathfrak{a}}=\underset{{\mathfrak{p}}\in{\mbox{Assh}\,}_R(M)\setminus T}{\cap}{\mathfrak{p}}$. The following is an example illustrating Proposition 2.1. Set $R=k[[X,Y,Z,W]]$, where $k$ is a field and $X,Y,Z,W$ are independent indeterminates. Then $R$ is a complete Noetherian local ring with maximal ideal ${\mathfrak{m}}=(X,Y,Z,W)$. Consider the prime ideals $${\mathfrak{p}}_1=(X,Y) \quad , \quad {\mathfrak{p}}_2=(Z,W)\quad , \quad {\mathfrak{p}}_3=(Y,Z) \quad , \quad {\mathfrak{p}}_4=(X,W)$$ and set $\displaystyle M=\frac{R}{{\mathfrak{p}}_1{\mathfrak{p}}_2{\mathfrak{p}}_3{\mathfrak{p}}_4}$ as an $R$–module, so that we have ${\mbox{Assh}\,}_R(M)=\{{\mathfrak{p}}_1,{\mathfrak{p}}_2,{\mathfrak{p}}_3,{\mathfrak{p}}_4\}$ and ${\mbox{dim}\,}_R(M)=2$. 
We get $\{{\mathfrak{p}}_i\}={\mbox{Att}\,}_R({\mbox{H}\, }^2_{{\mathfrak{a}}_i}(M))$, where ${\mathfrak{a}}_1={\mathfrak{p}}_2, {\mathfrak{a}}_2={\mathfrak{p}}_1, {\mathfrak{a}}_3={\mathfrak{p}}_4, {\mathfrak{a}}_4={\mathfrak{p}}_3$, and $\{{\mathfrak{p}}_i,{\mathfrak{p}}_j\}={\mbox{Att}\,}_R({\mbox{H}\, }^2_{{\mathfrak{a}}_{ij}}(M))$, where $$\begin{array}{l} {\mathfrak{a}}_{12}=(Y^2+YZ,Z^2+YZ,X^2+XW,W^2+WX),\\ {\mathfrak{a}}_{34}=(Z^2+ZW,X^2+YX,Y^2+YX,W^2+WZ),\\ {\mathfrak{a}}_{13}=(Z^2+XZ,W^2+WY,X^2+XZ),\\ {\mathfrak{a}}_{14}=(W^2+WY,Z^2+ZY,Y^2+YW),\\ {\mathfrak{a}}_{23}=(X^2+XZ,Y^2+WY,W^2+ZW),\\ {\mathfrak{a}}_{24}=(X^2+XZ,Y^2+WY,Z^2+ZW).\\ \end{array}$$ Finally, we have $\{{\mathfrak{p}}_i,{\mathfrak{p}}_j,{\mathfrak{p}}_k\}={\mbox{Att}\,}_R({\mbox{H}\, }^2_{{\mathfrak{a}}_{ijk}}(M))$, where ${\mathfrak{a}}_{123}=(X,W,Y+Z)$, ${\mathfrak{a}}_{234}=(X,Y,W+Z)$, ${\mathfrak{a}}_{134}=(Z,W,Y+X)$.\ Assume that $n:={\mbox{dim}\,}_R(M)\geq 2$, and that $T$ is a non-empty subset of ${\mbox{Assh}\,}_R(M)$ such that $\underset{{\mathfrak{p}}\in T}{\bigcap} {\mathfrak{p}}\nsubseteq \underset {{\mathfrak{q}}\in {\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})}{\bigcap} {\mathfrak{q}}$, where $T'={\mbox{Assh}\,}_R(M)\setminus T$. Then there exists a prime ideal $Q\in {\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/Q)=1$ and ${\mbox{Att}\,}_R({\mbox{H}\, }^n_Q(M))=T.$ Set $s:={\mbox{ht}\,}_M(\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})$. We have $s\leq n-1$, otherwise ${\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})= \{{\mathfrak{m}}\}$ which contradicts the condition $\underset{{\mathfrak{p}}\in T}{\bigcap} {\mathfrak{p}}\nsubseteq \underset {{\mathfrak{q}}\in {\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})}{\bigcap} {\mathfrak{q}}$. As $R$ is catenary, we have ${\mbox{dim}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})=n-s$. 
We first prove, by induction on $j$, $0\leq j\leq n-s-1$, that there exists a chain of prime ideals $Q_0 \subset Q_1 \subset \cdots \subset Q_j \subset {\mathfrak{m}}$ such that $Q_0\in{\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})$, ${\mbox{dim}\,}_R(R/Q_j)=n-s-j$ and $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_j$. There is $Q_0\in{\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})$ such that $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_0$. Note that ${\mbox{dim}\,}_R(R/Q_0)={\mbox{dim}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})=n-s$. Now, assume that $0<j\leq n-s-1$ and that we have proved the existence of a chain $Q_0 \subset Q_1 \subset \cdots \subset Q_{j-1}$ of prime ideals such that $Q_0\in{\mbox{Assh}\,}_R(R/\sum\limits_{{\mathfrak{p}}\in T'}{\mathfrak{p}})$, ${\mbox{dim}\,}_R(R/Q_{j-1})=n-s-(j-1)$ and that $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_{j-1}$. Note that we have $n-s-(j-1)=n-s+1-j\geq 2$. Therefore the set $V$ defined as\ $$\begin{array}{ll} V= \{{\mathfrak{q}}\in {\mbox{Supp}\,}_R(M) |& Q_{j-1}\subset {\mathfrak{q}}\subset {\mathfrak{q}}'\subseteq {\mathfrak{m}}, {\mbox{dim}\,}_R(R/{\mathfrak{q}})=n-s-j,\\ & {\mathfrak{q}}'\in{\mbox{Spec}\,}(R)\, \mbox{and}\, {\mbox{dim}\,}_R(R/{\mathfrak{q}}')=n-s-j-1\} \end{array}$$\ is non-empty and so, by Ratliff’s weak existence theorem [@M Theorem 31.2], is infinite. As $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_{j-1}$, we have $Q_{j-1}\subset Q_{j-1}+\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}$. If, for ${\mathfrak{q}}\in V$, $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\subseteq {\mathfrak{q}}$, then ${\mathfrak{q}}$ is a minimal prime of $Q_{j-1}+\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}$. As $V$ is an infinite set, there is $Q_j\in V$ such that $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq Q_j$. 
Thus the induction is complete. Now by taking $Q:=Q_{n-s-1}$ and by Proposition 2.1, the claim follows. Assume that $n:={\mbox{dim}\,}_R(M)\geq 2$ and $T$ is a non-empty subset of ${\mbox{Assh}\,}_R(M)$ with $|T|=|{\mbox{Assh}\,}_R(M)|-1$. Then there is an ideal ${\mathfrak{a}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))=T$. Note that ${\mbox{Assh}\,}_R(M)\setminus T$ is a singleton set $\{{\mathfrak{q}}\}$, say, and so ${\mbox{ht}\,}_M({\mathfrak{q}})=0$ and $\underset{{\mathfrak{p}}\in T}{\bigcap}{\mathfrak{p}}\nsubseteq {\mathfrak{q}}$. Therefore, by Lemma 2.5, the result follows. Assume that $n:={\mbox{dim}\,}_R(M)\geq 2$ and ${\mathfrak{a}}_1$ and ${\mathfrak{a}}_2$ are ideals of $R$. Then there exists an ideal ${\mathfrak{b}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{b}}(M))={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_{1}}(M))\cap{\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_{2}}(M))$. Set $T_{1}={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_{1}}(M))$ and $T_{2}={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_{2}}(M))$. We may assume that $T_1\bigcap T_2$ is a non–empty proper subset of ${\mbox{Assh}\,}_R(M)$. Assume that ${\mathfrak{q}}\in {\mbox{Assh}\,}_R(M)\setminus (T_1\bigcap T_2)=({\mbox{Assh}\,}_R(M)\setminus T_1)\bigcup({\mbox{Assh}\,}_R(M)\setminus T_2) $. By Proposition 2.1, there exists $Q\in {\mbox{Supp}\,}_R(M)$ with ${\mbox{dim}\,}_R(R/Q)=1$ such that ${\mathfrak{q}}\subseteq Q$ and $\bigcap_{{\mathfrak{p}}\in T_1\bigcap T_2}{\mathfrak{p}}\nsubseteq Q$. Now, by Proposition 2.1, again there exists an ideal ${\mathfrak{b}}$ of $R$ such that ${\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{b}}(M))=T_1\bigcap T_2$. Now we are ready to present our main result. Assume that $T\subseteq {\mbox{Assh}\,}_R(M)$, then there exists an ideal ${\mathfrak{a}}$ of $R$ such that $T={\mbox{Att}\,}_R({\mbox{H}\, }^n_{\mathfrak{a}}(M))$. 
By Corollary 2.3, we may assume that ${\mbox{dim}\,}_R(M)\geq 2$ and that $T$ is a non-empty proper subset of ${\mbox{Assh}\,}_R(M)$. Set $T=\{{\mathfrak{p}}_1,\ldots,{\mathfrak{p}}_t\}$ and ${\mbox{Assh}\,}_R(M)\setminus T=\{{\mathfrak{p}}_{t+1},\ldots,{\mathfrak{p}}_{t+r}\}$. We use induction on $r$. For $r=1$, Corollary 2.6 proves the first step of the induction. Assume that $r>1$ and that the case $r-1$ is proved. Set $T_1=\{{\mathfrak{p}}_1,\ldots,{\mathfrak{p}}_t,{\mathfrak{p}}_{t+1}\}$ and $T_2=\{{\mathfrak{p}}_1,\ldots,{\mathfrak{p}}_t,{\mathfrak{p}}_{t+2}\}$. By the induction assumption there exist ideals ${\mathfrak{a}}_1$ and ${\mathfrak{a}}_2$ of $R$ such that $T_1={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_1}(M))$ and $T_2={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}_2}(M))$. Now by Lemma 2.7 there exists an ideal ${\mathfrak{a}}$ of $R$ such that $T=T_1\bigcap T_2={\mbox{Att}\,}_R({\mbox{H}\, }^n_{{\mathfrak{a}}}(M))$. (See [@C Corollary 1.7].) With the notations as in Theorem 2.8, the number of non-isomorphic top local cohomology modules of $M$ with respect to all ideals of $R$ is equal to $2^{|{\mbox{Assh}\,}_R (M)|}$. It follows from Theorem 2.8 and [@DY2 Theorem 1.6]. [**Acknowledgment.**]{} The authors would like to thank the referee for her/his comments. [10]{} F.  W.  Call, *On local cohomology modules*, J. Pure Appl. Algebra **43** (1986), no. 2, 111–117. M.  T.  Dibaei and S.  Yassemi, *Some rigidity results for highest order local cohomology modules*, Algebra Colloq., to appear. M.  T.  Dibaei and S.  Yassemi, *Top local cohomology modules*, Algebra Colloq., to appear. M.  T.  Dibaei and S.  Yassemi, *Attached primes of the top local cohomology modules with respect to an ideal*, Arch. Math. (Basel) **84** (2005), no. 4, 292–297. I.  G.  Macdonald and R.  Y.  Sharp, *An elementary proof of the non-vanishing of certain local cohomology modules*, Quart. J. Math. Oxford **23** (1972), 197–204. H.  
Matsumura, *Commutative ring theory*, Cambridge Studies in Advanced Mathematics, 8. Cambridge University Press, Cambridge, 1986.
--- abstract: 'Noisy measurements of a physical unclonable function (PUF) are used to store secret keys with reliability, security, privacy, and complexity constraints. A new set of low-complexity and orthogonal transforms with no multiplication is proposed to obtain bit-error probability results significantly better than all methods previously proposed for key binding with PUFs. The uniqueness and security performance of a transform selected from the proposed set is shown to be close to optimal. An error-correction code with a low-complexity decoder and a high code rate is shown to provide a block-error probability significantly smaller than provided by previously proposed codes with the same or smaller code rates.' address: | Information Theory and Applications Chair, Technische Universität Berlin\ {[guenlue]{}, [rafael.schaefer]{}}[@tu-berlin.de]{}\ bibliography: - 'references.bib' title: | LOW-COMPLEXITY AND RELIABLE TRANSFORMS FOR\ PHYSICAL UNCLONABLE FUNCTIONS --- physical unclonable function (PUF), no multiplication transforms, secret key agreement, low complexity. Introduction ============ Biometric identifiers such as fingerprints are useful to authenticate a user. Similarly, secret keys are traditionally stored in non-volatile memories (NVMs) to authenticate a physical device that contains the key. NVMs require hardware protection even when the device is turned off since an attacker can try to obtain the key at any time. A safe and cheap alternative to storing keys in NVMs is to use physical identifiers, e.g., fine variations of ring oscillator (RO) outputs, as a randomness source. Since invasive attacks to physical identifiers permanently change the identifier output, there is no need for continuous hardware protection for physical identifiers [@pufintheory]. Physical unclonable functions (PUFs) are physical identifiers with reliable and high-entropy outputs [@GassendThesis; @PappuThesis]. 
PUF outputs are unique to each device, so they are used for safe and low-complexity key storage in digital devices. These keys can be used for private authentication, secure computation, and encryption. Replacing such identifiers is expensive, so key-storage methods should limit the information the public data leak about the identifier outputs. Moreover, the same device should be able to reconstruct a secret key generated from the noiseless outputs by using the noisy outputs and public information. The ultimate secret-key vs. privacy-leakage rate tradeoffs are given in [@IgnaTrans; @LaiTrans; @benimdissertation]. The secret-key and privacy-leakage rate limits for a suboptimal chosen-secret (CS) model called *fuzzy commitment scheme* (FCS) [@FuzzyCommitment] are given in [@IgnatenkoFuzzy]. We consider the FCS to compare different post-processing methods applied to PUFs. Asymptotically optimal CS model constructions are given in [@bizimWZ] and similar comparison results can be obtained by using these constructions. Physical identifier outputs are highly correlated and noisy, which are the two main problems in using PUFs. If errors in the extracted sequences are not corrected, PUF reliability would be low. If correlations are not eliminated, machine learning algorithms can model the PUF outputs [@MLPUF]. To solve the two problems, the discrete cosine transform (DCT) is used in [@bizimtemperature] to generate a uniformly-distributed bit sequence from PUFs under varying environmental conditions. Similarly, the discrete Walsh-Hadamard transform (DWHT), discrete Haar transform (DHT), and Karhunen-Loève transform (KLT) are compared in [@bizimMDPI] in terms of the maximum secret-key length, decorrelation efficiency, reliability, security, and hardware cost. The DCT, DWHT, and DHT provide good reliability and security results, and a hardware implementation of the DWHT in [@bizimMDPI] shows that the DWHT requires a substantially smaller hardware area than other transforms. 
There are two main reasons why the DWHT can be implemented efficiently. Firstly, the matrix that represents the DWHT has elements $1$ or $-1$, so there is no matrix multiplication. Secondly, an input-selection algorithm that is an extension of the algorithm in [@InputSelection] allows one to calculate the two-dimensional (2D) DWHT recursively. Based on these observations, we propose a new set of transforms that preserve these properties and that significantly improve the reliability of the sequences extracted from PUFs. The FCS requires error-correction codes (ECCs) to achieve the realistic block-error probability of $\displaystyle P_\text{B}\!=\!10^{-9}$ for RO PUFs. The ECCs proposed in [@bizimMDPI] have better secret-key and privacy-leakage rates than previously proposed codes, but in some cases it is assumed that, if multiple bits are extracted from each transform coefficient, each bit is affected by independent errors. This assumption is not valid in general. Thus, we extract only one bit from each transform coefficient. The contributions of this work are as follows. - We propose a new set of 2D orthogonal transforms that have low-complexity hardware implementations and no matrix multiplications. The new set of transforms is shown to provide an average bit-error probability smaller than that of the most reliable transform considered in the PUF literature, i.e., the DCT. - Bit sequences extracted using a transform selected from the new set of transforms are shown to give good uniqueness and security results that are comparable to state-of-the-art results. - We propose a joint transform-quantizer-code design method for the new set of transforms in combination with the FCS to achieve a block-error probability substantially smaller than the common value of $10^{-9}$ with perfect secrecy. This paper is organized as follows. In Section \[sec:fuzzycommitment\], we review the FCS. 
The transform-coding algorithm to extract secure sequences from RO PUFs is explained in Section \[sec:commonsteps\]. A new set of orthogonal transforms that require a small hardware area and that result in bit-error probabilities smaller than previously considered transforms is proposed in Section \[sec:neworth\]. In Section \[sec:comparisons\], we compare the new transforms with previous methods and show that the proposed ECC provides a block-error probability for the new selected transform (ST) that is smaller than for previously considered transforms. Review of the Fuzzy Commitment Scheme {#sec:fuzzycommitment} ===================================== Fig. \[fig:fuzzycommitment\] shows the FCS, where an encoder ${\mathsf{Enc}}(\cdot)$ adds a codeword $\displaystyle C^N$, uniformly distributed over a set with cardinality $|\mathcal{S}|$, modulo-2 to the binary noiseless PUF-output sequence $\displaystyle X^N$ during enrollment. We show in Section \[sec:commonsteps\] that the sequence $X^N$ and its noisy version $Y^N$ can be obtained by applying the post-processing steps in Fig. \[fig:postprocessing\] to RO outputs $\widetilde{X}^L$ and its noisy version $\widetilde{Y}^L$, respectively. The sum $\displaystyle W^N=C^N{\mathbin{\oplus}}X^N$ is publicly sent through a noiseless and authenticated channel, and it is called *helper data*. The modulo-2 sum of $W^N$ and the noisy PUF-output sequence $Y^N =X^N {\mathbin{\oplus}}E^N$, where $E^N$ is the binary error vector, gives the noisy codeword $\displaystyle C^N{\mathbin{\oplus}}E^N$. Using the noisy codeword, a channel decoder $\displaystyle {\mathsf{Dec}}(\cdot)$ estimates the secret key $S$ during reconstruction. A reliable secret-key agreement is possible by using $X^N$, $Y^N$, and $W^N$ [@AhlswedeCsiz; @Maurer]. 
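The enrollment and reconstruction steps of the FCS described above can be sketched in a few lines. This is a toy illustration only: the 3-fold repetition code stands in for the actual ECC used with the scheme, and the function names (`enroll`, `reconstruct`) are ours, not from the cited works.

```python
import random

def rep_encode(bits, n=3):
    # Toy ECC: repeat each key bit n times (stand-in for a real code).
    return [b for b in bits for _ in range(n)]

def rep_decode(code, n=3):
    # Majority-vote decoder for the repetition code.
    return [int(sum(code[i:i + n]) > n // 2) for i in range(0, len(code), n)]

def enroll(x, key_bits, n=3):
    # Helper data W^N = C^N xor X^N is published; C^N encodes the key S.
    c = rep_encode(key_bits, n)
    return [ci ^ xi for ci, xi in zip(c, x)]

def reconstruct(w, y, n=3):
    # W^N xor Y^N = C^N xor E^N; decoding removes the error pattern E^N.
    return rep_decode([wi ^ yi for wi, yi in zip(w, y)], n)

random.seed(0)
key = [1, 0, 1, 1]                            # secret key S
x = [random.randrange(2) for _ in range(12)]  # noiseless PUF bits X^N
y = x.copy()
y[5] ^= 1                                     # one measurement error in Y^N
w = enroll(x, key)                            # public helper data W^N
assert reconstruct(w, y) == key               # key recovered despite noise
```

The repetition code corrects a single error per 3-bit block; in practice a stronger code such as the BCH code discussed in Section \[sec:comparisons\] plays this role.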
One can achieve a (secret-key, privacy-leakage) rate pair $ (R_\text{s}\text{,}R_\ell)$ using the FCS with perfect secrecy if, given any $\epsilon\!>\!0$, there is some $N\!\geq\!1$, and an encoder and a decoder for which $\displaystyle R_\text{s}=\frac{\log|\mathcal{S}|}{N}$ and $$\begin{aligned} {2} &\Pr[S\ne\hat{S}] \leq \epsilon && (\text{reliability}) \label{eq:reliabilityconst}\\ &I\big(S;W^N\big)\!=\!0 && (\text{perfect secrecy})\label{eq:secrecyconst}\\ &\frac{1}{N}I\big(X^N;W^N\big) \leq R_\ell+\epsilon. \quad\quad\quad&&(\text{privacy}) \label{eq:privacyconst}\end{aligned}$$ Condition (\[eq:secrecyconst\]) ensures that the public side information $W^N$ does not leak any information about the secret key, so one achieves perfect secrecy. The normalized information that $W^N$ leaks about the PUF output sequence $X^N$ is considered in (\[eq:privacyconst\]). If one should asymptotically limit the unnormalized privacy leakage $I(X^N;W^N)$, private keys available during enrollment and reconstruction are necessary [@IgnaTrans], which is not realistic or practical; see the discussions in [@bizimWZ]. Suppose the measurement channel $P_{Y|X}$ is a binary symmetric channel (BSC) with crossover probability $p$, and $X$ is independent and identically distributed (i.i.d.) according to a uniform distribution. Define $\displaystyle H_b(p)\!=\!-p\log p-(1\!-p)\log(1\!-p)$ as the binary entropy function. The region $\displaystyle \mathcal{R}$ of all achievable (secret-key, privacy-leakage) rate pairs for the FCS with perfect secrecy is [@IgnatenkoFuzzy] $$\begin{aligned} \mathcal{R}\! =\! \big\{ \left(R_\text{s},R_\ell\right)\!\colon\!\quad 0\leq R_\text{s}\leq 1-H_b(p),\quad R_\ell\geq 1\!-\!R_\text{s} \big\}.\label{eq:ls0}\end{aligned}$$ We plot this region in Section \[sec:comparisons\] to evaluate the secret-key and privacy-leakage rates achieved by the proposed ECC. The FCS is a particular realization of the CS model. 
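As a quick numerical companion to the region $\mathcal{R}$ above, the sketch below evaluates the binary entropy function and tests membership of rate pairs; the crossover probability used is an arbitrary illustrative value.

```python
from math import log2

def h_b(p):
    # Binary entropy function H_b(p) = -p*log2(p) - (1-p)*log2(1-p).
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def in_region(r_s, r_l, p):
    # Membership test for R: 0 <= R_s <= 1 - H_b(p) and R_l >= 1 - R_s.
    return 0 <= r_s <= 1 - h_b(p) and r_l >= 1 - r_s

p = 0.06                       # illustrative BSC crossover probability
r_s_max = 1 - h_b(p)           # largest achievable secret-key rate
assert in_region(r_s_max, 1 - r_s_max, p)     # a boundary point of R
assert not in_region(r_s_max + 0.01, 0.0, p)  # outside the region
```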
The region $\mathcal{R}_{\text{cs}}$ of all achievable (secret-key, privacy-leakage) rate pairs for the CS model, where a generic encoder is used to confidentially transmit an embedded secret key to a decoder that observes $Y^N$ and the helper data $W^N$, is given in [@IgnaTrans; @LaiTrans] as the union over all $P_{U|X}$ of the set of achievable rate pairs $\left(R_\text{s},R_\ell\right)$ such that $$\begin{aligned} \Big\{0\leq R_\text{s}\leq I(U;Y),\qquad R_\ell\geq I(U;X)-I(U;Y)\!\Big\}\label{eq:chosensecret}\end{aligned}$$ where $P_X$ is the probability distribution of $X$ and the alphabet $\mathcal{U}$ of the auxiliary random variable $U$ can be limited to have the size $\displaystyle |\mathcal{U}|\!\leq\!|\mathcal{X}|+1$ as $U-X-Y$ forms a Markov chain. The FCS achieves a boundary point of $\mathcal{R}_{\text{cs}}$ for a BSC $P_{Y|X}$ only at the point $\displaystyle (R_\text{s}^*,R_\ell^*)\!=\!(1\!-\!H_b(p),H_b(p))$. To achieve the other points on the rate-region boundary, one should use a nested code construction as in [@bizimWZ] or a binning based construction as in [@MatthieuPolar], both of which require careful polar code [@Arikan] designs. This is not necessary to illustrate the gains from the new set of transforms and it suffices to combine the new set with the FCS. Post-processing Steps {#sec:commonsteps} ===================== We consider a 2D array of $r\!\times\!c$ ROs. Denote the continuous-valued outputs of $L\!=\!r\!\times\!c$ ROs as the vector random variable $\widetilde{X}^L$, distributed according to $\displaystyle f_{\widetilde{X}^L}$. Suppose that the noise component $\widetilde{E}_j$ on the $j$-th RO output is Gaussian distributed with zero mean for all $j=1,2,\ldots,L$ and that the noise components are mutually independent. Denote the noisy RO outputs as $\widetilde{Y}^L\!=\!\widetilde{X}^L\!+\!\widetilde{E}^L$. 
We extract binary vectors $X^N$ and $Y^N$ from $\widetilde{X}^L$ and $\widetilde{Y}^L$, respectively, and define binary error variables $\displaystyle E_i\!=\!X_i{\mathbin{\oplus}}Y_i$ for $i\!=\!1,2,\ldots,N$. ![The transform-coding steps.[]{data-label="fig:postprocessing"}](./Transformcodingmodel.eps){width="48.50050%" height="0.5005\textheight"} The post-processing steps used during the enrollment (and reconstruction) to extract a bit sequence $X^N$ (and its noisy version $Y^N$) are depicted in Fig. \[fig:postprocessing\]. These steps are transformation, histogram equalization, quantization, Gray mapping, and concatenation. Since RO outputs $\widetilde{X}^L$ are correlated, we apply a transform $\emph{T}_{r\!\times\!c}(\cdot)$ for decorrelation. We model all transform coefficients and noise components as random variables with Gaussian marginal distributions. A transform-coefficient output $T$ that comes from a distribution with mean $\mu\neq 0$ and variance $\sigma^2\neq 1$ is converted into a standard Gaussian random variable during histogram equalization, which reduces the hardware area when multiple bits are extracted. Independent bits can be extracted from transform coefficients by setting the quantization boundaries of a $K$-bit quantizer to $$\label{eq:quantsteps} b_k=Q^{-1}\left(1-\dfrac{k}{2^K}\right) \text{ for } k=0,1,\dots,2^K$$ where $Q(\cdot)$ is the $Q$-function. Quantizing a coefficient $\hat{T}$ to $k$ if $\displaystyle b_{k-1}\!<\!\hat{T}\!\leq\!b_k$ ensures that $X^N$ is uniformly distributed, which is necessary to achieve the rate point where the FCS is optimal. One can use scalar quantizers without a performance loss in security if the RO output statistics satisfy certain constraints [@benimdissertation]. We do not use the first transform coefficient, i.e., DC coefficient, for bit extraction since it corresponds to the average over the RO array, known by an attacker [@benimdissertation]. 
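The quantization boundaries in (\[eq:quantsteps\]) can be computed with the standard-normal quantile function, since $Q(x)=1-\Phi(x)$ gives $Q^{-1}(1-k/2^K)=\Phi^{-1}(k/2^K)$. A minimal sketch (not the authors' implementation):

```python
from math import inf
from statistics import NormalDist

_std = NormalDist()  # coefficients are standard Gaussian after equalization

def boundaries(K):
    # b_k = Q^{-1}(1 - k/2^K) = Phi^{-1}(k/2^K), with b_0 = -inf, b_{2^K} = inf.
    return [-inf] + [_std.inv_cdf(k / 2**K) for k in range(1, 2**K)] + [inf]

def quantize(t_hat, K):
    # Map t_hat to the index k with b_{k-1} < t_hat <= b_k, so that the
    # 2^K quantization cells are equiprobable under the standard Gaussian.
    bs = boundaries(K)
    for k in range(1, 2**K + 1):
        if bs[k - 1] < t_hat <= bs[k]:
            return k
    return 2**K  # t_hat = +inf edge case

bs = boundaries(2)
assert abs(bs[2]) < 1e-9     # the middle boundary of a 2-bit quantizer is 0
assert quantize(0.1, 2) == 3  # 0 < 0.1 <= b_3, the third cell
```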
Furthermore, Gray mapping ensures that the neighboring quantization intervals result in only one bit flip. This is a good choice as the noise components $E_i$ for all $i=1,2,\ldots,N$ have zero mean. The sequences extracted from transform coefficients are concatenated to obtain the sequence $X^N$ (or $Y^N$). New Orthogonal Transforms {#sec:neworth} ========================= A useful metric to measure the complexity of a transform is the number of operations required for its computation. We consider only RO arrays of sizes $ r\!=\!c\!=\!8$ and $16$, which are powers of 2, so that fast algorithms are available. In [@benimdissertation], the DWHT is suggested as the best candidate among the set of transforms {DCT, DHT, KLT, DWHT} for RO PUF applications with a low-complexity constraint such as internet of things (IoT) applications. In [@bizimMDPI], we extend an input-selection algorithm to compute the 2D $16\times16$ DWHT by applying a $2\times2$ matrix operation recursively to illustrate that the DWHT requires a small hardware area in a field programmable gate array (FPGA) since it does not require any multiplications. Following this observation, we propose a set of transforms that are orthogonal (to decorrelate the RO outputs better), that have matrix elements $1$ or $-1$ (to eliminate multiplications), and that are of size $16\times16$ (to apply the input-selection algorithm given in [@bizimMDPI] to further reduce complexity). We show in the next section that these transforms provide higher reliability than other transforms previously considered in the literature. Orthogonal Transform Construction and Selection {#subsec:orthtransselection} ----------------------------------------------- Consider an orthogonal matrix $A$ with elements $1$ or $-1$ and of size $k\times k$, i.e., $AA^{T}= I$, where $T$ is the matrix transpose and $I$ is the identity matrix of size $k\times k$. 
It is straightforward to show that the following matrices are also orthogonal: $$\begin{aligned} &\Biggl[ \begin{matrix} A&A\\ A&\!-\!A \end{matrix} \Biggr], \Biggl[ \begin{matrix} A&A\\ \!-\!A&A \end{matrix} \Biggr], \Biggl[ \begin{matrix} A&\!-\!A\\ A&A \end{matrix} \Biggr], \Biggl[ \begin{matrix} \!-\!A&A\\ A&A \end{matrix} \Biggr],\nonumber\\ \Biggl[ &\begin{matrix} \!-\!A&\!-\!A\\ \!-\!A&A \end{matrix} \Biggr], \Biggl[ \begin{matrix} \!-\!A&\!-\!A\\ A&\!-\!A \end{matrix} \Biggr], \Biggl[ \begin{matrix} \!-\!A&A\\ \!-\!A&\!-\!A \end{matrix} \Biggr], \Biggl[ \begin{matrix} A&\!-\!A\\ \!-\!A&\!-\!A \end{matrix} \Biggr].\label{eq1}\end{aligned}$$ Since $2^{k^2}$ possible matrices should be checked for orthogonality, we choose $k\!=\!4$ to keep the complexity of the exhaustive search for orthogonal matrices low. The result of the exhaustive search is a set of orthogonal matrices $A$ of size $4\!\times\! 4$. By applying the matrix construction methods in (\[eq1\]) twice consecutively, we obtain $12288$ unique orthogonal transforms of size $16\!\times\! 16$ with elements $1$ or $\displaystyle -1$. We apply these orthogonal transforms, one of which is the DWHT, to an RO dataset to select the orthogonal transform whose maximum bit-error probability over the transform coefficients is minimum. This selection method provides reliability guarantees for every transform coefficient. An ECC that has a higher code dimension than is achievable according to the Gilbert-Varshamov (GV) bound [@GilbertGV; @varshamovGV] for the maximum error probability over the transform coefficients of the ST is given in Section \[subsec:codeselection\]. This illustrates that our selection method is conservative and the block-error probability is substantially smaller than $10^{-9}$. 
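As a minimal sanity check (our sketch; helper names are ours, and we verify row-orthogonality in the un-normalized sense $AA^{T}=kI$ appropriate for $\pm1$ entries), one can confirm that the first construction in (\[eq1\]) maps a $4\times4$ orthogonal seed to an $8\times8$ and then a $16\times16$ orthogonal matrix:

```python
def is_orthogonal(M):
    """True if M * M^T = n * I, i.e., the rows of the +/-1 matrix M are mutually orthogonal."""
    n = len(M)
    for i in range(n):
        for j in range(n):
            dot = sum(M[i][k] * M[j][k] for k in range(n))
            if dot != (n if i == j else 0):
                return False
    return True

def block(A, sa, sb, sc, sd):
    """Build the 2k x 2k block matrix [[sa*A, sb*A], [sc*A, sd*A]] from one sign pattern of (eq1)."""
    n = len(A)
    top = [[sa * A[i][j] for j in range(n)] + [sb * A[i][j] for j in range(n)] for i in range(n)]
    bot = [[sc * A[i][j] for j in range(n)] + [sd * A[i][j] for j in range(n)] for i in range(n)]
    return top + bot

# 4x4 Walsh-Hadamard-type seed (orthogonal rows, entries +/-1)
A4 = [[1, 1, 1, 1],
      [1, -1, 1, -1],
      [1, 1, -1, -1],
      [1, -1, -1, 1]]
A8 = block(A4, 1, 1, 1, -1)   # first construction in (eq1)
A16 = block(A8, 1, 1, 1, -1)  # applied twice: a 16x16 candidate transform
```

The same `block` helper with the other seven sign patterns generates the rest of the constructions in (\[eq1\]).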
There are also other orthogonal transforms of size $16\times 16$, but we illustrate in the next section that the new set suffices to significantly increase the reliability of the extracted bits as compared to previously considered transforms and previous RO PUF methods. Performance Evaluations {#sec:comparisons} ======================= We use RO arrays of size $16\!\times \!16$ from the RO dataset in [@ROPUF] and apply the transform-coding steps in Fig. \[fig:postprocessing\] to compare the previously considered transforms with the new set of transforms in terms of their reliability, uniqueness, and security. We illustrate that a Bose-Chaudhuri-Hocquenghem (BCH) code can be used for error correction in combination with the FCS to achieve a block-error probability smaller than the common value of $10^{-9}$. Transform Comparisons --------------------- We compare the orthogonal transform selected from the new set, i.e., the ST, with the DCT and DWHT in terms of the bit-error probabilities of the $255$ transform coefficients obtained from the RO dataset in [@ROPUF]. Fig. \[fig:BERComparisonsofTrans\] illustrates the bit-error probabilities of the DCT, DWHT, and the ST. The mean bit-error probability of the ST is smaller than those of the DCT and DWHT. Furthermore, the maximum bit-error probabilities of the DCT and the ST are almost equal and are less than the maximum error probability of the DWHT. Most importantly, the ST has a large set of transform coefficients with bit-error probabilities close to zero, so an ECC design for the maximum or mean bit-error probability of the ST would give pessimistic rate results. We propose in the next section an ECC for the ST to achieve a smaller block-error probability than the block-error probability for the DCT. 
Uniqueness and Security {#subsec:uniqueness} ----------------------- A common measure to check the randomness of a bit sequence is uniqueness, i.e., the average fractional Hamming distance (HD) between the sequences extracted from different RO PUFs [@bizimpaper]. The rate region in (\[eq:ls0\]) is valid if the extracted bit sequences are uniformly distributed, making the uniqueness a valid measure for the FCS. Uniqueness results for the DCT, DWHT, KLT, and DHT have a mean HD of $0.5000$ and HD variances of approximately $\displaystyle 7\!\times \!10^{-4}$ [@bizimMDPI], which are close to optimal and better than previous RO PUF results. For the ST, we obtain a mean HD of $0.5001$ and an HD variance of $\displaystyle 2.69\!\times \!10^{-2}$. This suggests that the ST has good average uniqueness performance, but there might be a small set of RO PUFs from which slightly biased bit sequences are extracted. The latter can be avoided during manufacturing by considering uniqueness as a parameter in yield analysis of the chip that embodies the PUF. We apply the National Institute of Standards and Technology (NIST) randomness tests [@NIST] to check whether there is a detectable deviation from the uniform distribution in the sequences extracted by using the ST. The bit sequences generated with the ST pass most of the randomness tests, which is considered to be an acceptable result [@NIST]. A correlation thresholding approach in [@bizimtemperature] further improves security. Code Selection {#subsec:codeselection} -------------- Consider the scenario where secret keys are used as an input to the advanced encryption standard (AES), a symmetric-key cryptosystem, with a key size of $128$ bits, so the code dimension of the ECC should be at least $128$ bits. The maximum error probability over the transform coefficients of the ST is $p_{\text{max}}=0.0149$, as shown in Fig. \[fig:BERComparisonsofTrans\]. 
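The minimum-distance requirement used in the sequel can be reproduced with a short binomial-tail computation (our sketch, not the paper's code; it assumes, pessimistically, that all $255$ coefficients err independently with probability $p_{\text{max}}$, and that a decoder correcting $t$ errors needs $d_{\text{min}}=2t+1$):

```python
from math import comb

def binom_tail(n, p, t):
    """P(more than t errors among n i.i.d. bits, each in error with probability p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def required_dmin(n=255, p=0.0149, target=1e-9):
    """Smallest odd d_min = 2t+1 such that correcting t errors meets the target block-error rate."""
    t = 0
    while binom_tail(n, p, t) > target:
        t += 1
    return 2 * t + 1
```

Under this worst-case assumption the search returns $d_{\text{min}}=41$, while the tail for a decoder correcting only $18$ errors (as for a code with $d_{\text{min}}=37$) stays above $10^{-9}$.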
Furthermore, assume that we use an ECC with a bounded minimum distance decoder (BMDD) to keep the complexity low. A BMDD can correct all error patterns with up to $\lfloor\frac{d_{\text{min}}-1}{2}\rfloor$ errors, where $d_{\text{min}}$ is the minimum distance of the code. It is straightforward to show that the ECC should have at least a minimum distance of $d_{\text{min}}=41$ to achieve a block-error probability of $P_\text{B}\leq 10^{-9}$ if all transform coefficients are assumed to have a bit-error probability of $p_{\text{max}}$. None of the binary BCH and Reed-Solomon (RS) codes, which have good minimum-distance properties, can satisfy these parameters. Similarly, the GV bound computed for $p_{\text{max}}$ shows that there exists a linear binary ECC with code dimension only $98$, which is less than the required $128$ bits. Consider the binary BCH code with block length $255$, code dimension $131$, which is greater than the code dimension of $98$ given by the GV bound, and minimum distance $\displaystyle d_{\text{min,BCH}}=37$, which is close to the required value of $d_{\text{min}}=41$. We illustrate in the next section that this BCH code provides a block-error probability significantly smaller than $10^{-9}$. Reliability, Privacy, and Secrecy Analysis of the Code ------------------------------------------------------ We now show that the proposed ECC satisfies the block-error probability constraint. The block-error probability $P_\text{B}$ for the $\text{BCH}(255,131,37)$ code with a BMDD is equal to the probability of having more than $18$ errors in the codeword, i.e., we have $$\begin{aligned} P_\text{B} = \sum_{j=19}^{255}\Bigg[\sum_{\mathcal{D}\in\mathcal{F}_j}\prod_{i\in \mathcal{D}}p_{i}\,\cdot\prod_{i\in \mathcal{D}^{c}}(1-p_{i}) \Bigg] \label{eq:blockerrorforbch}\end{aligned}$$ where $p_{i}\leq p_{\text{max}}$ is the bit-error probability of the $i$-th transform coefficient, as in Fig. 
\[fig:BERComparisonsofTrans\], for $i\!=\!2,3,\ldots,256$, $\displaystyle \mathcal{F}_j$ is the set of all size-$j$ subsets of the set $\displaystyle\{2,3,\ldots,256\}$, and $\mathcal{D}^{c}$ denotes the complement of the set $\mathcal{D}$. The bit-error probabilities $p_{i}$ represent probabilities of independent events due to the mutual independence assumption for transform coefficients and one-bit quantizers used. The evaluation of (\[eq:blockerrorforbch\]) requires $ \sum_{j=0}^{18}{255\choose j}\approx 1.90\!\times\!10^{27}$ different calculations, which is not practical. We therefore apply the discrete Fourier transform - characteristic function (DFT-CF) method [@DFTCF] to (\[eq:blockerrorforbch\]) and obtain the result $P_\text{B}\!\approx\!2.860\!\times\!10^{-12}\!<\!10^{-9}$. This value is smaller than the block-error probability $P_{\text{B,DCT}}= 1.26\times 10^{-11}$ obtained in [@benimdissertation] for the DCT with the same code. The block-error probability constraint is thus satisfied by using the $\text{BCH}$ code although the conservative analysis suggests otherwise. The rate regions given in (\[eq:ls0\]) and (\[eq:chosensecret\]) are asymptotic results, i.e., they assume $N\rightarrow \infty$. Since separate channel and secrecy coding is optimal for the FCS, we can use the finite-length bounds for a BSC $P_{Y|X}$ with crossover probability $p\!=\! \frac{1}{L-1}\sum_{i=2}^Lp_{i}\!\approx\!0.0088$, i.e., the error probability averaged over all used coefficients. In [@benimdissertation], we show that the $\text{BCH}(255,131,37)$ code achieves $(R_{\text{s,BCH}},R_{\ell,\text{BCH}})\approx(0.514,\,0.486)$ bits/source-bit, significantly better than previously proposed codes in the RO PUF literature, so it suffices to compare the proposed code with the best possible finite-length results for the FCS. We use Mrs. 
Gerber’s lemma [@WZ], giving the optimal auxiliary random variable $U$ in (\[eq:chosensecret\]), to compute all points in the region $\mathcal{R}_{\text{cs}}$. We plot all achievable rate pairs, the (secret-key, privacy-leakage) rate pair of the proposed BCH code, and a finite-length bound for the block length of $N=255$ bits and $P_\text{B}\!=\!10^{-9}$ in Fig. \[fig:ratecomparison\]. The maximum secret-key rate is $R_\text{s}^*\!\approx\!0.9268$ bits/source-bit with a corresponding minimum privacy-leakage rate of $R_\ell^*\!\approx\!0.0732$ bits/source-bit. The gap between the points $(R_{\text{s,BCH}},R_{\ell,\text{BCH}})$ and $(R_{\text{s}}^*,R_\ell^*)$ can be partially explained by the short block length of the code and the small block-error probability. The finite-length bound given in [@Polyanskiy Theorem 52] shows that the rate pair $(R_\text{s},R_\ell)\!=\!(0.7029,0.2971)$ bits/source-bit is achievable by using the FCS, as depicted in Fig. \[fig:ratecomparison\]. One can thus improve the rate pairs by using better codes and decoders with higher hardware complexity, which is undesirable for IoT applications. Fig. \[fig:ratecomparison\] also illustrates the fact that there are operation points of the region $\mathcal{R}_{\text{cs}}$ that cannot be achieved by using the FCS and, e.g., a nested polar code construction from [@bizimWZ] should be used to achieve all points in $\mathcal{R}_{\text{cs}}$. Conclusion {#sec:conclusion} ========== We proposed a new set of transforms that are orthogonal (so that the decorrelation efficiency is high), that have elements $1$ or $-1$ (so that the hardware complexity is low), and that have a size of $k\times k$ where $k$ is a power of 2 (so that an input-selection algorithm can be applied to further decrease complexity). 
By using one-bit uniform quantizers for each transform coefficient obtained by applying the ST, we obtained bit-error probabilities that are on average smaller than the bit-error probabilities obtained from previously considered transforms. We proposed a BCH code as the ECC for RO PUFs in combination with the FCS. This code achieves the best rate pair in the RO PUF literature and it gives a block-error probability for the ST that is substantially smaller than for the DCT. We illustrated that the FCS cannot achieve all possible rate points. In future work, in combination with the new set of transforms, we will apply a joint vector quantization and error correction method by using nested polar codes to achieve rate pairs that cannot be achieved by the FCS.
--- author: - '$^{1}$Ryosuke Akashi[^1] and $^{1,2}$Ryotaro Arita' title: 'Density Functional Theory for Plasmon-assisted Superconductivity' --- Introduction ============ Superconductivity has been one of the most fascinating fields in condensed matter physics ever since its discovery in the early twentieth century. After the success of its description by the Bardeen-Cooper-Schrieffer theory,[@BCS] particular attention has been paid to the material dependence of the superconducting transition temperature ($T_{\rm c}$): that is, why do some materials such as the celebrated cuprate[@Bednorz-Muller] exhibit high $T_{\rm c}$ while others do not? Since superconductivity emerges as a result of subtle interplay and competition of interactions between atoms and electrons having much larger energy scales, $T_{\rm c}$ is extremely sensitive to details of the electronic and crystal structure. Thus, an accurate quantitative treatment is essential to understand the emergence of high values of $T_{\rm c}$. For the conventional phonon-mediated mechanism, quantitative calculations have been performed within the Migdal-Eliashberg (ME) theory[@Migdal-Eliashberg] implemented with the first-principles method based on the Kohn-Sham density functional theory[@Kohn-Sham-eq]: In a variety of systems, phonon properties are well reproduced by the density functional perturbation theory[@Baroni-review] or the total-energy method[@Kunc-Martin-frozen] within the local density approximation[@Ceperley-Alder; @PZ81]. By using the calculated phonon spectrum and electron-phonon coupling as inputs, it has been shown that the ME theory explains the qualitative tendency of $T_{\rm c}$ for various materials[@Savrasov-Savrasov; @Choi-MgB2]. However, the ME formalism is not suitable for full *ab initio* calculations since it is difficult to treat the electron-electron interaction nonempirically. 
When we calculate $T_{\rm c}$ by solving the Eliashberg equation or using related approximate formulae such as the McMillan equation,[@McMillan; @AllenDynes] we vary the value of $\mu^{\ast}$ (Ref. ) representing the effective electron-electron Coulomb interaction which suppresses the Cooper-pair formation, and examine whether the range of the resulting $T_{\rm c}$ covers the experimentally observed value. With such a semi-empirical framework, the material dependence of the electron-electron interaction cannot be understood quantitatively. The recent progress in the density functional theory for superconductors (SCDFT)[@Oliveira; @Kreibich; @GrossI] has changed the situation. There, a non-empirical scheme describing the physics in the ME theory was formulated: Based on the Kohn-Sham orbital, it treats the weak-to-strong electron-phonon coupling, the screened electron-electron interaction within the static approximation, and the retardation effect[@Morel-Anderson] due to the difference in the energy ranges of these interactions. This scheme has been demonstrated to reproduce experimental $T_{\rm c}$s of various conventional phonon-mediated superconductors with deviations of less than a few K.[@GrossII; @Floris-MgB2; @Sanna-CaC6; @Bersier-CaBeSi] More recently, it has been employed to examine the validity of the ME theory in fully gapped superconductors with high $T_{\rm c}$ such as layered nitrides[@Akashi-MNCl] and alkali-doped fullerides.[@Akashi-fullerene] Through these applications, the current SCDFT has proved to be an informative method well-suited to investigate the nontrivial effects of the electron-electron interaction behind superconducting phenomena. Although the electron-electron interaction just suppresses the pairing in the ME theory, possibilities of superconductivity induced by the electron-electron interaction have also long been explored. 
Since the discovery of the cuprates,[@Bednorz-Muller] superconductivity induced by short-range Coulomb interaction has been extensively investigated.[@Scalapino-review2012] On the other hand, there have been many proposals of superconducting mechanisms concerning long-range Coulomb interaction since the seminal work of Kohn and Luttinger.[@Kohn-Luttinger] In particular, there is a class of mechanisms that exploit the dynamical structure of the screened Coulomb interaction represented by the frequency-dependent dielectric function $\varepsilon(\omega)$: e.g., the plasmon[@Radhakrishnan1965; @Frohlich1968; @Takada1978; @Rietschel-Sham1983] and exciton[@Little1967] mechanisms. Interestingly, such mechanisms can cooperate with the conventional phonon mechanism. Since they usually favor $s$-wave pairing, they have a chance to enhance $s$-wave superconductivity together with the phonon mechanism. Taking this possibility into account, these mechanisms are important even when they do not alone induce superconductivity. Therefore, they are expected to be relevant to a broader range of systems than originally envisioned in the early studies. In fact, for a variety of systems having low-energy electronic excitations, theoretical model calculations addressing such a cooperation have been performed: SrTiO$_{3}$ [@Koonce-Cohen1967; @Takada-SrTiO3] with small plasmon frequencies due to small electron densities, $s$-$d$ transition metals [@Garland-sd] where “demon" acoustic plasmons have been discussed,[@Pines-demon; @Ihm-Cohen1981] metals sandwiched by small-gap semiconductors [@Ginzburg-HTSC; @ABB1973], and layered systems where two-dimensional acoustic plasmons are proposed to become relevant [@Kresin1987; @Bill2002-2003]. Moreover, recent experimental discoveries of high-temperature superconductivity in doped band insulators have stimulated more quantitative analyses on effects of the cooperation [@Yamanaka1998; @Bill2002-2003; @Taguchi2006; @Taniguchi2012; @Ye2012]. 
Considering the above grounds, the situation calls for an *ab initio* theory that treats the phonon-mediated interaction and the dynamical screened Coulomb interaction together, with which one can study, on an equal footing, superconductors governed by phonons, by the dynamical Coulomb interaction, and by their cooperation. The aim of our present study is to establish this by extending the applicability of SCDFT. In this paper, we review the recent theoretical extension to include the plasmon-induced dynamical screened Coulomb interaction.[@Akashi-plasmon] In Sec. \[sec:theory\], we present the theoretical formulation and its practical implementation, and discuss how plasmons can enhance superconductivity. Section \[sec:appl-Li\] describes the application to elemental lithium under high pressures, for which the plasmon effect is expected to be substantial because of its relatively dilute electron density. In Sec. \[sec:summary\] we summarize our results and give concluding remarks. Formulation {#sec:theory} =========== General formalism {#subsec:theory-general} ----------------- Let us start from a brief review of SCDFT.[@GrossI] The current SCDFT employs the gap equation $$\begin{aligned} \Delta_{n{\bf k}}\!=\!-\mathcal{Z}_{n\!{\bf k}}\!\Delta_{n\!{\bf k}} \!-\!\frac{1}{2}\!\sum_{n'\!{\bf k'}}\!\mathcal{K}_{n\!{\bf k}\!n'{\bf k}'} \!\frac{\mathrm{tanh}[(\!\beta/2\!)\!E_{n'{\bf k'}}\!]}{E_{n'{\bf k'}}}\!\Delta_{n'\!{\bf k'}} \label{eq:gap-eq}\end{aligned}$$ to obtain $T_{\rm c}$, which is specified as the temperature where the calculated value of the gap function $\Delta_{n{\bf k}}$ becomes zero. Here, $n$ and ${\bf k}$ denote the band index and crystal momentum, respectively, $\Delta$ is the gap function, and $\beta$ is the inverse temperature. 
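To build intuition for Eq. (\[eq:gap-eq\]), consider a toy isotropic limit (our illustrative sketch, not part of the SCDFT implementation): with a constant attractive kernel $-\lambda$ inside a Debye window $\omega_{\rm D}$ and $\Delta\rightarrow 0$, the linearized equation $1=\lambda\int_{0}^{\omega_{\rm D}}d\xi\,\tanh[\xi/(2T_{\rm c})]/\xi$ determines $T_{\rm c}$ as the temperature where the nontrivial solution disappears, and a bisection on $T$ recovers the weak-coupling BCS estimate $T_{\rm c}\approx1.13\,\omega_{\rm D}\,e^{-1/\lambda}$:

```python
from math import tanh, exp

def rhs(T, lam, wD, n=5000):
    """lam * integral_0^wD tanh(xi/(2T))/xi dxi via the midpoint rule (integrand -> 1/(2T) at xi=0)."""
    h = wD / n
    return lam * h * sum(tanh((i + 0.5) * h / (2 * T)) / ((i + 0.5) * h) for i in range(n))

def tc_bisect(lam, wD=1.0, lo=1e-4, hi=0.5, iters=60):
    """Solve rhs(Tc) = 1 by bisection; the integral decreases monotonically with T."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if rhs(mid, lam, wD) > 1.0:
            lo = mid  # integral still too large: Tc lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For, say, $\lambda=0.3$ the numerical root agrees with $1.13\,\omega_{\rm D}e^{-1/\lambda}$ to within a few percent, which is the weak-coupling regime the full kernel-based equation generalizes.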
The energy $E_{n {\bf k}}$ is defined as $E_{n {\bf k}}$=$\sqrt{\xi_{n {\bf k}}^{2}+\Delta_{n {\bf k}}^{2}}$ and $\xi_{n {\bf k}}=\epsilon_{n {\bf k}}-\mu$ is the one-electron energy measured from the chemical potential $\mu$, where $\epsilon_{n {\bf k}}$ is obtained by solving the normal Kohn-Sham equation in density functional theory $ \mathcal{H}_{\rm KS}|\varphi_{n{\bf k}}\rangle=\epsilon_{n{\bf k}} |\varphi_{n{\bf k}}\rangle $ with $\mathcal{H}_{\rm KS}$ and $|\varphi_{n{\bf k}}\rangle$ being the Kohn-Sham Hamiltonian and the Kohn-Sham state, respectively. The functions $\mathcal{Z}$ and $\mathcal{K}$, which are called the exchange-correlation kernels, describe the effects of all the interactions involved: They are defined as the second functional derivative of the free energy with respect to the anomalous electron density. A formulation of the free energy based on the Kohn-Sham perturbation theory\cite{} enables practical calculations of the exchange-correlation functionals using the Kohn-Sham eigenvalues and eigenfunctions derived from standard *ab initio* methods. The nondiagonal exchange-correlation kernel $\mathcal{K}$ is composed of two parts $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el}$ representing the electron-phonon and electron-electron interactions, whereas the diagonal kernel $\mathcal{Z}$ consists of one contribution $\mathcal{Z}$$=$$\mathcal{Z}^{\rm ph}$ representing the mass renormalization of the normal-state band structure due to the electron-phonon coupling. The phonon parts, $\mathcal{K}^{\rm ph}$ and $\mathcal{Z}^{\rm ph}$, properly treat the conventional strong-coupling superconductivity. The electron-electron contribution $\mathcal{K}^{\rm el}$ is the matrix element of the *static* screened Coulomb interaction $\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|\varepsilon^{-1}(0)V|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$ with $V$ being the bare Coulomb interaction. 
Currently, the Thomas-Fermi approximation and the random-phase approximation (RPA) have been applied for the static dielectric function $\varepsilon^{-1}(0)$.[@Massidda] With these settings, the two parts of the nondiagonal kernel have different Kohn-Sham energy dependence: $\mathcal{K}^{\rm ph}$ has large values only for the states within the phonon energy scale, whereas $\mathcal{K}^{\rm el}$ decays slowly with the electronic energy scale. With this Kohn-Sham-state dependence, the retardation effect[@Morel-Anderson] is quantitatively treated. Thus, within the framework of the density functional theory, the SCDFT accurately treats the physics of the Migdal-Eliashberg theory (based on the Green’s function). ![(Color online) Diagram corresponding to the electron nondiagonal kernel, $\mathcal{K}^{\rm el}$. The solid line with arrows running in the opposite direction denotes the electronic anomalous propagator [@GrossI]. The blue wavy line denotes the screened electronic Coulomb interaction, which is a product of the inverse dielectric function $\varepsilon^{-1}$ and the bare Coulomb interaction $V$.[]{data-label="fig:diagram"}](diagrams_ed_130926.jpg) The current setting $\mathcal{K}^{\rm el}$ $=$ $\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|$ $\varepsilon^{-1}(0)V$ $|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$ corresponds to the anomalous exchange contribution from the screened Coulomb interaction represented in Fig. \[fig:diagram\] with the $\omega$ dependence of $\varepsilon$ omitted. To incorporate effects of the plasmon on the interaction, we retain its frequency dependence. The diagram thus yields the following form $$\begin{aligned} \hspace{-30pt}&& \mathcal{K}^{\rm el, dyn}_{n{\bf k},n'{\bf k}'} \!=\! 
\lim_{\{\Delta_{n{\bf k}}\}\rightarrow 0} \frac{1}{{\rm tanh}[(\beta /2 ) E_{n{\bf k}}]} \frac{1}{{\rm tanh}[(\beta /2) E_{n'{\bf k}'}]} \nonumber \\ \hspace{-10pt}&& \hspace{10pt}\times \frac{1}{\beta^{2}} \sum_{\tilde{\omega}_{1}\tilde{\omega}_{2}} F_{n{\bf k}}({\rm i}\tilde{\omega}_{1}) F_{n'{\bf k}'}({\rm i}\tilde{\omega}_{2}) W_{n{\bf k}n'{\bf k}'}[{\rm i}(\tilde{\omega}_{1}\!\!-\!\!\tilde{\omega}_{2})] , \label{eq:kernel-dyn}\end{aligned}$$ where $F_{n{\bf k}}({\rm i}\tilde{\omega})$ $=$ $\frac{1}{{\rm i}\tilde{\omega}\!+\!E_{n{\bf k}}} \!-\! \frac{1}{{\rm i}\tilde{\omega}\!-\!E_{n{\bf k}}} $ and $\tilde{\omega}_{1}$ and $\tilde{\omega}_{2}$ denote the Fermionic Matsubara frequencies. The function $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$$\equiv$$\langle \varphi_{n{\bf k}\uparrow}\varphi_{n-{\bf k}\downarrow}|\varepsilon^{-1}({\rm i}\omega)V|\varphi_{n'{\bf k}'\uparrow}\varphi_{n'-{\bf k}'\downarrow}\rangle$ is the screened Coulomb interaction. We then apply the RPA[@RPA] to the $\omega$-dependent dielectric function, which is a standard approximation to describe the plasmon under a crystal field. Formally, the present RPA kernel can also be derived from the RPA free energy defined by Eq. (13) in Ref. : The set of terms of order $O(FF^{\dagger})$ (i.e., the set of the diagrams having only one anomalous bubble taken from Fig. 2 in Ref. ) corresponds to the present kernel. The Coulomb interaction $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu)$ is practically calculated using a certain set of basis functions. Let us here summarize the plane-wave representation, which has been employed in our studies: $$\begin{aligned} &&\hspace{-30pt}W_{n{\bf k}n'{\bf k}'}({\rm i}\nu) \nonumber \\ && = \frac{4\pi}{\Omega}\! \sum_{{\bf G}\!{\bf G}'}\! 
\frac{ \!\rho^{n{\bf k}}_{n'{\bf k}'}(\!{\bf G}\!)\tilde{\varepsilon}^{-1}_{{\bf G}{\bf G}'}(\!{\bf k}\!-\!{\bf k}'\!;{\rm i}\nu)\!\{\rho^{n{\bf k}}_{n'{\bf k}'}(\!{\bf G}'\!)\!\}^* } { |{\bf k}-{\bf k}'+{\bf G}||{\bf k}-{\bf k}'+{\bf G}'| }\!, \label{eq:K-el-RPA}\end{aligned}$$ with $\tilde{\varepsilon}_{{\bf G}{\bf G}'}(\!{\bf k}\!-\!{\bf k}'\!;{\rm i}\nu)$ being the symmetrized dielectric matrix,[@Hybertsen-Louie] defined by $$\begin{aligned} \tilde{\varepsilon}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu) \!\!\!\!&=&\!\!\!\!\!\! \delta_{{\bf G}{\bf G}'} \nonumber \\ &&\!\!\!-4\pi\frac{1}{|{\bf K}\!+\!{\bf G}|}\chi^{0}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu)\frac{1}{|{\bf K}\!+\!{\bf G}'|} .\end{aligned}$$ The independent-particle polarization $\chi^{0}_{{\bf G}{\bf G}'}({\bf K}; {\rm i}\nu)$ denotes $$\begin{aligned} \chi^{0}_{{\bf G}{\bf G}'}({\bf K};{\rm i}\nu) &\!\!\!\!=&\!\!\!\! \frac{2}{\Omega} \sum_{{\bf k}}\sum_{\substack{n:{\rm unocc}\\n':{\rm occ}}} [\rho^{n{\bf k}+{\bf K}}_{n'{\bf k}}({\bf G})]^{\ast}\rho^{n{\bf k}+{\bf K}}_{n'{\bf k}}({\bf G}') \nonumber \\ && \hspace{-35pt}\times [\frac{1}{{\rm i}\nu \!-\! \epsilon_{n {\bf k}+{\bf K}} \!+\! \epsilon_{n' {\bf k}}} - \frac{1}{{\rm i}\nu \!+\! \epsilon_{n {\bf k}+{\bf K}} \!-\! \epsilon_{n' {\bf k}}}] , \label{eq:chi-def}\end{aligned}$$ where the band indices $n$ and $n'$ run through the unoccupied bands and occupied bands for each **k**, respectively. The matrix $\rho^{n'{\bf k}'}_{n{\bf k}}({\bf G})$ is defined by $$\begin{aligned} \rho^{n'{\bf k}'}_{n{\bf k}}({\bf G}) &=& \int_{\Omega} d{\bf r} \varphi^{\ast}_{n'{\bf k}'}({\bf r}) e^{{\rm i}({\bf k}'-{\bf k}+{\bf G})\cdot{\bf r}} \varphi_{n{\bf k}}({\bf r}). 
\label{eq:rho}\end{aligned}$$ So far, we have ignored the intraband (Drude) contribution to $\tilde{\varepsilon}$ for ${\bf k}-{\bf k}'=0$: The kernel including this contribution diverges as $({\bf k}-{\bf k}')^{-2}$, whereas the total contribution by the small ${\bf k}-{\bf k}'$ to $T_{\rm c}$ should scale as $({\bf k}-{\bf k}')^{1}$ because of the ${\bf k}'$ integration in Eq. (\[eq:gap-eq\]). ![(Color online) (a) Energy dependence of nondiagonal kernels entering the gap equation. Phonon-induced attraction, static Coulomb repulsion, and the plasmon-induced high-energy Coulomb repulsion are indicated in red, green, and blue, respectively. (b) Approximate solution of the gap equation solved with the phonon and static Coulomb parts. (c) Energy dependence of the kernels in a case where the phonon part is negligibly small and the plasmon part is dominant.[]{data-label="fig:interaction"}](interactions_130920.jpg) The physical meaning of the present dynamical correction to the previous static kernel is as follows. In real systems, screening by charge fluctuations is ineffective for the interaction with large energy exchanges \[i.e., $\varepsilon(\omega) \xrightarrow{\omega \rightarrow \infty} 1$\], whereas it becomes significant as the energy exchange becomes small compared with typical energies of charge excitations. However, the conventional static approximation ignores this energy dependence of the screening by extrapolating the static value of the interaction to high energies, and underestimates the screened Coulomb repulsion with large energy exchanges. The present extension corrects this underestimation, and gives an additional repulsive contribution to the Coulomb matrix elements between the Cooper pairs having much different energies. Interestingly, this additional contribution can raise $T_{\rm c}$. Let us discuss this point in terms of the interaction kernel entering the energy-averaged gap equation $$\begin{aligned} \Delta(\xi) = -\frac{1}{2}N(0) \int \!\! 
d\xi' \! \mathcal{K}(\xi\!,\xi')\frac{{\rm tanh}[(\beta/2)\xi']}{\xi'}\Delta(\xi') , \label{eq:gap-eq-ave}\end{aligned}$$ where we define the averaged nondiagonal kernel as $\mathcal{K}(\xi,\xi')=\frac{1}{N(0)^{2}}\sum_{n{\bf k}n'{\bf k}'}\delta(\xi-\xi_{n{\bf k}})\delta(\xi'-\xi_{n'{\bf k}'})K_{n{\bf k}n'{\bf k}'}$ with $N(0)$ being the electronic density of states at the Fermi level and omit the diagonal kernel for simplicity. This equation qualitatively describes coherent Cooper pairs represented by $\Delta(\xi)$ scattered by the pairing interactions. Suppose $\mathcal{K}=\mathcal{K}^{\rm ph}+\mathcal{K}^{\rm el}$, $N(0)\mathcal{K}^{\rm ph}(\xi,\xi')=-\lambda$ within the Debye frequency $\omega_{\rm ph}$ and $N(0)\mathcal{K}^{\rm el}(\xi,\xi')=\mu$ within a certain electronic energy range such as $E_{\rm F}$ (the red and green parts in panel (a) of Fig. \[fig:interaction\]). Solving this equation by assuming $\Delta(\xi)$ to be nonzero and constant only within $\omega_{\rm ph}$, we obtain the BCS-type $T_{\rm c}$ formula $T_{\rm c}\propto \omega_{\rm ph}$$\times$$ {\rm exp}[-1/(\lambda-\mu)]$ for $\mu-\lambda<0$. However, if we allow $\Delta(\xi)$ to have nonzero constant values for $|\xi|>\omega_{\rm ph}$, we instead obtain $T_{\rm c}\propto \omega_{\rm ph}$$\times$${\rm exp}[-1/(\lambda-\mu^{\ast})]$ with $\mu^{\ast}=\mu/(1+\mu{\rm ln}[E_{\rm F}/\omega_{\rm ph}])<\mu$, and then, the resulting values of $\Delta(\xi)$ have opposite signs for $|\xi|<\omega_{\rm ph}$ and $|\xi|>\omega_{\rm ph}$ \[panel (b) in Fig. \[fig:interaction\]\]. Here, even if the total low-energy interaction $\mu-\lambda$ is repulsive, a superconducting state is realized if $\mu^{\ast}-\lambda<0$. 
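The retardation formula above is easy to probe numerically (our sketch with illustrative numbers, not values from this paper): with $\mu^{\ast}=\mu/(1+\mu\,{\rm ln}[E_{\rm F}/\omega_{\rm ph}])$, a repulsion that would forbid pairing at the bare level ($\mu>\lambda$) can still leave $T_{\rm c}\propto\omega_{\rm ph}\,{\rm exp}[-1/(\lambda-\mu^{\ast})]>0$.

```python
from math import exp, log

def mu_star(mu, E_F, w_ph):
    """Effective (retarded) Coulomb repulsion mu* = mu / (1 + mu ln(E_F / w_ph))."""
    return mu / (1 + mu * log(E_F / w_ph))

def tc_bcs(lam, mu_eff, w_ph):
    """BCS-type estimate Tc ~ w_ph exp(-1/(lam - mu_eff)); zero when the net coupling is repulsive."""
    return w_ph * exp(-1.0 / (lam - mu_eff)) if lam > mu_eff else 0.0

# Illustrative numbers (not from the paper): lam = 0.4, mu = 0.5, E_F / w_ph = 100
lam, mu, E_F, w_ph = 0.4, 0.5, 100.0, 1.0
ms = mu_star(mu, E_F, w_ph)  # retardation weakens mu = 0.5 down to ~0.15
```

Here `tc_bcs(lam, mu, w_ph)` vanishes (bare repulsion wins), while `tc_bcs(lam, ms, w_ph)` is finite, which is the retardation effect in miniature.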
This weakening of the effective Coulomb repulsion is the celebrated retardation effect,[@Morel-Anderson] and its origin is the negative values of the high-energy gap function: Since the scattering by repulsion between Cooper pairs having $\Delta$ with opposite signs is equivalent to the scattering by attraction between those with the same signs, there is a gain of the condensation energy.[@Kondo-PTP1963] Next, let us add the plasmon contribution \[blue part in panel (a)\], which enhances the repulsion by $\Delta \mu$ for $\xi$ with an energy scale of the plasmon frequency $\omega_{\rm pl}$. Then, more condensation energy can be gained by enhancing the high-energy negative gap function, which increases $T_{\rm c}$. As an extreme situation, one can also consider the case where the phonon-induced attraction is negligible and the plasmon-induced repulsion is dominant \[panel (c)\]. Obviously, a superconducting solution exists even in this case because the discussion about the above $T_{\rm c}$ formula is also valid with the transformation $\lambda$$\rightarrow$$\Delta \mu$, $\mu$$\rightarrow$$\mu+\Delta \mu$ and $\omega_{\rm ph}$$\rightarrow$$\omega_{\rm pl}$. These discussions illustrate that the plasmon contribution can increase $T_{\rm c}$ by enhancing the high-energy repulsion. To the authors’ knowledge, the plasmon mechanism of the above-mentioned type to enhance $T_{\rm c}$ was originally studied by Takada[@Takada1978] based on the Green’s function formalism for the two- and three-dimensional homogeneous electron gas. Using the gap equation he derived, he also performed calculations of $T_{\rm c}$ considering both the phonons and plasmons for doped SrTiO$_{3}$ (Ref. 
) and metal-intercalated graphites.[@Takada-graphite1982; @Takada-graphite2009] Our present formalism, which treats the local field effect of inhomogeneous electron distribution behind the phonon and plasmon, is a DFT-based counterpart of his theory.[@comment-counterpart] Multipole plasmon approximation {#subsec:plasmon-pole} ------------------------------- Next we present a formulation to calculate $T_{\rm c}$ using the extended kernel. Evaluation of Eq. (\[eq:kernel-dyn\]) requires performing double discrete Matsubara summations over the electronic energy scale, which is impractically demanding. We then analytically carry out the summations by approximating $W_{n{\bf k}n'{\bf k'}}$ as a simple function. For this purpose, we employ a multipole plasmon approximation $$\begin{aligned} \tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\tilde{\nu}_{m}) \!\!\!\!&=&\!\!\!\! W_{n{\bf k}n'{\bf k}'}(0) \nonumber \\ &&+ \sum^{N_{\rm p}}_{i} a_{i;n{\bf k}n'{\bf k}'} g_{i;n{\bf k}n'{\bf k}'}(\tilde{\nu}_{m}) , \label{eq:W-tilde}\end{aligned}$$ with $g_{i;n{\bf k}n'{\bf k}'}$ being $$\begin{aligned} g_{i;n{\bf k}n'{\bf k}'}(x) = \frac{2}{\omega_{i;n{\bf k}n'{\bf k}'}} -\frac{2\omega_{i;n{\bf k}n'{\bf k}'}}{x^{2}\!+\!\omega^{2}_{i;n{\bf k}n'{\bf k}'}} .\end{aligned}$$ Here, $\tilde{\nu}_{m}$ denotes the Bosonic Matsubara frequency. In contrast with the case of the uniform electron gas, inhomogeneous systems can have a variety of plasmon modes, and our aim is to treat these modes in a unified manner. Substituting Eq. (\[eq:W-tilde\]) in Eq. (\[eq:kernel-dyn\]), we finally obtain $\mathcal{K}^{\rm el,dyn}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$ with $\mathcal{K}^{\rm el,stat}_{n{\bf k}n'{\bf k}'}$$=$$W_{n{\bf k}n'{\bf k}'}(0)$ and $$\begin{aligned} \hspace{-10pt} \Delta\mathcal{K}^{\rm el}_{n{\bf k},n'{\bf k}'} &\!\!\!\!\!\!\!=&\!\!\!\!\!\! \sum_{i}^{N_{\rm p}}\!2a_{i;n{\bf k}n'{\bf k}'} \!\left[ \frac{1} {\omega_{i;n{\bf k}n'{\bf k}'}} \right. 
\nonumber \\ && \hspace{-50pt} \left. + \frac{ I\!(\xi_{n{\bf k}}\!,\!\xi_{n'{\bf k}'}\!,\omega_{i;n{\bf k}n'{\bf k}'}\!) \!\!-\!\! I\!(\xi_{n{\bf k}}\!,-\!\xi_{n'{\bf k}'}\!,\omega_{i;n{\bf k}n'{\bf k}'}\!) }{{\rm tanh}[(\beta/2) \xi_{n{\bf k}}]{\rm tanh}[(\beta/2) \xi_{n'{\bf k}'}]} \right] , \label{eq:Delta-kernel}\end{aligned}$$ where the function $I$ is defined by Eq. (55) in Ref. . In order to calculate Eq. (\[eq:Delta-kernel\]), we determine the plasmon coupling coefficients $a_{i;n{\bf k}n'{\bf k}'}$ and the plasmon frequencies $\omega_{i;n{\bf k}n'{\bf k}'}$ by the following procedure: (i) calculate the screened Coulomb interaction on the [*real*]{} frequency grid, $W_{n{\bf k}n'{\bf k}'}(\nu_{j}\!+\!{\rm i}\eta)$, where $\{\nu_{j}\}$ ($j=1,2,\ldots,N_{\omega}$) specifies the frequency grid on which the numerical calculation is performed and $\eta$ is a small positive parameter; (ii) determine the plasmon frequencies $\{\omega_{i;n{\bf k}n'{\bf k}'}\}$ from the positions of the peaks up to the $N_{\rm p}$-th largest in ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}\!+\!{\rm i}\eta)$; (iii) calculate the screened Coulomb interaction on the [*imaginary*]{} frequency grid, $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$; and (iv) using the calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$, determine the plasmon coupling coefficients $\{a_{i;n{\bf k}n'{\bf k}'}\}$ via least-squares fitting with $\tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j})$. For the fitting, the variance to be minimized is defined as $$\begin{aligned} S_{n{\bf k}n'{\bf k}'} \!\!\!\!&=&\!\!\!\! \sum^{N_{\omega }}_{j} \delta \omega_{j}\biggl[ W_{n{\bf k}n'{\bf k}'}({\rm i}\nu_{j}) -W_{n{\bf k}n'{\bf k}'}(0) \nonumber \\ && - \sum^{N_{\rm p}}_{i} a_{i;n{\bf k}n'{\bf k}'} g_{i;n{\bf k}n'{\bf k}'}(\nu_{j}) \biggr]^{2} ,\end{aligned}$$ and we have introduced a weight $\delta \omega_{j}$ satisfying $\sum^{N_{\omega }}_{j}\delta \omega_{j}$$=$$1$. 
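For concreteness, the model function of Eq. (\[eq:W-tilde\]) and the variance above can be sketched numerically as follows. This is a minimal illustration in our own notation; the function and variable names are not from any reference implementation:

```python
import numpy as np

def g(x, omega):
    """Pole profile g_i(x) = 2/omega - 2*omega/(x^2 + omega^2).

    g(0) = 0, so the static value W(0) is preserved exactly,
    while g(x) -> 2/omega for |x| >> omega.
    """
    return 2.0 / omega - 2.0 * omega / (x ** 2 + omega ** 2)

def w_tilde(nu, w0, a, omegas):
    """Multipole model: W~(i nu) = W(0) + sum_i a_i g_i(nu)."""
    return w0 + sum(ai * g(nu, wi) for ai, wi in zip(a, omegas))

def variance(nu, w_data, w0, a, omegas, dw):
    """Weighted variance S = sum_j dw_j [W(i nu_j) - W~(i nu_j)]^2,
    with weights dw_j normalized so that sum_j dw_j = 1."""
    resid = w_data - w_tilde(nu, w0, a, omegas)
    return float(np.sum(dw * resid ** 2))
```

Minimizing `variance` over the couplings `a` at fixed plasmon frequencies is precisely the linear least-squares problem whose solution appears in Eq. (\[eq:fit-coeff\]) below.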
With all the plasmon frequencies given, the extremum conditions $\frac{\partial S}{\partial a_{i}}=0$ ($i=1,\ldots,N_{\rm p}$) read $$\begin{aligned} \begin{pmatrix} a_{1}\\ a_{2}\\ \vdots \end{pmatrix} \!\!\!\!&=&\!\!\!\! \begin{pmatrix} V^{gg}_{11} & V^{gg}_{12} & \cdots \\ V^{gg}_{21} & V^{gg}_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}^{-1} \begin{pmatrix} V^{Wg}_{1} \\ V^{Wg}_{2} \\ \vdots \end{pmatrix} . \label{eq:fit-coeff}\end{aligned}$$ Here, $V^{Wg}$ and $V^{gg}$ are defined by $$\begin{aligned} V^{Wg}_{i} &\!\!\!\!\!=&\!\!\!\!\! \sum_{j=1}^{N_{\omega}} \delta\omega_{j}[W_{j}-W(0)] g_{i}(\nu_{j}) ,\\ V^{gg}_{ij} &\!\!\!\!\!=&\!\!\!\!\! \sum_{k=1}^{N_{\omega}} \delta\omega_{k} g_{i}(\nu_{k}) g_{j}(\nu_{k}) .\end{aligned}$$ For arbitrary frequency grids, we define the weight as $$\begin{aligned} \delta\omega_{j}\propto \left\{ \begin{array}{cl} 0 & (j=1, N_{\omega}) \\ (\nu_{j+1}\!-\!\nu_{j-1})p_{j} & (j\neq 1, N_{\omega}) \\ \end{array} \right. .\end{aligned}$$ The factor $p_{j}$ is a weight for the variance function introduced for generality, and we set $p_{j}=1$ in Secs. \[sec:theory\] and \[sec:appl-Li\]. When a negative plasmon coupling appears, we fix the corresponding coupling to zero, recalculate Eq. (\[eq:fit-coeff\]), and repeat this procedure until all the coupling coefficients become nonnegative, so that the positive definiteness of the loss function is guaranteed. ![(Color online) Screened Coulomb interaction $W_{n{\bf k}n'{\bf k}'}$ and the corresponding approximate function $\tilde{W}_{n{\bf k}n'{\bf k}'}$ for fcc lithium under 14GPa calculated along the real frequency axis \[(a), (c)\], and the imaginary frequency axis \[(b), (d)\]. The band indices $n$ and $n'$ specify the partially occupied band. 
${\bf k}$ and ${\bf k}'$ are $(2\pi/a)(1/7,1/7,1/7)$ and $(0,0,0)$ for (a)–(b), whereas $(2\pi/a)(2/7,2/7,6/7)$ and $(0,0,0)$ for (c)–(d).[]{data-label="fig:fit"}](Li_fcc_14GPa_Wnknk_k7_fit_w_ed_130926_2.jpg) For the determination of plasmon frequencies, the calculated spectrum of ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\eta)$ is examined for each {$n,{\bf k},n',{\bf k}'$}. We have implemented a simple algorithm as follows: First, the peaks are specified as the points where the gradient of ${\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}+{\rm i}\eta)$ turns from negative to positive; next, the specified peaks are ranked by their weighted values $p_{j}{\rm Im}W_{n{\bf k}n'{\bf k}'}(\nu_{j}+{\rm i}\eta)$. By increasing $N_{\rm p}$, we can expect that all the relevant plasmon modes are properly considered. We show in Fig. \[fig:fit\] the results of the fitting for fcc Li under 14GPa as typical cases where the fitting is straightforward \[panels (a) and (b)\] and difficult \[(c) and (d)\]. The peaks used for the fitting are indicated by arrows. For the former, an accurate fitting function was obtained with $N_{\rm p}$$=$$2$, where the derived fitting function and its analytic continuation $\tilde{W}_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$ indicated by thick blue lines reproduce the numerically calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ and $W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$ quite well, respectively. For the latter, on the other hand, good agreement between $\tilde{W}_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ and $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ was not achieved with $N_{\rm p}$$\leq$7, where $a_{i;n{\bf k}n'{\bf k}'}$ for the peaks indicated by the smaller arrows were zero. This was because one of the relevant plasmon modes indicated by the larger arrows was the eighth largest with respect to the peak height. 
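The peak search of step (ii) and the constrained refit of Eq. (\[eq:fit-coeff\]) can be sketched as follows. This is an illustrative reimplementation with our own names; in particular, treating plasmon peaks as local minima of ${\rm Im}\,W$ (negative for the retarded interaction) is our reading of the sign convention:

```python
import numpy as np

def g(x, omega):
    """Pole profile g_i(x) = 2/omega - 2*omega/(x^2 + omega^2)."""
    return 2.0 / omega - 2.0 * omega / (x ** 2 + omega ** 2)

def find_plasmon_peaks(nu, im_w, p, n_p):
    """Step (ii): flag grid points where the gradient of Im W turns
    from negative to positive, then keep the n_p peaks with the
    largest weighted magnitude |p_j Im W_j|."""
    grad = np.diff(im_w)
    peaks = [j for j in range(1, len(im_w) - 1)
             if grad[j - 1] < 0 and grad[j] >= 0]
    peaks.sort(key=lambda j: abs(p[j] * im_w[j]), reverse=True)
    return [nu[j] for j in peaks[:n_p]]

def fit_couplings(nu, w_data, w0, omegas, dw):
    """Step (iv): weighted least squares a = (V^gg)^{-1} V^Wg.
    Negative couplings are pinned to zero and the remaining ones
    refitted until all a_i >= 0, as described in the text."""
    G = np.array([g(nu, w) for w in omegas])   # shape (n_p, N_omega)
    target = w_data - w0                       # W(i nu_j) - W(0)
    a = np.zeros(len(omegas))
    active = list(range(len(omegas)))
    while active:
        Ga = G[active]
        Vgg = (Ga * dw) @ Ga.T                 # V^gg_ij
        Vwg = (Ga * dw) @ target               # V^Wg_i
        sol = np.linalg.solve(Vgg, Vwg)
        if np.all(sol >= 0.0):
            for idx, val in zip(active, sol):
                a[idx] = val
            break
        # pin the negative couplings to zero and refit the rest
        active = [idx for idx, val in zip(active, sol) if val >= 0.0]
    return a
```

With the frequencies and couplings determined, the model is recovered as `w0 + sum(a_i * g(nu, omega_i))` and enters the kernel through Eq. (\[eq:Delta-kernel\]).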
The convergence of $T_{\rm c}$ with respect to $N_{\rm p}$ can be slow due to such a feature, though it becomes serious only for {$n, {\bf k}, n', {\bf k}'$} where the dynamical structure is blurred by strong plasmon damping \[see the vertical axes in panels (a) and (c)\]. Here we also note possible systematic errors in the present algorithm. First, multiple plasmon peaks in $W_{n{\bf k}n'{\bf k}'}(\omega+{\rm i}\delta)$ may mutually overlap due to their peak broadening. Then, some plasmon modes are hidden by large broad peaks and cannot be specified even if we increase $N_{\rm p}$. We have assumed that these hidden modes are negligible because of their small spectral weight and strong damping. Next, the variance does not exactly converge to zero since the numerically calculated $W_{n{\bf k}n'{\bf k}'}({\rm i}\omega)$ shows a weak cusplike structure at ${\rm i}\omega=0$ \[see panel (d) in Fig. \[fig:fit\]\]. This structure probably originates from the finite lifetime of the plasmon modes. Its effect is not captured by the plasmon-pole approximation, and will be examined in future studies.

\[tab:Tc\]

  --------------------------------- ------------ ------- ------- ------- -------
                                     Al           Li
                                                  14GPa   20GPa   25GPa   30GPa
  $\lambda$                          0.417        0.522   0.623   0.722   0.812
  $\lambda^{a}$                                   0.49    0.66            0.83
  $\omega_{\rm ln}$ \[K\]            314          317     316     308     304
  $r_{s}$                            2.03         2.71    2.64    2.59    2.55
  $\Omega_{\rm p}$ \[eV\]            16.2         8.23    8.44    8.51    8.58
  $T_{\rm c}^{\rm ph}$ \[K\]         5.9          10.0    15.2    19.0    23.3
  $T_{\rm c}^{\rm stat}$ \[K\]       0.8          0.7     1.8     3.2     5.0
  $T_{\rm c}^{N_{\rm p}=1}$ \[K\]    1.4          2.2     4.1     6.5     9.1
  $T_{\rm c}^{N_{\rm p}=2}$ \[K\]    1.4          2.2     4.4     6.8     9.1
  $T_{\rm c}^{\rm expt.}$ \[K\]      1.20$^{b}$   $<$4
  --------------------------------- ------------ ------- ------- ------- -------

![(Color online) Our calculated $T_{\rm c}$ (solid squares and circles) for aluminum and fcc lithium under high pressures compared with the experimentally observed values. The open symbols represent the experiments: Ref.  (open inverted triangle), Ref.  (open squares), Ref.  (open circles), Ref. 
(open regular triangles), and Ref.  (open diamonds). []{data-label="fig:Tc-expt"}](Al_Li_fcc_pressure_Tc_compare_expts_ed_130930.jpg) Application to lithium under pressures {#sec:appl-Li} ====================================== The above formalism, which is based on the plasmon-pole approximation, is expected to be valid for a nearly uniform electron gas. Here we present the recent application to the elemental-metal superconductor Li. Lithium has been known to exhibit superconductivity with $T_{\rm c}$$\gtrsim$10 K under high pressure.[@Shimizu2002; @Struzhkin2002; @Deemyad2003; @Lin-Dunn] Early *ab initio* calculations[@Christensen-Novikov; @Tse; @Kusakabe2005; @Kasinathan; @Jishi] including that based on the SCDFT[@Profeta-pressure] reproduced the experimentally observed pressure dependence of $T_{\rm c}$ quantitatively. However, a later sophisticated calculation[@Bazhirov-pressure] using the Wannier interpolation technique[@Giustino-Wannier-elph] has shown that the numerically converged electron-phonon coupling coefficient is far smaller than the previously reported values. On the other hand, the plasmon effect is expected to be substantial because the density of conducting electrons $n$, which determines a typical plasmon frequency through $\propto\sqrt{n}$, is relatively small in Li due to the large radius of the ion and the small number of valence electrons. Therefore, it is interesting to see if the newly included plasmon contribution fills the gap between theory and experiment. It is also important to examine whether the present *ab initio* method works successfully for conventional superconductors whose $T_{\rm c}$s have already been well reproduced by the conventional SCDFT. For that reason, we also applied the present method to aluminum. ![image](Al_fccLi_14GPa_Kernel_ph_el_K1_ed_130926_2.jpg) Calculation with small $N_{\rm p}$ {#subsec:small-Np} ------------------------------ In Ref. 
, we performed calculations for fcc Li under pressures of 14, 20, 25, and 30GPa. All our calculations were carried out within the local-density approximation [@Ceperley-Alder; @PZ81] using [*ab initio*]{} plane-wave pseudopotential calculation codes [Quantum Espresso]{} [@Espresso; @Troullier-Martins] (see Ref.  for further details). The phonon contributions to the SCDFT exchange-correlation kernels ($\mathcal{K}^{\rm ph}$ and $\mathcal{Z}^{\rm ph}$) were calculated using the energy-averaged approximation [@GrossII], whereas the electron contributions ($\mathcal{K}^{\rm el,stat}$ and $\Delta\mathcal{K}^{\rm el}$) were calculated by Eq. (13) in Ref.  and Eq. (\[eq:Delta-kernel\]) to evaluate the plasmon effect. The SCDFT gap equation was solved with a random sampling scheme given in Ref. , with which the sampling error in the calculated $T_{\rm c}$ was not more than a few percent. In addition to the typical plasmon, an extra plasmon due to a band-structure effect has been discussed for Li[@Karlsson-Aryasetiawan; @Silkin2007] and Al[@Hoo-Hopfield; @Sturm-Oliveira1989]. We therefore carried out the calculation for $N_{\rm p}$$=$$1$ and $2$. In Table \[tab:Tc\], we summarize our calculated $T_{\rm c}$ values with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$ ($T_{\rm c}^{\rm ph}$), $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$ ($T_{\rm c}^{\rm stat}$), and $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$ ($T_{\rm c}^{N_{\rm p}=1}$ and $T_{\rm c}^{N_{\rm p}=2}$). The estimated electron-phonon coupling coefficient $\lambda$, the logarithmic average of phonon frequencies $\omega_{\rm ln}$, the density parameter $r_{s}$, and typical plasma frequency $\Omega_{\rm p}$ are also given. 
Instead of using the Wannier-interpolation technique, we carried out the Fermi surface integration for the input Eliashberg functions[@Migdal-Eliashberg] with broad smearing functions,[@Akashi-plasmon] and we obtained $\lambda$ consistent with the latest calculation [@Bazhirov-pressure], which is smaller than the earlier estimates [@Tse; @Kusakabe2005; @Profeta-pressure; @Kasinathan; @Christensen-Novikov; @Jishi]. The material and pressure dependence of the theoretical $T_{\rm c}$ follows that of $\lambda$. With $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$, $T_{\rm c}$ is estimated to be of the order of 10 K. While it is significantly suppressed by including $\mathcal{K}^{\rm el,stat}$, it is again increased by introducing $\Delta\mathcal{K}^{\rm el}$. We do not see a significant $N_{\rm p}$ dependence here, which is further examined in Sec. \[subsec:large-Np\]. The calculated values of $T_{\rm c}$ are compared with the experimental values in Fig. \[fig:Tc-expt\]. With the static approximation (red square), the general trend of the experimentally observed $T_{\rm c}$ is well reproduced: Aluminum exhibits the lowest $T_{\rm c}$, and $T_{\rm c}$ in Li increases as the pressure becomes higher. However, the calculated $T_{\rm c}$ for Li is significantly lower than the experimental one, which demonstrates that the conventional phonon theory is quantitatively insufficient to understand the origin of the high $T_{\rm c}$ in Li under high pressures. In the previous *ab initio* calculations, this insufficiency was not well recognized because either a too strong electron-phonon coupling or a too weak electron-electron Coulomb interaction was used. With the plasmon contribution (blue circle), the resulting $T_{\rm c}$ systematically increases compared with the static-level one and becomes quantitatively consistent with the experiment. 
For Al, in contrast, the accuracy is acceptable with both $T^{\rm stat}_{\rm c}$ and $T^{N_{\rm p}=2}_{\rm c}$, where the increase of $T_{\rm c}$ by $\Delta\mathcal{K}^{\rm el}$ is relatively small. These results indicate the following: First, the plasmon contribution is essential for the high $T_{\rm c}$ in fcc Li under pressure, and second, our scheme gives accurate estimates of $T_{\rm c}$ regardless of whether the dynamical effects are strong or weak. ![(Color online) (a) Decomposition of the nondiagonal exchange-correlation kernel $\mathcal{K}_{n{\bf k}n'{\bf k}'}$ at $T$$=$$0.01$K calculated for fcc lithium under pressure of 14GPa, averaged by equal-energy surfaces for $n'{\bf k}'$. (b) The corresponding gap function calculated with (darker) and without (lighter) $\Delta\mathcal{K}^{\rm el}$.[]{data-label="fig:kernel-gap"}](Li_fcc_14GPa_Kernel_ph_el_K1_gap_ed_130926_2.jpg) We discuss the origin of the enhancement of $T_{\rm c}$ by considering the dynamical effect in terms of the partially energy-averaged nondiagonal kernels $\mathcal{K}_{n{\bf k}}(\xi)$$\equiv$$\frac{1}{N(\xi)}\sum_{n'{\bf k}'}\mathcal{K}_{n{\bf k}n'{\bf k}'}\delta(\xi-\xi_{n'{\bf k}'})$. With $n{\bf k}$ chosen as a certain point near the Fermi energy, we plotted the averaged kernel for fcc Li under pressure of 14GPa and for Al with $N_{\rm p}$$=$$2$ in Fig. \[fig:kernel\]. The total kernel is decomposed into $\mathcal{K}^{\rm ph}$ (solid red line), $\mathcal{K}^{\rm el,stat}$ (dotted green line), and $\Delta\mathcal{K}^{\rm el}$ (dashed blue line). Generally, the total kernel becomes slightly negative within the energy scale of the phonons due to $\mathcal{K}^{\rm ph}$, whereas it becomes positive outside this energy scale mainly because of $\mathcal{K}^{\rm el,stat}$. The $\Delta\mathcal{K}^{\rm el}$ value is positive definite, but nearly zero on a low energy scale. As discussed in Sec. \[sec:theory\], the high-energy enhancement of repulsion increases $T_{\rm c}$ through the retardation effect. 
Remarkably, $\Delta\mathcal{K}^{\rm el}$ sets in from an energy far smaller than the typical plasmon frequency (see Table \[tab:Tc\]), and its absolute value is of the same order as that of $\mathcal{K}^{\rm el,stat}$. These features can also be seen in the case of the homogeneous electron gas studied by Takada [@Takada1978]. Regarding the difference between Li and Al \[panels (a) and (b)\], we see that the contribution of $\Delta\mathcal{K}^{\rm el}$ in Al is noticeably smaller than that in Li. Also, the energy scale of the structure of $\Delta\mathcal{K}^{\rm el}$ \[inset of (b)\], which correlates with $\Omega_{\rm p}$ (see Table \[tab:Tc\]), is small (large) for Li (Al). These differences explain why the effect of $\Delta\mathcal{K}^{\rm el}$ is more significant in Li. The enhanced retardation effect by the plasmon is seen more clearly from the gap functions plotted together with the nondiagonal kernel in Fig. \[fig:kernel-gap\]. Indeed, we observe a substantial enhancement of the negative gap value in the high-energy region, where the additional repulsion due to $\Delta\mathcal{K}^{\rm el}$ is strong. This clearly demonstrates that the plasmon mechanism indeed enhances $T_{\rm c}$, as described in Sec. \[sec:theory\]. We did not find a nonzero solution of the gap equation Eq. (\[eq:gap-eq\]) with only the electron-electron contributions ($\mathcal{K}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$) down to $T$$=$$0.01$ K, but we did find one with the electron-phonon and the static electron-electron contributions ($\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$). Hence, while the driving force of the superconducting transition in Li is the phonon effect, the plasmon effect is essential to realize the high $T_{\rm c}$. Finally, we also examined the effect of the energy dependence of the electronic density of states (DOS) on $\mathcal{Z}^{\rm ph}$. Since the form for $\mathcal{Z}^{\rm ph}$ used above \[Eq. (24) in Ref. 
\] only treats the constant component of the density of states, we also employed a form generalized for the nonconstant density of states \[Eqs. (40) in Ref. \]. The calculated $T_{\rm c}$ changes by approximately 2% with the nonconstant component, indicating that the constant-DOS approximation for the phonon contributions is valid for the present systems. ![(Color online) $N_{\rm p}$ dependence of the calculated gap function $\Delta_{n{\bf k}}$ at the Fermi level at $T$=0.01 K for (a) fcc lithium under pressure of 25GPa and (b) aluminum.[]{data-label="fig:gap-conv"}](Al_Li_fcc_25GPa_gap_conv_wrt_peak_nointerpol_ed_130928.jpg){width="7cm"} \[tab:Tc-2\] --------------------------------- --------- --------- --------- --------- ---------- Al 14GPa 20GPa 25GPa 30GPa $T_{\rm c}^{N_{\rm p}=1}$ \[K\] 1.4,1.5 2.2,2.8 4.1,5.2 6.5,7.4 9.1,11.1 $T_{\rm c}^{N_{\rm p}=2}$ \[K\] 1.4,1.6 2.2,3.1 4.4,5.5 6.8,8.0 9.1,10.7 $T_{\rm c}^{N_{\rm p}=5}$ \[K\] 1.6 3.8 6.5 9.2 12.0 $T_{\rm c}^{N_{\rm p}=8}$ \[K\] 1.6 3.8 6.5 9.2 12.0 --------------------------------- --------- --------- --------- --------- ---------- : Calculated $T_{\rm c}$ with different $N_{\rm p}$ using the procedure described in the text. For $N_{\rm p}$$=$1 and 2, the calculated values in Table \[tab:Tc\] are given together for comparison (left values). $N_{\rm p}$ dependence of $T_{\rm c}$ {#subsec:large-Np} ------------------------------------- Here we investigate the convergence of $T_{\rm c}$ with respect to the number of plasmon peaks $N_{\rm p}$. To address this problem, on top of the procedure described in Secs. \[sec:theory\] and \[subsec:small-Np\], we employed a slightly different algorithm. The difference is as follows. 
First, in the previous procedure, the plasmon frequencies $\omega_{i;n{\bf k}n'{\bf k}'}$ and coupling coefficients $a_{i;n{\bf k}n'{\bf k}'}$ for a set of sampling points were calculated by linear interpolation using the *ab initio* data on the equal grid, where the interpolation was carried out independently for each $i$-th largest branch. Since such an algorithm becomes unstable for damped peaks, we did not perform this interpolation here, but instead determined $\omega_{i;n{\bf k}n'{\bf k}'}$ and $a_{i;n{\bf k}n'{\bf k}'}$ simply from the *ab initio* values on the neighboring grid point. Second, the weight $p_{j}$ entering the variance and the peak ordering (see Sec. \[subsec:plasmon-pole\]) was set to unity in the previous procedure, but here we adopted $p_{j}= \nu_{j}^{-(1/3)}$: In an analytic $T_{\rm c}$ formula for the three-dimensional electron gas derived by Takada \[Eq. (2.28) in Ref. \], the coefficient $\langle F \rangle$ in the exponent depends on the typical plasmon frequency as $\Omega_{\rm p}^{-(1/3)}$, so we determined $p_{j}$ accordingly. We have indeed found that this setting of $p_{j}$ accelerates the convergence of the calculated gap function with respect to $N_{\rm p}$, as demonstrated by Fig. \[fig:gap-conv\].[@comment-accelerate] Carrying out the above procedure,[@comment-recalc] we calculated $T_{\rm c}$ for Al and Li under pressure. The calculated results for $N_{\rm p}$$=$1, 2, 5 and 8 are summarized in Table \[tab:Tc-2\] together with those of Sec. \[subsec:small-Np\]. For $N_{\rm p}$$=$1 and 2, the previous and present procedures give slightly different values of $T_{\rm c}$, which originates mainly from the difference in the interpolation of $\omega_{i;n{\bf k}n'{\bf k}'}$ and $a_{i;n{\bf k}n'{\bf k}'}$. Within the present results, the calculated $T_{\rm c}$ for Al shows little $N_{\rm p}$ dependence, whereas $N_{\rm p}$ has to be at least 5 for Li to achieve convergence within 0.1 K. 
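The grid-weight construction, including the $p_{j}=\nu_{j}^{-1/3}$ choice adopted in this subsection, can be sketched as follows (a minimal illustration; the function names are ours):

```python
import numpy as np

def frequency_weights(nu, p=None):
    """Weights for an arbitrary frequency grid: zero at the two
    endpoints, (nu_{j+1} - nu_{j-1}) * p_j at interior points,
    normalized so that the weights sum to one."""
    if p is None:
        p = np.ones_like(nu)  # p_j = 1, as in the earlier sections
    dw = np.zeros_like(nu)
    dw[1:-1] = (nu[2:] - nu[:-2]) * p[1:-1]
    return dw / dw.sum()

def modified_weights(nu):
    """The p_j = nu_j**(-1/3) choice emphasizing low frequencies,
    motivated by the Omega_p**(-1/3) dependence in Takada's formula."""
    return frequency_weights(nu, nu ** (-1.0 / 3.0))
```

On a uniform grid the interior weights are then simply proportional to $p_{j}$, so the modified weighting biases the fit toward the low-frequency part of $W({\rm i}\nu)$.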
This indicates that the damped dynamical structure of the Coulomb interaction, ignored with $N_{\rm p}$$=$1 and 2, also has a nonnegligible effect. We note that the general numerical trend observed in the results in Sec. \[subsec:small-Np\] also holds for the calculated values with $N_{\rm p}\geq 5$. Summary and Conclusion {#sec:summary} ====================== We reviewed the recent progress by the authors in the SCDFT to address non-phonon superconducting mechanisms.[@Akashi-plasmon] An exchange-correlation kernel entering the SCDFT gap equation has been formulated within the dynamical RPA so that plasmons in solids are taken into account. Through the retardation effect, plasmons can induce superconductivity, which has been studied for more than 35 years as the plasmon-induced pairing mechanism. A practical method to calculate $T_{\rm c}$ considering the plasmon effect has been implemented and applied to fcc Li. We have shown that the plasmon effect considerably raises $T_{\rm c}$ by cooperating with the conventional phonon-mediated pairing interaction, which is essential to understand the high $T_{\rm c}$ in Li under high pressures. The recent application suggests a general possibility that plasmons have a substantial effect on $T_{\rm c}$, even in cases where they alone do not induce a superconducting transition. It is then interesting to apply the present formalism to “other high-temperature superconductors”[@Pickett-review-other] such as layered nitrides, fullerides, and the bismuth perovskite. 
Effects of the electron-electron and electron-phonon interactions in these systems have recently been examined from various viewpoints, particularly with *ab initio* calculations.[@Meregalli-Savrasov-BKBO; @Heid-Bohnen2005; @Yin-Kotliar-PRX; @Antropov-Gunnarsson-C60; @Janssen-Cohen-C60; @Akashi-MNCl; @Akashi-fullerene; @Nomura-C60-cRPA] Since they have a nodeless superconducting gap, plasmons may play a crucial role in realizing their high $T_{\rm c}$.[@Bill2002-2003] More generally, there can be other situations: (i) the phonon effect does not dominate over the static Coulomb repulsion, but the plasmon effect does (i.e., a superconducting solution is not found with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$, but is found with $\mathcal{K}$$=$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$), and (ii) neither of the two effects does so independently, but their cooperation does (i.e., a superconducting solution is found only with $\mathcal{K}$$=$$\mathcal{K}^{\rm ph}$$+$$\mathcal{K}^{\rm el,stat}$$+$$\Delta\mathcal{K}^{\rm el}$). Searching for superconducting systems of such kinds is another interesting future subject, for which our scheme provides a powerful tool based on the density functional theory. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank Kazuma Nakamura and Yoshiro Nohara for providing subroutines for calculating the RPA dielectric functions. This work was supported by the Funding Program for World-Leading Innovative R & D on Science and Technology (FIRST Program) on “Quantum Science on Strong Correlation,” JST-PRESTO, Grants-in-Aid for Scientific Research (No. 23340095), and the Next Generation Super Computing Project and Nanoscience Program from MEXT, Japan. [999]{} J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. **108**, 1175 (1957). J. G. Bednorz and K. A. Müller, Z. Phys. B **64**, 189 (1986). A. B. Migdal, Sov. Phys. JETP **7**, 996 (1958); G. M. Eliashberg, Sov. Phys. 
JETP **11**, 696 (1960); D. J. Scalapino, in [*Superconductivity*]{} edited by R. D. Parks, (Marcel Dekker, New York, 1969) VOLUME 1; J. R. Schrieffer,[*Theory of superconductivity; Revised Printing*]{}, (Westview Press, Colorado, 1971); P. B. Allen and B. Mitrović, in *Solid State Physics*, edited by H. Ehrenreich, F. Seitz, and D. Turnbull (Academic, New York, 1982), Vol. 37, p. 1. W. Kohn and L. J. Sham, Phys. Rev. **140**, A1133 (1965). S. Baroni, S. deGironcoli, A. Dal Corso, and P. Giannozzi, Rev. Mod. Phys. **73**, 515(2001). K. Kunc and R. M. Martin, in *Ab initio Calculation of Phonon Spectra*, edited by J. T. Devreese, V. E. van Doren, and P. E. van Camp (Plenum, New York, 1983), p. 65. D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. **45**, 566 (1980). J. P. Perdew and A. Zunger, Phys. Rev. B **23**, 5048 (1981). S. Y. Savrasov and D. Y. Savrasov, Phys. Rev. B **54**, 16487 (1996). H. J. Choi, D. Roundy, H. Sun, M. L. Cohen, and S. G. Louie, Nature (London) **418**, 758 (2002); Phys. Rev. B **66**, 020513(R) (2002). W. L. McMillan, Phys. Rev. [**167**]{}, 331 (1968). P. B. Allen and R. C. Dynes, Phys. Rev. B [**12**]{}, 905 (1975). P. Morel and P. W. Anderson, Phys. Rev. **125**, 1263 (1962); N. N. Bogoliubov, V. V. Tolmachev, and D. V. Shirkov, [*A New Method in the Theory of Superconductivity*]{} (1958) (translated from Russian: Consultants Bureau, Inc., New York, 1959). L. N. Oliveira, E. K. U. Gross, and W. Kohn, Phys. Rev. Lett. **60**, 2430 (1988). T. Kreibich and E. K. U. Gross, Phys. Rev. Lett. **86**, 2984 (2001). M. Lüders, M. A. L. Marques, N. N. Lathiotakis, A. Floris, G. Profeta, L. Fast, A. Continenza, S. Massidda, and E. K. U. Gross, Phys. Rev. B **72**, 024545 (2005). M. A. L. Marques, M. Lüders, N. N. Lathiotakis, G. Profeta, A. Floris, L. Fast, A. Continenza, E. K. U. Gross, and S. Massidda, Phys. Rev. B **72**, 024546 (2005). A. Floris, G. Profeta, N. N. Lathiotakis, M. Lüders, M. A. L. Marques, C. Franchini, E. K. U. Gross, A. 
Continenza, and S. Massidda, Phys. Rev. Lett. **94**, 037004 (2005). A. Sanna, G. Profeta, A. Floris, A. Marini, E. K. U. Gross, and S. Massidda, Phys. Rev. B **75**, 020511(R) (2007). C. Bersier, A. Floris, A. Sanna, G. Profeta, A. Continenza, E. K. U. Gross, and S. Massidda, Phys. Rev. B **79**, 104503 (2009). R. Akashi, K. Nakamura, R. Arita, and M. Imada, Phys. Rev. B **86**, 054513 (2012). R. Akashi and R. Arita, Phys. Rev. B **88**, 054510 (2013). D. J. Scalapino, Rev. Mod. Phys. **84**, 1383 (2012). W. Kohn and J. M. Luttinger, Phys. Rev. Lett. **15**, 524 (1965). V. Radhakrishnan, Phys. Lett. **16**, 247 (1965). H. Fröhlich, J. Phys. C: Solid State Phys. **1**, 544 (1968). Y. Takada, J. Phys. Soc. Jpn. **45**, 786 (1978). H. Rietschel and L. J. Sham, Phys. Rev. B **28**, 5100 (1983). W. A. Little, Phys. Rev. **134**, A1416 (1967). C. S. Koonce, M. L. Cohen, J. F. Schooley, W. R. Hosler, and E. R. Pfeiffer, Phys. Rev. **163**, 380 (1967). Y. Takada, J. Phys. Soc. Jpn. **49**, 1267 (1980). J. W. Garland, Jr., Phys. Rev. Lett. **11**, 111 (1963). D. Pines, Can. J. Phys. **34**, 1379 (1956). J. Ihm, M. L. Cohen, and S. F. Tuan, Phys. Rev. B **23**, 3258 (1981). V. L. Ginzburg, Sov. Phys. Usp. **13**, 335 (1970). D. Allender, J. Bray, and J. Bardeen, Phys. Rev. B **7**, 1020 (1973). V. Z. Kresin, Phys. Rev. B **35**, 8716 (1987). A. Bill, H. Morawitz, and V. Z. Kresin, Phys. Rev. B **66**, 100501(R) (2002); Phys. Rev. B **68**, 144519 (2003). S. Yamanaka, K. Hotehama, and H. Kawaji, Nature (London) **392**, 580 (1998). Y. Taguchi, A. Kitora, and Y. Iwasa, Phys. Rev. Lett. **97**, 107001 (2006). K. Taniguchi, A. Matsumoto, H. Shimotani, and H. Takagi, Appl. Phys. Lett. **101**, 042603 (2012). J. T. Ye, Y. J. Zhang, R. Akashi, M. S. Bahramy, R. Arita, and Y. Iwasa, Science **338**, 1193 (2012). R. Akashi and R. Arita, Phys. Rev. Lett. **111**, 057006 (2013). S. Massidda, F. Bernardini, C. Bersier, A. Continenza, P. Cudazzo, A. Floris, H. Glawe, M. Monni, S. 
--- abstract: 'Isotropic Heisenberg exchange naturally appears as the main interaction in magnetism, usually favouring long-range spin-ordered phases. The anisotropic Dzyaloshinskii-Moriya interaction arises from relativistic corrections and is *a priori* much weaker, even though it may compete sufficiently with the isotropic one to yield new spin textures. Here, we challenge this well-established paradigm, and propose to explore a Heisenberg-exchange-free magnetic world. There, the Dzyaloshinskii-Moriya interaction induces magnetic frustration in two dimensions, from which the competition with an external magnetic field results in a new mechanism producing skyrmions of nanoscale size. The isolated nanoskyrmion can already be stabilized in a few-atom cluster, and may then be used as a LEGO${\textregistered}$ block to build a large magnetic mosaic. The realization of such topological spin nanotextures in $sp$- and $p$-electron compounds or in ultracold atomic gases would open a new route toward robust and compact magnetic memories.' author: - 'E. A. Stepanov$^{1,2}$, S. A. Nikolaev$^{2}$, C. Dutreix$^{3}$, M. I. Katsnelson$^{1,2}$, V. V. Mazurenko$^{2}$' title: 'Heisenberg-exchange-free nanoskyrmion mosaic' --- The concept of spin was introduced by G. Uhlenbeck and S. Goudsmit in the 1920s in order to explain the emission spectrum of the hydrogen atom obtained by A. Sommerfeld [@Int2]. W. Heitler and F. London subsequently realized that the covalent bond of the hydrogen molecule involves two electrons of opposite spins, as a result of the fermionic exchange [@Int4]. This finding inspired W. Heisenberg to give an empirical description of ferromagnetism [@Int5; @Int6], before P. Dirac finally proposed a Hamiltonian description in terms of scalar products of spin operators [@Int7]. These pioneering works focused on the ferromagnetic exchange interaction that is realized through the direct overlap of two neighbouring electronic orbitals. Nonetheless, P.
Anderson understood that the exchange interaction in transition metal oxides could also rely on an indirect antiferromagnetic coupling via intermediate orbitals [@PhysRev.115.2]. This so-called superexchange interaction, however, could not explain the weak ferromagnetism of some antiferromagnets. The latter has been found to arise from anisotropic interactions of much weaker strength, as addressed by I. Dzyaloshinskii and T. Moriya [@DZYALOSHINSKY1958241; @Moriya]. The competition between the isotropic exchange and anisotropic Dzyaloshinskii-Moriya interactions (DMI) leads to the formation of topologically protected magnetic phases, such as skyrmions [@NagaosaReview]. Nevertheless, the isotropic exchange mainly rules the competition, which only allows the formation of large magnetic structures, more difficult to stabilize and manipulate in experiments [@NagaosaReview; @PhysRevX.4.031045]. Finding a new route toward more compact robust spin textures then appears as a natural challenge. As a promising direction, we investigate the existence of two-dimensional skyrmions in the absence of isotropic Heisenberg exchange. Indeed, recent theoretical works have revealed that antiferromagnetic superexchange may be compensated by strong ferromagnetic direct exchange interactions at the surfaces of $sp$- and $p$-electron nanostructures [@silicon; @graphene], whose experimental isolation has recently been achieved [@PbSn; @PhysRevLett.98.126401; @SurfMagn; @kashtiban2014atomically]. Moreover, Floquet engineering in such compounds also offers the possibility to dynamically switch off the isotropic Heisenberg exchange interaction under high-frequency-light irradiation, a unique situation that could not be met in transition metal oxides in equilibrium [@PhysRevLett.115.075301; @Control1; @Control2]. 
In particular, rapidly driving the strongly correlated electrons may be used to tune the magnetic interactions, which can be described in terms of spin operators $\hat{\bf S}_i$ by the following Hamiltonian $$\begin{aligned} H_{\rm spin} = -\sum_{{\ensuremath{\left\langle ij \right\rangle}}} J_{ij} (A) \,\hat{\bf S}_{i}\,\hat{\bf S}_{j} + \sum_{{\ensuremath{\left\langle ij \right\rangle}}}{\bf D}_{ij} (A)\,[\hat{\bf S}_{i}\times\hat{\bf S}_{j}], \label{Hspin}\end{aligned}$$ where the strengths of the isotropic Heisenberg exchange $J_{ij}(A)$ and the anisotropic DMI ${\bf D}_{ij}(A)$ now depend on the light amplitude $A$. The summations are assumed to run over all nearest-neighbour sites $i$ and $j$. The isotropic Heisenberg exchange term describes a competition between ferromagnetic direct exchange and antiferromagnetic kinetic exchange [@PhysRev.115.2]. Importantly, it may be switched off dynamically by varying the intensity of the high-frequency light, while the anisotropic DMI remains non-zero [@Control2]. ![Stable nanoskyrmion mosaic spelling the “DMI” abbreviation, resulting from the Monte Carlo simulation of the Heisenberg-exchange-free model on the non-regular square lattice with $B_{z} = 1.2$. Arrows and color depict the in- and out-of-plane spin projections, respectively.[]{data-label="Fig1"}](Fig1.pdf){width="0.67\linewidth"} The study of Heisenberg-exchange-free magnetism may also be achieved in other classes of systems, such as optical lattices of ultracold atomic gases. Indeed, cold atoms have enabled the observation and control of superexchange interactions, which could be reversed between ferromagnetic and antiferromagnetic [@Trotzky], as well as strong DMI [@Gong], following the realization of the spin-orbit coupling in bosonic and fermionic gases [@SO_Lin; @SO_Wang]. Here, we show that such control of the microscopic magnetic interactions offers an unprecedented opportunity to observe and manipulate nanoscale skyrmions.
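For concreteness, the classical energy of such a spin Hamiltonian can be evaluated directly on a small lattice. Below is a minimal numerical sketch (Python with numpy), assuming classical unit spins on a square lattice with open boundaries and the in-plane DMI geometry ${\bf D}_{ij}\propto[{\bf e}_{z}\times{\bf e}_{ij}]$ used throughout this work; the lattice size, coupling values, and function name are our illustrative choices, not taken from the actual simulation code:

```python
import numpy as np

def classical_energy(spins, J, D, Bz):
    """Classical energy E = -J sum S_i.S_j + sum D_ij.[S_i x S_j] - Bz sum S_i^z
    on an L x L square lattice with open boundaries.  `spins` has shape
    (L, L, 3) and holds unit vectors; the DMI vector on each bond is
    D_ij = D (e_z x e_ij), i.e. in-plane and perpendicular to the bond."""
    L = spins.shape[0]
    ez = np.array([0.0, 0.0, 1.0])
    E = 0.0
    for i in range(L):
        for j in range(L):
            Si = spins[i, j]
            # visit each nearest-neighbour bond exactly once (right and down)
            for di, dj, eij in ((0, 1, np.array([1.0, 0.0, 0.0])),
                                (1, 0, np.array([0.0, 1.0, 0.0]))):
                if i + di < L and j + dj < L:
                    Sj = spins[i + di, j + dj]
                    Dij = D * np.cross(ez, eij)     # in-plane DMI vector
                    E += -J * (Si @ Sj) + Dij @ np.cross(Si, Sj)
            E += -Bz * Si[2]                        # Zeeman term
    return E

# sanity check: for the fully polarized state the DMI term vanishes,
# since the cross product of parallel spins is zero
L = 4
up = np.zeros((L, L, 3)); up[..., 2] = 1.0
E_fm = classical_energy(up, J=1.0, D=0.5, Bz=0.2)   # -J*(bonds) - Bz*L^2
E_hef = classical_energy(up, J=0.0, D=0.5, Bz=0.2)  # exchange-free limit
```

Setting `J=0` realizes the Heisenberg-exchange-free limit considered in the remainder of this work.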
Heisenberg-exchange-free nanoskyrmions actually arise from the competition between anisotropic DMI and a constant magnetic field. The latter was essentially known to stabilize the spin textures [@PhysRevX.4.031045], whereas here it is part of a substantially different and unexplored mechanism responsible for nanoskyrmions. Fig. \[Fig1\] immediately highlights that an arbitrary system of few-atom skyrmions can be stabilized and controlled on a non-regular lattice with open boundary conditions, which was not possible at all in the presence of isotropic Heisenberg exchange. [*Heisenberg-exchange-free Hamiltonian*]{} — Motivated by the recent predictions and experiments discussed above, we consider the following spin Hamiltonian $$\begin{aligned} &\hat H_{\rm Hef} = \sum_{{\ensuremath{\left\langle ij \right\rangle}}}{\bf D}_{ij}\,[\hat{\bf S}_{i}\times\hat{\bf S}_{j}] - \sum_{i}{\bf B}\hat{\bf S}_{i}, \label{DMI}\end{aligned}$$ where the magnetic field is perpendicular to the two-dimensional system and ${\bf B}=~(0,0,B_{z})$. The latter tends to align the spins in the $z$ direction, while the DMI favours their orthogonal orientations. At the quantum level this non-trivial competition provides a fundamental resource in quantum information processing [@QuantInf]. Here, we are interested in the semi-classical description of Heisenberg-exchange-free magnetism. ![Magnetic frustration in elementary DMI clusters of the triangular ([**a**]{}) and square ([**b**]{}) lattices. Curving arrows denote the clockwise direction of the bonds in each cluster. Big gray arrows correspond to the in-plane DMI vectors. Black arrows in circles denote the in-plane directions of the spin moments. Blue dashed and red dotted lines indicate the bonds with minimal and zero DMI energy, respectively.
([**c**]{}) and ([**d**]{}) illustrate examples of the spin configurations corresponding to the classical ground state of the DMI Hamiltonian.[]{data-label="Fig2"}](Fig2.pdf){width="1\linewidth"} [*DMI-induced frustration*]{} — In the case of two-dimensional materials with pure DMI between nearest neighbours, magnetic frustration is an intrinsic property of the system. To show this, let us start with the elementary plaquettes of the triangular and square lattices without external magnetic field (see Fig. \[Fig2\]). Keeping in mind real two-dimensional materials with the $C_{nv}$ symmetry [@graphene; @silicon], we consider the in-plane orientation of the DMI vector perpendicular to the corresponding bond. Taking three spins of a single square plaquette as shown in Fig. \[Fig2\] [**b**]{}, one can minimize their energy while discarding their coupling with the fourth spin. Then the orientation of the remaining spin cannot be uniquely defined: the configuration has the same energy regardless of whether this spin points “up” or “down”, which indicates frustration. Thus, Fig. \[Fig2\] [**d**]{} gives an example of the classical ground state of the square plaquette with the energy ${\rm E}_{\square} = - \sqrt{2}\,|\mathbf{D}_{ij}|\,S^2$ and magnetization ${\rm M}^z_{\square} = S/2$ (per spin). In turn, frustration of the triangular plaquette is expressed in Fig.
\[Fig2\] [**a**]{}, while its magnetic ground state is characterized by the following spin configuration: $\mathbf{S}_1 = (0, -\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}})\,S$, $\mathbf{S}_2 = (\frac{\sqrt{3}}{2\sqrt{2}}, \frac{1}{2\sqrt{2}}, \frac{1}{\sqrt{2}})\,S$ and $\mathbf{S}_3 = (-\frac{\sqrt{3}}{2\sqrt{2}}, \frac{1}{2\sqrt{2}}, \frac{1}{\sqrt{2}})\,S$, shown in Fig. \[Fig2\] [**c**]{}. One can see that the in-plane spin components form a $120^{\circ}$ Néel state, similar to the isotropic Heisenberg model on the triangular lattice [@PhysRev.115.2]. The corresponding energy and magnetization are ${\rm E}_{\triangle} =- \sqrt{3}\,|\mathbf{D}_{ij}|\,S^2$ and ${\rm M}^z_{\triangle} = S/\sqrt{2}$, respectively. Importantly, the ground state of the triangular and square plaquettes is degenerate due to the $C_{nv}$ and in-plane mirror symmetries. For instance, there is another state with the same energy ${\rm E'} = {\rm E}$, but opposite magnetization ${\rm M}'^z = - {\rm M}^z$. Therefore, such elementary magnetic units can be considered as building blocks to realize [*spin spirals*]{} on lattices with periodic boundary conditions and zero magnetic field. The ensuing results are obtained via Monte Carlo simulations, as detailed in the Supplemental Material [@SM]. ![Fragments of the spin textures and spin structure factors obtained with the Heisenberg-exchange-free model on the square $20\times20$ ([**a**]{}) and triangular $21\times21$ ([**b**]{}) lattices.
The values of the magnetic fields in these simulations were chosen as $B_{z} = 3.0$ and $B_{z} = 3.2$ for the triangular and square lattices, respectively. The calculated skyrmion numbers for the triangular (blue triangles) and square (red squares) lattices ([**c**]{}). The magnetic field is in units of DMI. The temperature is equal to ${\rm T}=0.01\,|{\rm \bf D}|$.[]{data-label="Fig3"}](Fig3.pdf){width="1\linewidth"} [*Nanoskyrmionic state*]{} — At finite magnetic field the spiral state can be transformed into a skyrmionic spin texture. Fig. \[Fig3\] gives some examples obtained from the Monte Carlo calculations for the triangular and square lattices. Remarkably, the radius of the obtained nanoskyrmions does not exceed a few lattice constants. The calculated spin structure factors $\chi_{\perp}({\mathbf{q}})$ and $\chi_{\parallel}({\mathbf{q}})$ [@SM] revealed a superposition of two (square lattice) or three (triangular lattice) spin spirals with $\pm \mathbf{q}$, which is a first indication of the skyrmionic state (see Fig. \[Fig3\] [**a**]{}, [**b**]{}). For further confirmation one can calculate the skyrmion number, which is related to the topological charge. In the discrete case, the result is extremely sensitive to the number of sites comprising a single skyrmion and to the way the spherical surface is approximated [@Rosales]. Here we used the approach of Berg and Lüscher [@Berg], which is based on the definition of the topological charge as the sum of the areas of nonoverlapping spherical triangles [@Blugel2016; @SM]. According to our simulations, the topological charge of each object shown in Fig. \[Fig3\] [**a**]{} and Fig. \[Fig3\] [**b**]{} is equal to unity. The square and triangular lattice systems exhibit completely different dependences of the average skyrmion number on the magnetic field. As shown in Fig.
\[Fig3\] [**c**]{}, the skyrmionic phase for the square lattice, $2.7\leq B_{z} < 4.2$, is much narrower than that of the triangular lattice, $1.2 \leq B_{z} < 6$. Moreover, in the case of the square lattice we observe strong finite-size effects leading to an unstable value of the average topological charge in the region $0 < B_{z} < 2.7$. Since the value of the magnetic field is given in units of DMI, the topological spin structures in the considered model require weak magnetic fields, which is very appealing for modern experiments. We would like to stress that the underlying mechanism responsible for the skyrmions presented in this work is intrinsically different from those presented in other studies. Generally, skyrmions can be realized by means of different mechanisms [@NagaosaReview]. For instance, in noncentrosymmetric systems these spin textures arise from the competition between isotropic and anisotropic exchange interactions. On the other hand, magnetic frustration induced by competing isotropic exchange interactions can also lead to a skyrmion crystal state, even in the absence of DMI and anisotropy. Moreover, following the results of [@Blugel], nanoskyrmions can also be stabilized by a four-spin interaction. Nevertheless, nanoskyrmions have never been predicted or observed as the result of the interplay between DMI and a constant magnetic field. ![Catalogue of the DMI nanoskyrmion species stabilized on the small square (top figures) and triangular (bottom figures) clusters with open boundary conditions. The corresponding magnetic fields are (from top to bottom): on-site $B_{z}=3.0;~3.0$, off-site $B_{z}=1.2;~2.4$, bond-centred $B_{z}=2.5;~3.0$. []{data-label="Fig4"}](Fig4.pdf){width="0.64\linewidth"} The full catalogue of nanoskyrmions obtained in this study is presented in Fig. \[Fig4\]. As one can see, they can be classified with respect to the position of the skyrmionic center on the discrete lattice.
Thus, the on-site, off-site (center of an elementary triangle or square) and bond-centred configurations have been revealed. Importantly, these structures can be stabilized not only on the lattice, but also on isolated plaquettes of 12–37 sites with open boundary conditions. It is worth mentioning that not all sites of the isolated plaquettes form the skyrmion. In some cases, the inclusion of additional spins describing the environment of the skyrmion is necessary for the stabilization of the topological spin structure. As we discuss below, this can be used to construct a magnetic domain for data storage. [*Nanoskyrmionic mosaic*]{} — By defining the local rules (interaction between nearest neighbours) and external parameters (such as the lattice size and magnetic field), one can obtain magnetic structures with different patterns using the Monte Carlo approach. As follows from Fig. \[Fig5\], pure off-site square skyrmion structures are realized on $6\times14$ lattices with open boundary conditions at $B_{z} = 3.0$. Increasing the lattice size along the $x$ direction injects bond-centred skyrmions into the system. In turn, increasing the magnetic field leads to the compression of the nanoskyrmions and reduces their density. At $B_{z}=3.6$ we observed the most compact on-site square skyrmion. Thus, the particular pattern of the resulting nanoskyrmion mosaic respects the minimal size of individual nanoskyrmions and the tendency of the system to form close-packed structures that minimize the energy. ![(Top panel) Evolution of the nanoskyrmions on the $6\times14$ lattice with respect to the magnetic field.
(Bottom panel) Examples of nanoskyrmion mosaics obtained from the Monte Carlo simulations of the DMI model on the square lattices with open boundary conditions at the magnetic field $B_{z}=3$.[]{data-label="Fig5"}](Fig5.pdf){width="1\linewidth"} The solution of the Heisenberg-exchange-free Hamiltonian for the magnetic fields corresponding to the skyrmionic phase can be related to the famous NP-hard geometrical problem of bin packing [@Bin]. Let us imagine that there is a set of fixed-size and fixed-energy objects (nanoskyrmions) that should be packed on a lattice of $n \times m$ size in the most compact way. As one can see, such objects are weakly coupled to each other. Indeed, the contact area of different skyrmions is nearly ferromagnetic, so the binding energy between two skyrmions is very small, since it is related to DMI. In addition, the energy difference between nanoskyrmions of different types is very small, as can be seen from Fig. \[Fig5\]. Indeed, the energy difference between the three off-site and bond-centred skyrmions realized on the $6\times14$ plaquette is about $0.2\,B_{z}$. Here, we sample spin orientations within the Monte Carlo simulations and do not manipulate the nanoskyrmions directly. Thus, the stabilization of periodic and close-packed nanoskyrmionic structures on a square or triangular lattice with a linear size of more than $30$ sites becomes a challenging task. Therefore, the problem can be handed over to a LEGO$\textregistered$-type constructor, where one builds the mosaic pattern using unit nanoskyrmionic bricks. [*Size limit*]{} — For practical applications, it is of crucial importance to have skyrmions with a size in the nanometer range, for instance to achieve a high-density memory [@Lin]. Previously, the record density of skyrmions was reported for the Fe/Ir(111) system [@Blugel], for which the size of the unit cell of the skyrmion lattice, stabilized by a four-spin interaction, was found to be 1 nm $\times$ 1 nm.
In our case the diameter of the triangular on-site skyrmion is found to be $4$ lattice constants (Fig. \[Fig4\]). Thus, for prototype $sp$-electron materials the diameter is equal to 1.02 nm (semifluorinated graphene [@graphene]) and 2.64 nm (Si(111):{Sn,Pb} [@silicon]). On the basis of the obtained results we predict the smallest diameter of $2$ lattice constants for the on-site square skyrmion. Our simulations for finite-sized systems with open boundary conditions show that such a nanoskyrmion can exist on the $5\times5$ cluster (Fig. \[Fig4\], top right plaquette), which is smaller than that previously reported in [@Keesman]. We believe that this is the ultimate limit of the skyrmion size on the square lattice. [*Micromagnetic model*]{} — The analysis of the isolated skyrmion can also be carried out at the level of the micromagnetic model, treating the magnetization as a continuous vector field [@SM]. Contrary to the case of nonzero exchange interaction, the Heisenberg-exchange-free Hamiltonian allows one to obtain an analytical solution for the skyrmionic profile. In the particular case of the square lattice, the radius of the isolated skyrmion is equal to $R=4Da/B$, where $a$ is the lattice constant. Moreover, the skyrmionic solution is stable even in the presence of a small exchange interaction $J\ll{}D$ [@SM]. It is worth mentioning that the obtained result for the radius of the Heisenberg-exchange-free skyrmion is essentially different from the case of competing exchange interaction and DMI, where the radius is proportional to the ratio $J/D$. Although, in the absence of DMI, both the exchange interaction and the magnetic field favour the collinear orientation of spins in the direction perpendicular to the surface, the presence of DMI changes the picture drastically.
When the spins are tilted by the anisotropic interaction, the magnetic field still tends to align them along the $z$ direction, while the exchange interaction tries to keep two neighbouring spins parallel without any relation to the axes. Therefore, a stronger magnetic field decreases the radius of the skyrmion, while a larger exchange interaction broadens the structure [@ref]. ![([**a**]{}) Two possible states of the nanoskyrmionic bit. ([**b**]{}) The 24-bit nanoskyrmion memory block encoding the “DMI” abbreviation, as obtained from the Monte Carlo simulations with $B_{z} =1.2$.[]{data-label="Fig6"}](Fig6.pdf){width="0.9\linewidth"} [*Memory prototype*]{} — Having analyzed individual nanoskyrmions, we are now in a position to discuss technological applications of the nanoskyrmion mosaic. Fig. \[Fig6\] [**a**]{} visualizes a spin structure consisting of elementary blocks of two types that we associate with the two possible states of a single bit, “1” and “0”. According to our Monte Carlo simulations, a side stacking of the off-site square plaquettes visualized in Fig. \[Fig4\] protects the skyrmionic state in each plaquette. Thus we have a stable building block for the design of nanoscale memory or of nanostructures such as the one presented in Fig. \[Fig1\]. Similar to the experimentally realized vacancy-based memory [@Memory], a specific filling of the lattice can be reached by means of the scanning tunnelling microscopy (STM) technique. In turn, the spin-polarized regime [@STM] of STM can be used to read the nanoskyrmionic state. The density of the memory prototype we discuss can be estimated as 1/9 bits $a^{-2}$ ($a$ is the lattice constant), which is of the same order of magnitude as that obtained for the vacancy-based memory. [*Conclusion*]{} — We have introduced a new class of two-dimensional systems that are described by the Heisenberg-exchange-free Hamiltonian.
The frustration of DMI on the triangular and square lattices leads to a non-trivial nanoskyrmionic mosaic state that can be manipulated by varying the strength of the constant magnetic field and the size of the sample. Importantly, such a state appears as a result of the competition between DMI and the constant magnetic field. This mechanism is unique and is reported for the first time. Being stable on non-regular lattices with open boundary conditions, the nanoskyrmionic phase is shown to be promising for technological applications as a memory component. We also present a catalogue of nanoskyrmionic species that can be stabilized already on tiny plaquettes of a few lattice sites. Characteristics of the isolated skyrmion were studied both numerically, within the Monte Carlo simulations, and analytically, in the framework of the micromagnetic model. We thank Frederic Mila, Alexander Tsirlin and Alexey Kimel for fruitful discussions. The work of E.A.S. and V.V.M. was supported by the Russian Science Foundation, Grant 17-72-20041. The work of M.I.K. was supported by NWO via the Spinoza Prize and by ERC Advanced Grant 338957 FEMTO/NANO. Also, the work was partially supported by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO). [42]{} Uhlenbeck, G. E., Goudsmit, S. [*Die Naturwissenschaften*]{} [**13**]{}, 953 (1925). Uhlenbeck, G. E., Goudsmit, S. [*Nature*]{} [**117**]{}, 264 (1926). Goudsmit, S., Uhlenbeck, G. E. [*Physica*]{} [**6**]{}, 273 (1926). Heitler, W., London, F. [*Zeitschrift für Physik*]{} [**44**]{}, 455 (1927). Heisenberg, W. [*Zeitschrift für Physik*]{} [**43**]{}, 172 (1927). Heisenberg, W. [*Zeitschrift für Physik*]{} [**49**]{}, 619 (1928). Dirac, P. A. M. [*Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences*]{} (The Royal Society, 1929). Anderson, P. W. [*Phys.
Rev.*]{} [**115**]{}, 2 (1959). Dzyaloshinsky, I. [*Journal of Physics and Chemistry of Solids*]{} [**4**]{}, 241 (1958). Moriya, T. [*Phys. Rev.*]{} [**120**]{}, 91 (1960). Skyrme, T. [*Nuclear Physics*]{} [**31**]{}, 556 (1962). Bogdanov, A., Yablonskii, D. [*Sov. Phys. JETP*]{} [**68**]{}, 101 (1989). Mühlbauer, S., et al. [*Science*]{} [**323**]{}, 915 (2009). Münzer, W., et al. [*Phys. Rev.*]{} [**B 81**]{}, 041203(R) (2010). Yu, X., et al. [*Nature*]{} [**465**]{}, 901 (2010). Nagaosa, N., Tokura, Y. [*Nature Nanotechnology*]{} [**8**]{}, 899 (2013). Banerjee, S., Rowland, J., Erten, O., Randeria M. [*Phys. Rev.*]{} [**X 4**]{}, 031045 (2014). Badrtdinov, D. I., Nikolaev, S. A., Katsnelson, M. I., Mazurenko, V. V. [*Phys. Rev.*]{} [**B 94**]{}, 224418 (2016). Mazurenko, V. V., et al. [*Phys. Rev.*]{} [**B 94**]{}, 214411 (2016). Slezák, J., Mutombo, P., Cháb, V. [*Phys. Rev.*]{} [**B 60**]{}, 13328 (1999). Modesti, S., et al. [*Phys. Rev. Lett.*]{} [**98**]{}, 126401 (2007). Li, G., et al. [*Nature Communications*]{} [**4**]{}, 1620 (2013). Kashtiban, R. J., et al. [*Nature Communications*]{} [**5**]{}, 4902 (2014). Itin, A. P., Katsnelson, M. I. [*Phys. Rev. Lett.*]{} [**115**]{}, 075301 (2015). Dutreix, C., Stepanov, E. A., Katsnelson, M. I. [*Phys. Rev.*]{} [*B 93*]{}, 241404(R) (2016). Stepanov, E. A., Dutreix, C., Katsnelson, M. I. [*Phys. Rev. Lett.*]{} [**118**]{}, 157201 (2017). Trotzky, S., et al. [*Science*]{} [**319**]{}, 295 (2008). Gong, M., Qian, Y., Yan, M., Scarola, V. W., Zhang, C. [*Scientific Reports*]{} [**5**]{}, 10050 (2015). Lin, Y. J., Jiménez-Garcia, K., Spielman, I. B. [*Nature*]{} [**471**]{}, 83 (2011). Wang, P., et al. [*Phys. Rev. Lett.*]{} [**109**]{}, 095301 (2012). Da-Chuang, L., et al. [*Chinese Physics Letters*]{} [**32**]{}, 050302 (2015). Supplemental Material for “Heisenberg-exchange-free nanoskyrmion mosaic”. Rosales, H. D., Cabra, D. C., Pujol, P. [*Phys. Rev.*]{} [**B 92**]{}, 214439 (2015). Berg, B., Lüscher, M. 
[*Nuclear Physics*]{} [**B 190**]{}, 412 (1981). Heo, C., Kiselev, N. S., Nandy, A. K., Blügel, S., Rasing, T. [*Scientific Reports*]{} [**6**]{}, 27146 (2016). Heinze, S., et al. [*Nature Physics*]{} [**7**]{}, 713 (2011). Johnson, D. S. [*Near-optimal bin packing algorithms*]{}, (Ph.D. thesis, Massachusetts Institute of Technology, 1973). Lin, S. Z., Saxena, A. [*Phys. Rev.*]{} [**B 92**]{}, 180401(R) (2015). Keesman, R., Raaijmakers, M., Baerends, A. E., Barkema, G. T., Duine, R. A. [*Phys. Rev.*]{} [**B 94**]{}, 054402 (2016). Kalff, F. E., et al. [*Nature Nanotechnology*]{} [**11**]{}, 926 (2016). Wiesendanger, R. [*Rev. Mod. Phys.*]{} [**81**]{}, 1495 (2009). Although the rich variety of nanoskyrmions on the discrete lattices obtained in the current study cannot be described by the micromagnetic model, because the length scale on which the magnetic structure varies is of the order of the interatomic distance, the result for the radius matches our numerical simulations very well and can be considered as a limiting case of the micromagnetic solution when the applied magnetic field is of the order of DMI. Nevertheless, the analytical solution for the isolated skyrmion might be helpful for other sets of system parameters, where the considered micromagnetic model is applicable. Methods ======= The DMI Hamiltonian with classical spins was solved by means of the Monte Carlo approach. The spin update scheme is based on the Metropolis algorithm. The systems in question are gradually (200 temperature steps) cooled down from high temperatures (${\rm T}\sim |\mathbf{D}_{ij}|$) to ${\rm T}=~0.01|\mathbf{D}_{ij}|$. Each temperature step run consists of $1.5\times10^{6}$ Monte Carlo steps. The corresponding micromagnetic model was solved analytically. Definition of the skyrmion number ================================= The skyrmion number is related to the topological charge.
In the discrete case, the result is extremely sensitive to the number of sites comprising a single skyrmion and to the way the spherical surface is approximated. Here we used the approach of Berg and Lüscher, which is based on the definition of the topological charge as the sum of the areas of nonoverlapping spherical triangles. The solid angle subtended by the spins ${\bf S}_{1}$, ${\bf S}_{2}$ and ${\bf S}_{3}$ is defined as $$\begin{aligned} A = 2 \arccos\left[\frac{1+ {\bf S}_{1} {\bf S}_{2} + {\bf S}_{2} {\bf S}_{3} + {\bf S}_{3} {\bf S}_{1}}{\sqrt{2(1+ {\bf S}_{1} {\bf S}_{2} )(1+ {\bf S}_{2} {\bf S}_{3})(1+ {\bf S}_{3} {\bf S}_{1})}}\right].\end{aligned}$$ We do not consider the exceptional configurations for which $$\begin{aligned} &{\bf S}_{1} [{\bf S}_{2} \times {\bf S}_{3}] = 0 \\ &1+ {\bf S}_{1} {\bf S}_{2} + {\bf S}_{2} {\bf S}_{3} + {\bf S}_{3} {\bf S}_{1} \le 0. \notag\end{aligned}$$ Then the topological charge $Q$ is equal to $ Q = \frac{1}{4\pi} \sum_{l} A_{l}. $ Spin spiral state ================= Our Monte Carlo simulations for the DMI Hamiltonian with classical spin $|\mathbf{S}| = 1$ have shown that the triangular and square lattice systems form [*spin spiral*]{} structures (see Fig. \[spinfactors\]). The calculated spin structure factors are $$\begin{aligned} \chi_{\perp}({\mathbf{q}})&=\frac{1}{N}{\ensuremath{\left\langle \left|\sum_{i} S_{i}^{x} \, e^{-i{\mathbf{q}}\cdot{\bf r}_{i}} \right|^{2}+\left|\sum_{i} S_{i}^{y} \, e^{-i{\mathbf{q}}\cdot{\bf r}_{i}} \right|^{2} \right\rangle}}\\ \chi_{\parallel}({\mathbf{q}})&=\frac{1}{N}{\ensuremath{\left\langle \left|\sum_{i} S_{i}^{z} e^{-i{\mathbf{q}}\cdot{\bf r}_{i}} \right|^{2} \right\rangle}}.\end{aligned}$$ ![Fragments of the spin textures and spin structure factors obtained with the Heisenberg-exchange-free model on the square $20\times20$ [**A**]{} and triangular $21\times21$ [**B**]{} lattices in the absence of the magnetic field.
The temperature is equal to ${\rm T}=0.01\,|{\rm \bf D}|$.[]{data-label="spinfactors"}](FigS1.pdf){width="0.55\linewidth"} The intensity patterns at zero magnetic field correspond to the spin spiral state with $|\mathbf{q}_{\square}| = \frac{1}{2\sqrt{2}} \times \frac{2\pi}{a}$ and $|\mathbf{q}_{\triangle}| \simeq 0.29 \times \frac{2\pi}{a}$ for the square and triangular lattices, respectively. Here $a$ is the lattice constant. The corresponding periods of the spin spirals are $\lambda_{\triangle} = 3.5\,a$ and $\lambda_{\square} = 2 \sqrt{2}\,a$. The energies of the triangular and square systems at zero B$_{z}$ and low temperatures scale as the energies of the elementary clusters, namely E$_{\triangle}$ and E$_{\square}$ (Fig. \[Fig2\] c and Fig. \[Fig2\] d), multiplied by the number of sites. In contrast to previous considerations taking into account the Heisenberg exchange interaction, we do not observe any long-wavelength spin excitations (${\mathbf{q}}=0$) in $\chi_{\parallel}(\boldsymbol{q})$. Micromagnetics of isolated skyrmion =================================== A qualitative description of a single skyrmion can be obtained within a micromagnetic model, when the quantum spin operators $\hat{\bf S}$ are replaced by the classical local magnetization ${\bf m}_{i}$ at every lattice site and later by a continuous and differentiable vector field ${\bf m}({\bf r})$ as $\hat{\bf S}_{i} \to S {\bf m}_{i} \to S{\bf m}({\bf r})$, where $S$ is the spin amplitude and $|{\bf m}|=1$. This approach is valid for large quantum spins when the length scale on which the magnetic structure varies is larger than the interatomic distance.
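As a numerical cross-check linking the Berg-Lüscher definition of the topological charge above to the continuum picture, one can evaluate the discrete charge of an axisymmetric trial texture sampled on a grid. The sketch below (Python with numpy) is ours: the exponential profile $\theta(\rho)=\pi e^{-\rho/R}$, the grid, and all names are illustrative assumptions, not the actual skyrmion profile of the model.

```python
import numpy as np

def solid_angle(s1, s2, s3):
    """Signed solid angle of the spherical triangle (s1, s2, s3),
    following the Berg-Luescher construction quoted above."""
    num = 1.0 + s1 @ s2 + s2 @ s3 + s3 @ s1
    den = np.sqrt(2.0 * (1 + s1 @ s2) * (1 + s2 @ s3) * (1 + s3 @ s1))
    ang = 2.0 * np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.sign(s1 @ np.cross(s2, s3)) * ang

def spin(x, y, R=1.0):
    """Axisymmetric Neel-type trial profile: theta(0)=pi, theta(inf)->0."""
    rho, phi = np.hypot(x, y), np.arctan2(y, x)
    theta = np.pi * np.exp(-rho / R)        # illustrative profile choice
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

xs = np.linspace(-6.0, 6.0, 60)
S = np.array([[spin(x, y) for y in xs] for x in xs])
Q = 0.0
for i in range(len(xs) - 1):
    for j in range(len(xs) - 1):
        # two consistently oriented triangles per square plaquette
        Q += solid_angle(S[i, j], S[i + 1, j], S[i + 1, j + 1])
        Q += solid_angle(S[i, j], S[i + 1, j + 1], S[i, j + 1])
Q /= 4.0 * np.pi
```

Since the spherical-triangle areas add exactly, the sum reproduces the integer winding of the texture up to a tiny boundary correction, so $|Q| \simeq 1$ here.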
Let us make the specified transformation explicitly and calculate the energy of the single spin localized at the lattice site $i$ that can be found as a sum of the initial DMI Hamiltonian over the nearest neighbor lattice sites $$\begin{aligned} \label{mme1} E_{i}&=-\sum_{j}J_{ij}\,\hat{\bf S}_{i}\,\hat{\bf S}_{j} + \sum_{j}{\bf D}_{ij}\,[\hat{\bf S}_{i}\times\hat{\bf S}_{j}] - {\bf B}\,\hat{\bf S}_{i} \\ &= -S^{2}\sum_{j}J_{ij}\,{\bf m}_{i}\,{\bf m}_{j} + S^2\sum_{j}{\bf D}_{ij}\,[{\bf m}_{i}\times{\bf m}_{j}] - S{\bf B}\,{\bf m}_{i} \notag\\ &=-S^{2}\sum_{j}J_{ij}\left(1-\frac{1}{2}({\bf m}_{j}-{\bf m}_{i})^{2}\right) + S^2\sum_{j}{\bf D}_{ij}\left[{\bf m}_{i}\times\left({\bf m}_{j}-{\bf m}_{i}\right)\right] - S{\bf B}\,{\bf m}_{i} \notag\\ &\simeq-S^{2}\sum_{j}J_{ij}\left(1-\frac{1}{2}({\bf m}({\bf r}_{i}+\delta{\bf r}_{ij}) - {\bf m}({\bf r}_{i}))^{2}\right) + S^2\sum_{j}{\bf D}_{ij}\left[{\bf m}_{i}({\bf r}_{i})\times\left({\bf m}({\bf r}_{i}+\delta{\bf r}_{ij}) - {\bf m}({\bf r}_{i})\right)\right] - S{\bf B}\,{\bf m}({\bf r}_{i}) \notag\\ &\simeq\frac{S^{2}a^2}{2}\sum_{j}J_{ij}\left(({\bf e}_{ij}\nabla)\,{\bf m}({\bf r}_{i})\right)^{2} + S^2a\sum_{j}{\bf D}_{ij}\left[{\bf m}_{i}({\bf r}_{i})\times (({\bf e}_{ij}\nabla)\,{\bf m}({\bf r}_{i}))\right] - S{\bf B}\,{\bf m}({\bf r}_{i}), \notag\end{aligned}$$ The DMI vector ${\bf D}_{ij} = D\left[{\bf e}_{z}\times{\bf e}_{ij}\right]$ is perpendicular to the vector ${\bf e}_{ij}$ that connects spins at the nearest-neighbor sites ${\ensuremath{\left\langle ij \right\rangle}}$ and favours their orthogonal alignment, while the exchange term tends to make the magnetization uniform. The above derivation was obtained for a particular case of a square lattice, but can be straightforwardly generalized to an arbitrary configuration of spins. 
Then, we obtain $$\begin{aligned} \label{mme2} E_{i} &= \frac{S^{2}a^2}{2}\sum_{j}J_{ij}\left(({\bf e}_{ij}\nabla)\,{\bf m}({\bf r}_{i})\right)^{2} + S^2aD\sum_{j}\left[{\bf e}_{z}\times{\bf e}_{ij}\right]\left[{\bf m}_{i}\times \Big(({\bf e}_{ij}\nabla)\,{\bf m}({\bf r}_{i})\Big)\right] - S{\bf B}_{z}\,{\bf m}_{z}({\bf r}_{i}) \\ &= \frac{J'}{2}\left[\Big(\partial_{x}\,{\bf m}({\bf r}_{i})\Big)^{2} + \Big(\partial_{y}\,{\bf m}({\bf r}_{i})\Big)^{2} \right] - S{\bf B}_{z}\,{\bf m}_{z}({\bf r}_{i}) \notag \\ &\,+ D'\,\left[{\bf m}_{z}({\bf r}_{i})\,\Big(\partial_{x}{\bf m}_{x}({\bf r}_{i})\Big) - \Big(\partial_{x}{\bf m}_{z}({\bf r}_{i})\Big)\,{\bf m}_{x}({\bf r}_{i}) + {\bf m}_{z}({\bf r}_{i})\,\Big(\partial_{y}{\bf m}_{y}({\bf r}_{i})\Big) - \Big(\partial_{y}{\bf m}_{z}({\bf r}_{i})\Big)\,{\bf m}_{y}({\bf r}_{i})\right] \notag\end{aligned}$$ where $J'=2JS^{2}a^{2}$, $D'=2DS^{2}a$ and $B'=BS$. The unit vector of the magnetization at every point of the vector field can be parametrized by ${\bf m} = \sin\theta\cos\psi\,{\bf e}_{x} + \sin\theta\sin\psi\,{\bf e}_{y} + \cos\theta\,{\bf e}_{z}$ in the spherical coordinate basis.
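This parametrization can be sanity-checked with sympy (a sketch; $\theta$ and $\psi$ are taken to depend on $\rho$ only, anticipating the radial derivatives used below):

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
theta = sp.Function('theta')(rho)
psi = sp.Function('psi')(rho)

# spherical-angle parametrization of the magnetization
m = sp.Matrix([sp.sin(theta)*sp.cos(psi),
               sp.sin(theta)*sp.sin(psi),
               sp.cos(theta)])

# the parametrization is automatically of unit length
assert sp.simplify((m.T * m)[0] - 1) == 0

# |dm/drho|^2 = theta'^2 + sin^2(theta) psi'^2, the combination entering the exchange energy
dm = m.diff(rho)
identity = (dm.T * dm)[0] - (theta.diff(rho)**2 + sp.sin(theta)**2 * psi.diff(rho)**2)
assert sp.simplify(identity) == 0
```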
In order to describe axisymmetric skyrmions, we additionally introduce the cylindrical coordinates $\rho$ and $\varphi$, so that $\rho=0$ is associated with the center of a skyrmion $$\begin{aligned} \partial_{x}{\bf m}({\bf r}_{i}) &= \cos\varphi\,\partial_{\rho}{\bf m}({\bf r}_{i}) - \frac{1}{\rho}\sin\varphi\,\partial_{\varphi}{\bf m}({\bf r}_{i})\,, \\ \partial_{y}{\bf m}({\bf r}_{i}) &= \sin\varphi\,\partial_{\rho}{\bf m}({\bf r}_{i}) + \frac{1}{\rho}\cos\varphi\,\partial_{\varphi}{\bf m}({\bf r}_{i})\,,\end{aligned}$$ from which it follows that $$\begin{aligned} \Big(\partial_{x}{\bf m}({\bf r}_{i})\Big)^2 + \Big(\partial_{y}{\bf m}({\bf r}_{i})\Big)^2 = \Big(\partial_{\rho}{\bf m}({\bf r}_{i})\Big)^2 + \frac{1}{\rho^2}\Big(\partial_{\varphi}{\bf m}({\bf r}_{i})\Big)^2.\end{aligned}$$ Assuming that $\theta=\theta(\rho,\varphi)$ and $\psi=\psi(\rho,\varphi)$, the derivatives of the magnetization can be expressed as $$\begin{aligned} \partial_{\rho}{\bf m}({\bf r}_{i}) &= \Big(\cos\theta\cos\psi\,\dot\theta_{\rho}-\sin\theta\sin\psi\,\dot\psi_{\rho}\Big)\,{\bf e}_{x} + \Big(\cos\theta\sin\psi\,\dot\theta_{\rho}+\sin\theta\cos\psi\,\dot\psi_{\rho}\Big)\,{\bf e}_{y} - \sin\theta\,\dot\theta_{\rho}\,{\bf e}_{z} \,, \\ \partial_{\varphi}{\bf m}({\bf r}_{i}) &= \Big(\cos\theta\cos\psi\,\dot\theta_{\varphi}-\sin\theta\sin\psi\,\dot\psi_{\varphi}\Big)\,{\bf e}_{x} + \Big(\cos\theta\sin\psi\,\dot\theta_{\varphi}+\sin\theta\cos\psi\,\dot\psi_{\varphi}\Big)\,{\bf e}_{y} - \sin\theta\,\dot\theta_{\varphi}\,{\bf e}_{z} \,.\end{aligned}$$ The exchange and DMI energies are then $$\begin{aligned} E^{J}_{i} &= \frac{J'}{2}\left[\dot\theta^2_{\rho} + \sin^2\theta\,\dot\psi_{\rho}^2 + \frac{1}{\rho^2}\dot{\theta}^2_{\varphi} + \frac{1}{\rho^2}\sin^2\theta\,\dot\psi^2_{\varphi} \right],\\ E^{D}_{i} &= D'\left(\cos(\psi-\varphi)\left[\dot\theta_{\rho}+\frac{1}{\rho}\sin\theta\cos\theta\,\dot\psi_{\varphi}\right] +
\sin(\psi-\varphi)\left[\frac{1}{\rho}\dot\theta_{\varphi}-\sin\theta\cos\theta\,\dot\psi_{\rho}\right]\right).\end{aligned}$$ Finally, the micromagnetic energy can be written as follows $$\begin{aligned} E(\theta,\psi) =\int_{0}^{\infty}{\cal E}(\theta,\psi,\rho,\varphi)\,d\rho\,d\varphi \,,\end{aligned}$$ where the skyrmionic energy density is $$\begin{aligned} {\cal E}(\theta,\psi,\rho,\varphi) &= \frac{J'}{2}\left[\rho\dot\theta^2_{\rho} + \rho\sin^2\theta\,\dot\psi_{\rho}^2 + \frac{1}{\rho}\dot{\theta}^2_{\varphi} + \frac{1}{\rho}\sin^2\theta\,\dot\psi^2_{\varphi} \right] - B'\rho\cos\theta \\ & + D'\left(\cos(\psi-\varphi)\left[\rho\dot\theta_{\rho}+\sin\theta\cos\theta\,\dot\psi_{\varphi}\right] + \sin(\psi-\varphi)\left[\dot\theta_{\varphi}-\rho\sin\theta\cos\theta\,\dot\psi_{\rho}\right]\right). \notag\end{aligned}$$ The set of the Euler-Lagrange equations for this energy density $$\begin{aligned} \begin{cases} \frac{\partial{\cal E}}{\partial\theta} - \frac{d}{d\rho}\frac{\partial{\cal E}}{\partial\dot\theta_{\rho}} - \frac{d}{d\varphi}\frac{\partial{\cal E}}{\partial\dot\theta_{\varphi}}=0, \\ \frac{\partial{\cal E}}{\partial\psi} - \frac{d}{d\rho}\frac{\partial{\cal E}}{\partial\dot\psi_{\rho}} - \frac{d}{d\varphi}\frac{\partial{\cal E}}{\partial\dot\psi_{\varphi}}=0, \end{cases}\end{aligned}$$ then reads $$\begin{aligned} \left\{\hspace{-0.15cm} \begin{matrix} &J'\left[\rho\,\ddot\theta_{\rho} + \dot\theta_{\rho} + \frac{1}{\rho}\ddot\theta_{\varphi} - \frac{1}{\rho} \sin\theta\cos\theta\,\dot\psi^2_{\varphi} - \rho\sin\theta\cos\theta\,\dot\psi^2_{\rho} \right] + 2D'\left[\cos(\psi-\varphi)\sin^2\theta\,\dot\psi_{\varphi} - \rho\sin(\psi-\varphi)\sin^2\theta\,\dot\psi_{\rho}\right] - B'\rho\sin\theta=0 \,,\\ &J'\left[\rho\sin^2\theta\,\ddot\psi_{\rho} + \sin^2\theta\,\dot\psi_{\rho} + \rho\sin2\theta\,\dot\psi_{\rho}\dot\theta_{\rho} + \frac{1}{\rho}\sin^2\theta\,\ddot\psi_{\varphi} + 
\frac{1}{\rho}\sin2\theta\,\dot\theta_{\varphi}\,\dot\psi_{\varphi}\right] + 2D'\left[\rho\sin(\psi-\varphi)\sin^2\theta\,\dot\theta_{\rho} - \cos(\psi-\varphi)\sin^2\theta\,\dot\theta_{\varphi}\right] = 0 \,. \end{matrix} \right. \notag\end{aligned}$$ Here we restrict ourselves to the particular case of the $C_{nv}$ symmetry. Then, one can assume that $\dot\theta_{\varphi}=0$ and $\psi-\varphi=\pi{}n$ ($n\in{\mathbb Z}$), which leads to $$\begin{aligned} \alpha \left[\rho^{2}\,\ddot\theta_{\rho} + \rho \dot\theta_{\rho} - \sin\theta\cos\theta \right] \pm 2\rho\sin^2\theta - \beta\rho^{2}\sin\theta=0 \,,\end{aligned}$$ where $\alpha=J'/D'$ and $\beta=B'/D'$. Although we are interested in the problem where the exchange interaction is absent, it is still necessary to keep $\alpha\ll1$ as a small parameter in order to investigate stability of the skyrmionic solution under small perturbations. Therefore, one can look for a solution of the following form $$\begin{aligned} \theta = \theta_{0} + \alpha \theta_{1} + O(\alpha^2),\end{aligned}$$ which results in $$\begin{aligned} \alpha \left[ \rho^{2}\,\ddot\theta_{0} + \rho \dot\theta_{0} - \sin\theta_{0}\cos\theta_{0} \right] \pm 2\rho\sin^2\theta_{0} \pm 4 \alpha \rho\sin\theta_{0}\cos\theta_{0}\, \theta_{1} - \beta\rho^{2}\sin\theta_{0} - \alpha \beta\rho^{2}\cos\theta_{0}\,\theta_{1} =0 \,.\end{aligned}$$ Solution for $J'=0$ ------------------- When the exchange interaction is dynamically switched off ($J' = 0$), the zeroth order in the limit $\alpha\ll1$ leads to $$\begin{aligned} \rho\,\sin\theta_{0} \left( \beta \rho \mp 2\sin\theta_{0} \right) =0 \,.\end{aligned}$$ This yields two solutions: $$\begin{aligned} 1)~\sin\theta_{0} &= 0,~\text{which corresponds to a FM ordered state}\\ 2)~\sin\theta_{0} &= \pm \frac{\beta\rho}{2} = \pm \frac{B'\rho}{2D'},~\text{which describes a Skyrmion}.\label{eq:Skprofile}\end{aligned}$$ Then, the unit vector of the magnetization that describes a single skyrmion is equal to 
$$\begin{aligned} {\bf m} = \sin\theta \cos(\psi-\varphi)\,{\bf e}_{\rho} + \sin\theta\sin(\psi-\varphi)\,{\bf e}_{\varphi} + \cos\theta\,{\bf e}_{z} = \pm\sin\theta\,{\bf e}_{\rho} + \cos\theta\,{\bf e}_{z} = \frac{\beta\rho}{2}\,{\bf e}_{\rho} + \cos\theta_0\,{\bf e}_{z}\,.\end{aligned}$$ Importantly, the radial coordinate of the skyrmionic solution is limited by the condition $\rho\leq\frac{2D'}{B'}$. Moreover, the $z$ component of the magnetization, namely ${\bf m}_{z}=\cos\theta_0$, is not uniquely determined by the Euler-Lagrange equations. Indeed, the solution Eq. \[eq:Skprofile\] with the initial condition $\theta_0(\rho=0)=\pi$ for the center of the skyrmion describes only half of the skyrmion, because the magnetization at the boundary ${\bf m}(\rho=2D'/B')$ lies in-plane along ${\bf e}_{\rho}$, which cannot be continuously matched with the FM environment of a single skyrmion. Moreover, the magnetization at larger values of $\rho$ is undefined within this solution. Therefore, one has to make some effort to obtain the solution for the whole skyrmionic structure. Let us consider the case when the magnetization at the center of the skyrmion points down, i.e. $\theta_0(\rho=0)=\pi$, ${\bf m}_{z}(\rho=0)=-1$, and the magnetic field points up. Then, Eq. \[eq:Skprofile\] provides the solution on the segment $\theta_0\in[\pi,\frac{\pi}{2}]$ and $\rho\in[0,\frac{2D'}{B'}]$, which for every given direction with fixed angle $\varphi$ describes a quarter of the period of the spin spiral, as shown in the left panel of Fig. \[fig:SkProf\]. As mentioned in the main text, in the case of the $C_{nv}$ symmetry the single skyrmion is nothing more than a superposition of three and two spin spirals for the triangular and square lattices, respectively. Therefore, one has to restore the second quarter of the period of a spin spiral, and the rest can be obtained via the symmetry operation $\rho\to-\rho$.
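The cylindrical-basis form of ${\bf m}$ above can be checked against the Cartesian spherical-angle parametrization (sympy sketch):

```python
import sympy as sp

theta, psi, phi = sp.symbols('theta psi varphi', real=True)

# cylindrical basis vectors expressed in Cartesian components
e_rho = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
e_phi = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])
e_z = sp.Matrix([0, 0, 1])

m_cyl = (sp.sin(theta)*sp.cos(psi - phi)*e_rho
         + sp.sin(theta)*sp.sin(psi - phi)*e_phi
         + sp.cos(theta)*e_z)
m_cart = sp.Matrix([sp.sin(theta)*sp.cos(psi),
                    sp.sin(theta)*sp.sin(psi),
                    sp.cos(theta)])

# both expressions describe the same magnetization field
assert (m_cyl - m_cart).applyfunc(sp.simplify) == sp.zeros(3, 1)
```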
The second part of the spin spiral can be found by shifting the variable $\rho$ by $\rho_0$ in the skyrmionic solution Eq. \[eq:Skprofile\], i.e. $\sin\theta_0 = B'(\rho-\rho_0)/2D'$. In order to match this solution with the initial one, the constant has to be equal to $\rho_0=\frac{4D'}{B'}$. Since the magnetization is defined as a continuous and differentiable function, the angle $\theta_0$ can only vary on the segment $\theta_0\in[\frac{\pi}{2},0]$; otherwise either the ${\bf e}_{\rho}$ or the ${\bf e}_{z}$ projection of the magnetization will not fulfil the mentioned requirement. The correct matching of the two spin spirals is shown in Fig. \[fig:SkProf\] a), while Fig. \[fig:SkProf\] c) shows the violation of differentiability of ${\bf m}_{z}$ and Figs. \[fig:SkProf\] b), d) give a wrong matching of ${\bf m}_{\rho}$. Thus, the magnetization at the boundary of the skyrmion at $\rho=R=\frac{4D'}{B'}$, which defines the radius $R$, points up, i.e. ${\bf m}_{z}(\rho_0)=1$, which perfectly matches the FM environment that is collinear with the constant magnetic field ${\bf B}$. ![Possible matching of the two parts of the spin spiral.[]{data-label="fig:SkProf"}](FigS2.pdf){width="0.5\linewidth"} ![Skyrmionic radius for two different values of the magnetic field. The larger field favours more compact structures ($R_2<R_1$) as shown in the right panel. Red arrows depict the skyrmion, while the two black arrows are related to the ferromagnetic environment.[]{data-label="fig:SkR"}](FigS3.pdf){width="0.5\linewidth"} It is worth mentioning that the obtained result for the radius of the skyrmion $R=\frac{4D'}{B'} = \frac{8DSa}{B}$ is fundamentally different from the case when the skyrmion appears due to a competition between the exchange interaction and DMI. The radius in the latter case is proportional to the ratio $J/D$ and does not depend on the value of the spin $S$, while in the Heisenberg-exchange-free case it does.
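A numerical sketch of the matched two-piece profile (with an illustrative value of $\beta=B'/D'$; the branch signs are chosen as described above) confirms continuity at the matching point $\rho=2D'/B'$ and the boundary values ${\bf m}_{z}(0)=-1$, ${\bf m}_{z}(R)=+1$:

```python
import numpy as np

beta = 0.5                # beta = B'/D', illustrative value
R = 4.0 / beta            # skyrmion radius R = 4 D'/B'

def theta0(rho):
    """Matched two-piece profile: theta0 in [pi, pi/2] inside, [pi/2, 0] outside."""
    rho = np.asarray(rho, dtype=float)
    inner = np.pi - np.arcsin(np.clip(beta * rho / 2.0, -1.0, 1.0))
    outer = np.arcsin(np.clip(beta * (R - rho) / 2.0, -1.0, 1.0))
    return np.where(rho <= R / 2.0, inner, outer)

# the two pieces meet at theta0 = pi/2 at the matching point rho = 2 D'/B' = R/2
assert abs(float(theta0(R / 2.0)) - np.pi / 2.0) < 1e-9
eps = 1e-9
assert abs(float(theta0(R / 2.0 - eps)) - float(theta0(R / 2.0 + eps))) < 1e-3

# boundary values: m_z = cos(theta0) runs from -1 at the center to +1 at rho = R
assert abs(float(np.cos(theta0(0.0))) + 1.0) < 1e-12
assert abs(float(np.cos(theta0(R))) - 1.0) < 1e-12
```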
Although in the absence of DMI both the exchange interaction and the magnetic field favour the collinear orientation of spins along the $z$ axis, the presence of DMI changes the picture drastically. The spins are now tilted from site to site, but the magnetic field still wants them to point in the $z$ direction, and the exchange interaction aligns neighboring spins parallel without any relation to the axes. As a result, a stronger magnetic field decreases the radius of the skyrmion, while a larger exchange interaction broadens the structure. This is also clear from Fig. \[fig:SkR\], where in the case of zero exchange interaction the larger magnetic field favours a smaller skyrmion radius, $R_2<R_1$, as shown in the right panel. Finally, the obtained skyrmionic structure is shown in Fig. \[fig:Sk\]. It is worth mentioning that our numerical study corresponds to $B\sim{}D$, so the radius of the skyrmion is equal to $\rho_{0}\sim8Sa$, which is of the order of a few lattice sites. Although for these values of the magnetic field the micromagnetic model is not applicable, because the magnetization changes considerably from site to site, it provides a good qualitative understanding of the skyrmionic behavior and still matches our numerical simulations. The corresponding skyrmion number in a two-dimensional system is defined as $$\begin{aligned} N=\frac{1}{4\pi} \int dx\,dy\,{\bf m} \left[ \partial_{x}{\bf m}\times\partial_{y}{\bf m}\right]\end{aligned}$$ and is then equal to $$\begin{aligned} N=\frac{1}{4\pi} \int dx\,dy\,\frac{1}{\rho}\sin\theta\,\dot\theta_{\rho} = \frac{1}{4\pi} \int d\rho\,d\varphi\,\sin\theta\,\dot\theta_{\rho} = \frac12\left(\cos\theta(0) - \cos\theta(\rho_{0})\right) = 1.\end{aligned}$$ One can also consider the case of zero magnetic field.
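Before turning to that case, the skyrmion number above can be cross-checked numerically for the matched two-piece profile (a sketch with an illustrative value of $\beta=B'/D'$; up to the overall sign fixed by the branch and field convention, $|N|=1$):

```python
import numpy as np

beta = 0.5               # beta = B'/D', illustrative value
R = 4.0 / beta           # skyrmion radius R = 4 D'/B'

def theta0(rho):
    """Matched two-piece profile: theta0 runs from pi at the center to 0 at rho = R."""
    rho = np.asarray(rho, dtype=float)
    inner = np.pi - np.arcsin(np.clip(beta * rho / 2.0, -1.0, 1.0))
    outer = np.arcsin(np.clip(beta * (R - rho) / 2.0, -1.0, 1.0))
    return np.where(rho <= R / 2.0, inner, outer)

# N = (1/4pi) int drho dphi sin(theta) theta'_rho, and the rho-integral of
# sin(theta) theta'_rho telescopes to cos(theta(0)) - cos(theta(R))
rho = np.linspace(1e-8, R - 1e-8, 4001)
ct = np.cos(theta0(rho))
N = 0.5 * (ct[0] - ct[-1])
assert abs(abs(N) - 1.0) < 1e-6
```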
Then, the solution of the Euler-Lagrange equations $$\begin{aligned} \left\{ \begin{matrix} \dot\psi_{\varphi} = \rho\tan(\psi-\varphi)\,\dot\psi_{\rho},\\ \dot\theta_{\varphi} = \rho\tan(\psi-\varphi)\,\dot\theta_{\rho} \end{matrix} \right.\end{aligned}$$ describes a spiral state, as shown in Fig. \[spinfactors\]. ![Spatial profile of the skyrmionic solution.[]{data-label="fig:Sk"}](FigS4.pdf){width="0.32\linewidth"} Solution for small $J'$ ----------------------- Now, let us study the stability of the skyrmionic solution and consider the case of a small exchange interaction with respect to the DMI. The first order in the limit $\alpha\ll1$ implies $$\begin{aligned} \left[ \rho^{2}\,\ddot\theta_{0} + \rho \dot\theta_{0} - \sin\theta_{0}\cos\theta_{0} \right] \pm 4 \rho\sin\theta_{0}\cos\theta_{0}\, \theta_{1} - \beta\rho^{2}\cos\theta_{0}\,\theta_{1} =0 \,.\end{aligned}$$ The zeroth order solution leads to $$\begin{aligned} \cos \theta_{0} \, \dot\theta_{0} = \pm \frac{\beta}{2} ~~~\text{and}~~~ \cos\theta_{0} \, \ddot\theta_{0} -\sin\theta_{0} \, \dot\theta_{0}^{2} = 0 \,.\end{aligned}$$ This results in $$\begin{aligned} &\rho^{2}\frac{\sin\theta_{0}}{\cos\theta_{0}} \left(\frac{\beta}{2\cos\theta_{0}}\right)^{2} \pm \rho \frac{\beta}{2\cos\theta_{0}} - \sin\theta_{0}\cos\theta_{0} = - \beta \rho^{2} \cos\theta_{0} \, \theta_{1} \notag\\ &\pm \left(\frac{\beta\rho}{2}\right)^{3}\frac{1}{\cos^{4}\theta_{0}} \pm \frac{\beta\rho}{2} \frac{1}{\cos^{2}\theta_{0}} \mp \frac{\beta\rho}{2} = - \beta \rho^{2} \, \theta_{1} \notag\\ &\beta \rho^{2} \, \theta_{1} = \mp \left[ \left(\frac{\beta}{2} \rho \right)^{3} \frac{1}{\cos^{4}\theta_{0}} + \frac{\beta}{2} \rho \left( \frac{1}{\cos^{2}\theta_{0}} - 1 \right) \right] \notag\\ &\theta_{1} = - \frac{\beta}{4}\sin\theta_{0} \left[ \frac{1}{\cos^{4}\theta_{0}} + \frac{1}{\cos^{2}\theta_{0}} \right] \,,\end{aligned}$$ provided $\cos\theta_{0}\neq0$.
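The correction $\theta_1$ can be verified symbolically: substituting the skyrmion branch $\sin\theta_0=\beta\rho/2$ (upper sign) together with the $\theta_1$ obtained above into the first-order equation gives zero identically (sympy sketch):

```python
import sympy as sp

rho, beta = sp.symbols('rho beta', positive=True)

# skyrmion branch (upper sign): sin(theta0) = beta*rho/2
th0 = sp.asin(beta * rho / 2)
# first-order correction obtained above
th1 = -(beta / 4) * sp.sin(th0) * (1 / sp.cos(th0)**4 + 1 / sp.cos(th0)**2)

# first-order equation:
# [rho^2 th0'' + rho th0' - sin th0 cos th0] + 4 rho sin th0 cos th0 th1
#                                            - beta rho^2 cos th0 th1 = 0
first_order = (rho**2 * sp.diff(th0, rho, 2) + rho * sp.diff(th0, rho)
               - sp.sin(th0) * sp.cos(th0)
               + 4 * rho * sp.sin(th0) * sp.cos(th0) * th1
               - beta * rho**2 * sp.cos(th0) * th1)

assert sp.simplify(first_order) == 0
```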
Therefore, the total solution for the skyrmion $$\begin{aligned} \theta = \theta_{0} - \frac{J'B'}{4D'^2}\sin\theta_{0} \left[ \frac{1}{\cos^{4}\theta_{0}} + \frac{1}{\cos^{2}\theta_{0}} \right]\end{aligned}$$ is stable in the two important regions where $\sin\theta_0=0$ – around the center of the skyrmion and at the border. The divergence of the correction $\theta_1$ in the middle of the skyrmion, where $\cos\theta_0=0$, comes from the fact that the magnetization is poorly defined there, as discussed above.
--- abstract: 'We use Stein’s method to bound the Wasserstein distance of order $2$ between a measure $\nu$ and the Gaussian measure using a stochastic process $(X_t)_{t \geq 0}$ such that $X_t$ is drawn from $\nu$ for any $t > 0$. If the stochastic process $(X_t)_{t \geq 0}$ satisfies an additional exchangeability assumption, we show it can also be used to obtain bounds on Wasserstein distances of any order $p \geq 1$. Using our results, we provide optimal convergence rates for the multi-dimensional Central Limit Theorem in terms of Wasserstein distances of any order $p \geq 2$ under simple moment assumptions.' author: - | Thomas Bonis\ DataShape team, Inria Saclay, Université Paris-Saclay, Paris, France\ thomas.bonis@inria.fr bibliography: - 'Bibliography.bib' title: - 'Stein’s method for normal approximation in Wasserstein distances with application to the multivariate Central Limit Theorem [^1] ' - 'Stein’s method for normal approximation in Wasserstein distances with application to the multivariate Central Limit Theorem' --- Acknowledgements {#acknowledgements .unnumbered} ================ The author would like to thank Michel Ledoux for his many comments and advice regarding the redaction of this paper as well as Jérôme Dedecker and Yvik Swan, Chi Tran and Frédéric Chazal for their multiple remarks. [^1]: The author was supported by the French Délégation Générale de l’Armement (DGA) and by ANR project TopData ANR-13-BS01-0008.
--- address: - 'Department of Mathematics, University of Michigan, East Hall, 525 East University Avenue, Ann Arbor, MI 48109-1109, USA' - 'Department of Mathematics, University of Illinois at Chicago, 851 S. Morgan St., M/C. 249, Chicago, IL 60607-7045, USA' - 'Department of Mathematics, Harvard University, 1 Oxford Street, Cambridge, MA 02138, USA' author: - Tommaso de Fernex - Lawrence Ein - Mircea Mustaţǎ title: Bounds for log canonical thresholds with applications to birational rigidity --- Introduction {#introduction .unnumbered} ============ Let $X$ be a smooth algebraic variety, defined over an algebraically closed field of characteristic zero, and let $V \subset X$ be a proper closed subscheme. Our main goal in this paper is to study an invariant of the pair $(X,V)$, called the log canonical threshold of $X$ along $V$, and denoted by $\operatorname{lc}(X,V)$. Interest in bounds for log canonical thresholds is motivated by techniques that have recently been developed in higher dimensional birational geometry. In this paper, we study this invariant using intersection theory, degeneration techniques and jet schemes. A natural question is how this invariant behaves under basic operations such as restrictions and projections. Restriction properties have been extensively studied in recent years, leading to important results and conjectures. In the first section of this paper, we investigate the behavior under projections, and we prove the following result (see Theorem \[thm1\] for a more precise statement): \[thm1-intro\] With the above notation, suppose that $V$ is Cohen-Macaulay, of pure codimension $k$, and let $f : X \to Y$ be a proper, dominant, smooth morphism of relative dimension $k-1$, with $Y$ smooth. If $f|_V$ is finite, then $$\operatorname{lc}(Y,f_*[V]) \leq \frac{k! \. \operatorname{lc}(X,V)^k}{k^k},$$ and the inequality is strict if $k\geq 2$.
Moreover, if $V$ is locally complete intersection, then $$\operatorname{lc}(Y,f_*[V]) \leq \frac{\operatorname{lc}(X,V)^k}{k^k}.$$ Examples show that these bounds are sharp. The proof of the above theorem is based on a general inequality relating the log canonical threshold of a fractional ideal of the form $h^{-b}\. \a$, and the colength of $\a$. Here $\a$ is a zero dimensional ideal in the local ring of $X$ at some (not necessarily closed) point, $b\in{\mathbb Q}_+$, and $h$ is the equation of a smooth divisor. We prove this inequality in the second section (see Theorem \[l(a)-e(a)\]), using a degeneration to monomial ideals. It generalizes a result from [@DEM], which was the case $b=0$. In the third section, we give lower bounds for the log canonical threshold of affine subschemes defined by homogeneous equations of the same degree. We prove the following Let $V\subset X=\A^n$ be a subscheme defined by homogeneous equations of degree $d$. Let $c=\operatorname{lc}(\A^n, V)$, and let $Z$ be the non log terminal locus of $(\A^n, c\. V)$. If $e=\operatorname{codim}(Z,\A^n)$, then $$\operatorname{lc}(\A^n,V) \ge \frac{e}d.$$ Moreover, we have equality if and only if the following holds: $Z$ is a linear subspace, and if $\pi : \A^n\longrightarrow\A^n/Z$ is the projection, then there is a subscheme $V'\subset\A^n/Z$ such that $V=\pi^{-1}(V')$, $\operatorname{lc}(\A^n/Z,V')=e/d$, and the non log terminal locus of $(\A^n/Z,(e/d)\. V')$ is the origin. The proof of this result is based on the characterization of the log canonical threshold via jet schemes from [@Mu2]. In the particular case when $V$ is the affine cone over a projective hypersurface with isolated singularities, the second assertion in the above result proves a conjecture of Cheltsov and Park from [@CP]. In the last section we apply the above bounds in the context of birational geometry. 
In their influential paper [@IM], Iskovskikh and Manin proved that a smooth quartic threefold is what is nowadays called birationally superrigid; in particular, every birational automorphism is regular, and the variety is not rational. There has been a lot of work to extend this result to other Fano varieties of index one, in particular to smooth hypersurfaces of degree $N$ in $\P^N$, for $N>4$. The case $N=5$ was done by Pukhlikov in [@Pu2], and the cases $N=6,7,8$ were proven by Cheltsov in [@Ch2]. Moreover, Pukhlikov showed in [@Pu5] that a general hypersurface as above is birationally superrigid, for every $N>4$. We use our results to give an easy and uniform proof of birational superrigidity for arbitrary smooth hypersurfaces of degree $N$ in $\P^N$ when $N$ is small. \[thm3\_introd\] If $X\subset{\mathbb P}^N$ is a smooth hypersurface of degree $N$, and if $4\leq N\leq 12$, then $X$ is birationally superrigid. Based on previous ideas of Corti, Pukhlikov proposed in [@Pu1] a proof of the birational rigidity of every smooth hypersurface of degree $N$ in $\P^N$, for $N\geq 6$. Unfortunately, at the moment there is a gap in his arguments (see Remark \[gap\] below). Despite this gap, the proof proposed in [@Pu1] contains many remarkable ideas, and it seems likely that a complete proof could be obtained in the future along those lines. In fact, the outline of the proof of Theorem \[thm3\_introd\] follows his method, and our contribution is mainly to simplify and solidify his argument. Acknowledgements {#acknowledgements .unnumbered} ---------------- We are grateful to Steve Kleiman and Rob Lazarsfeld for useful discussions. Research of the first author was partially supported by MURST of Italian Government, National Research Project (Cofin 2000) “Geometry of Algebraic Varieties”. Research of the second author was partially supported by NSF Grant DMS 02-00278.
The third author served as a Clay Mathematics Institute Long-Term Prize Fellow while this research has been done. Singularities of log pairs under projections ============================================ Let $X$ be a smooth algebraic variety, defined over an algebraically closed field of characteristic zero, and let $V \subset X$ be a proper subscheme. For any rational number $c > 0$, we can consider the pair $(X,c\. V)$. The usual definitions in the theory of singularities of pairs, for which we refer to [@Ko], extend to this context. In particular, we say that an irreducible subvariety $C \subset X$ is a center of non log canonicity (resp. non log terminality, non canonicity, non terminality) for $(X,c\. V)$ if there is at least one divisorial valuation of $K(X)$, with center $C$ on $X$, whose discrepancy along $(X,c\.V)$ is $<-1$ (resp. $\le -1$, $<0$, $\le 0$). We will denote by $\operatorname{lc}(X,V)$ the log canonical threshold of the pair $(X,V)$, i.e., the largest $c$ such that $(X,c\. V)$ is log canonical. We will occasionally consider also pairs of the form $(X,c_1\. V_1-c_2\.V_2)$, where $V_1$, $V_2\subset X$ are proper subschemes of $X$. The definition of (log) terminal and canonical pairs extends in an obvious way to this setting. We fix now the set-up for this section. Let $f : X \to Y$ be a smooth and proper morphism onto a smooth algebraic variety $Y$. We assume that $V\subset X$ is a pure dimensional, Cohen-Macaulay closed subscheme, such that $\dim V = \dim Y - 1$, and such that the restriction of $f$ to $V$ is finite. If $[V]$ denotes the cycle associated to $V$, then its push-forward $f_*[V]$ determines an effective Cartier divisor on $Y$. We set $\operatorname{codim}(V,X)=k$. \[thm1\] With the above notation, let $C \subset X$ be an irreducible center of non log terminality for $(X,c\. V)$, for some $c>0$. Then $f(C)$ is a center of non log terminality (even non log canonicity, if $k\geq 2$) for the pair $$\label{gen_formula} \( Y, \frac{k! \. 
c^k}{k^k} \. f_*[V] \).$$ Moreover, if $V$ is locally complete intersection (l.c.i. for short) then $f(C)$ is a center of non log terminality for the pair $$\label{lci_formula} \( Y, \frac{c^k}{k^k} \. f_*[V] \).$$ Let $k$ and $n$ be two positive integers with $n > k$, and let $R = K[x_k,\dots,x_n]$. We take $X = \P^{k-1}_R = \operatorname{Proj}R[x_0,\dots,x_{k-1}]$, $Y = \operatorname{Spec}R$, and let $f$ be the natural projection from $X$ to $Y$. For any $t>0$, let $V_t$ be the subscheme of $X$ defined by the homogeneous ideal $(x_1,\dots,x_k)^t$. Note that $\operatorname{lc}(X,V_t) = k/t$, and that if $c=k/t$, then $V_1$ is a center of non log terminality for $(X,c\. V_t)$. Since $l(\O_{V_t,V_1})=\binom{k+t-1}{k}$, we see that $$\lim_{t\to\infty}\frac{k! \. c^k/k^k}{\operatorname{lc}(Y,f_*[V_t])} =\lim_{t\to\infty}\frac{t(t+1)\ldots(t+k-1)}{t^k}=1,$$ so the bound in (\[gen\_formula\]) is sharp (at least asymptotically). To prove sharpness in the l.c.i. case, let $W_t\subset X$ be the complete intersection subscheme defined by $(x_1^t,\dots,x_k^t)$. This time $l(\O_{W_t,W_1}) = t^k$, and $\operatorname{lc}(Y,f_*[W_t]) = 1/t^k = \operatorname{lc}(X,W_t)^k/k^k$. By hypothesis, there is a proper birational morphism $\n : W \to X$, where $W$ can be chosen to be smooth, and a smooth irreducible divisor $E$ on $W$, such that $\n(E) = C$, and such that the discrepancy of $(X,c\. V)$ at $E$ is $$\label{eq1} a_E(X,c\. V) \le -1.$$ The surjection $f$ induces an inclusion of function fields $f^* : K(Y) \inj K(X)$. Let $R_E:=\O_{W,E}\subset K(X)$ be the discrete valuation ring associated to the valuation along $E$, and let $R = (f^*)^{-1}R_E$. Note that $R$ is a non-trivial discrete valuation ring. $R$ corresponds to a divisorial valuation. It is enough to show that the transcendence degree of the residue field of $R$ over the ground field is $\dim Y-1$ (see [@KM], Lemma 2.45). This follows from [@ZS], VI.6, Corollary 1. 
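Returning to the sharpness examples above: the asymptotic ratio and the exact equality in the l.c.i. case reduce to elementary arithmetic, which can be sanity-checked numerically (a sketch with an illustrative value of $k$, not part of the proof):

```python
from math import comb, factorial

k = 4  # illustrative codimension

def ratio(t):
    # bound k! c^k / k^k with c = lc(X, V_t) = k/t, compared against
    # lc(Y, f_*[V_t]) = 1 / l(O_{V_t, V_1}) = 1 / C(k+t-1, k)
    bound = factorial(k) * (k / t) ** k / k ** k
    return bound * comb(k + t - 1, k)

# ratio(t) = t(t+1)...(t+k-1) / t^k decreases to 1 from above as t -> infinity
assert ratio(10) > ratio(100) > ratio(10**6) > 1.0
assert abs(ratio(10**6) - 1.0) < 1e-4

# l.c.i. example W_t: c^k / k^k with c = k/t equals lc(Y, f_*[W_t]) = 1/t^k exactly
t = 7
assert abs((k / t) ** k / k ** k - 1.0 / t ** k) < 1e-15
```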
The lemma implies that there is a proper birational morphism $\g : Y' \to Y$ and an irreducible divisor $G$ on $Y'$ such that $R=\O_{Y',G}$. By Hironaka’s theorem, we may assume that both $Y'$ and $G$ are smooth, and moreover, that the union between $G$ and the exceptional locus of $\g$ has simple normal crossings. Since the center of $R_E$ on $X$ is $C$, we deduce that $R$ has center $f(C)$ on $Y$, so $\g(G) = f(C)$. Consider the fibered product $X' = Y' \times_Y X$. We may clearly assume that $\n$ factors through the natural map $\f : X' \to X$. Therefore we have the following commutative diagram: $$\xymatrix{ W \ar[r]^{\e} & X' \ar[d]_g \ar[r]^{\f} & X \ar[d]^f \\ &Y' \ar[r]^{\g} & Y, }$$ where $\phi\circ\eta=\nu$. Note that $X'$ is a smooth variety, $g$ is a smooth, proper morphism, and $\e$ and $\f$ are proper, birational morphisms. Let $V' = \f^{-1}(V)$ be the scheme theoretic inverse image of $V$ in $X'$, i.e., the subscheme of $X'$ defined by the ideal sheaf $I_V \. \O_{X'}$. \[lem1\] $V'$ is pure dimensional, $\operatorname{codim}(V',X')=k$, and $\f^*[V]$ is the class of $[V']$. Moreover, if $V$ is l.c.i., then so is $V'$. Note that both $\gamma$ and $\phi$ are l.c.i. morphisms, because they are morphisms between smooth varieties. The pull-back in the statement is the pull-back by such a morphism (see [@Fulton], Section 6.6). Recall how this is defined. We factor $\gamma$ as $\gamma_1\circ\gamma_2$, where $\gamma_1 : Y'\times Y\longrightarrow Y$ is the projection, and $\gamma_2 : Y'\hookrightarrow Y'\times Y$ is the graph of $\gamma$. By pulling-back, we get a corresponding decomposition $\phi=\phi_1\circ\phi_2$, with $\phi_1$ smooth, and $\phi_2 : X'\hookrightarrow Y'\times X$ a regular embedding of codimension $\dim Y'$. Then $\phi^*[V]=\phi_2^!([Y'\times V])$. Since $f|_V$ is finite and $V' = Y'\times_Y V$, $g|_{V'}$ is also finite. Moreover, since $g(V')$ is a proper subset of $Y'$, we see that $\dim V' \le \dim Y'-1$. 
On the other hand, $V'$ is locally cut in $Y'\times V$ by $\dim\,Y'$ equations, so that every irreducible component of $V'$ has dimension at least $\dim\,V$. Therefore $V'$ is pure dimensional, and $\dim\,V'=\dim\,V$. Since $Y'\times V$ is Cohen-Macaulay, this also implies that $\phi_2^!([Y'\times V])$ is equal to the class of $[V']$, by Proposition 7.1 in [@Fulton]. This proves the first assertion. Moreover, if $V$ is l.c.i., then it is locally defined in $X$ by $k$ equations. The same is true for $V'$, hence $V'$ is l.c.i., too. We will use the following notation for multiplicities. Suppose that $W$ is an irreducible subvariety of a variety $Z$. Then the multiplicity of $Z$ along $W$ is denoted by $e_WZ$ (we refer to [@Fulton], Section 4.3, for definition and basic properties). If $\alpha=\sum_in_i[T_i]$ is a pure dimensional cycle on $Z$, then $e_W\alpha:=\sum_in_ie_WT_i$ (if $W\not\subseteq T_i$, then we put $e_WT_i=0$). Note that if $W$ is a prime divisor, and if $D$ is an effective Cartier divisor on $Z$, then we have $e_W[D]={\rm ord}_W(D)$, where $[D]$ is the cycle associated to $D$, and ${\rm ord}_W(D)$ is the coefficient of $W$ in $[D]$. As we work on smooth varieties, from now on we will identify $D$ with $[D]$. Let $F = \e(E)$. Note that by construction, we have $g(F)=G$. Since $F\subseteq V'$, and $g|_{V'}$ is finite, and $\dim\,G=\dim\,V'$, it follows that $F$ is an irreducible component of $V'$, hence $\operatorname{codim}(F, X')=k$. We set $a = e_F(K_{X'/X})$. To simplify the statements, we put $$\d = \begin{cases} 1 &\text{if $V$ is a l.c.i.,} \\ k! &\text{otherwise.} \end{cases}$$ \[lem2\] We have $$\operatorname{ord}_G(\g^* f_*[V]) \geq \frac{(a + 1)k^k}{\d c^k},$$ and the inequality is strict in the case $\delta=k!$, if $k\geq 2$. Since $\f$ and $\g$ are l.c.i. morphisms of the same relative dimension, it follows from [@Fulton], Example 17.4.1, and Lemma \[lem1\] that $g_*[V']$ and $\g^*f_*[V]$ are linearly equivalent, as divisors on $Y'$. 
As the two divisors are equal outside the exceptional locus of $\g$, we deduce from the Negativity Lemma (see [@KM], Lemma 3.39) that also their $\g$-exceptional components must coincide. This gives $g_*[V'] = \g^*f_* [V]$. In particular, ${\rm ord}_G(\g^*f_*[V])$ is greater than or equal to the coefficient of $F$ in $[V']$. Lemma \[lem1\] implies $$\operatorname{ord}_G(\g^* f_*[V]) \geq l(\O_{V',F}),$$ so that it is enough to show that $$\label{lem2-eq} l(\O_{V',F}) \geq \frac{(a + 1)k^k}{\d c^k},$$ and that the inequality is strict in the case $\delta=k!$, if $k\geq 2$. By replacing $W$ with a higher model, we may clearly assume that $\n^{-1}(V)$ is an effective divisor on $W$. If $I_V\subseteq\O_X$ is the ideal defining $V$, then we put $\operatorname{ord}_E(I_V):=\operatorname{ord}_E\n^{-1}(V)$. It follows from (\[eq1\]) that we have $$-1\geq \operatorname{ord}_E(K_{W/X}) - c\.\operatorname{ord}_E(I_V) = \operatorname{ord}_E(K_{W/X'}) - (c\.\operatorname{ord}_E(I_{V'})-\operatorname{ord}_E(K_{X'/X})).$$ Therefore $F$ is a center of non log terminality for the pair $(X',c\. V' - K_{X'/X})$. Since $g(F)=G$ is a divisor on $Y'$, it follows that $F$ cannot be contained in the intersection of two distinct $\phi$-exceptional divisors. Hence the support of $K_{X'/X}$ is smooth at the generic point of $F$. Then (\[lem2-eq\]) follows from Theorem \[l(a)-e(a)\] below (note that the length of a complete intersection ideal coincides with its Samuel multiplicity). We continue the proof of Theorem \[thm1\]. Note that $\operatorname{ord}_G K_{Y'/Y} \leq e_F(g^* K_{Y'/Y})$. Since $K_{X'/X} = g^* K_{Y'/Y}$ (see [@Hartshorne], Proposition II 8.10), we deduce $$\operatorname{ord}_G K_{Y'/Y} \leq a.$$ In conjunction with Lemma \[lem2\], this gives $$a_G\left(Y,\frac{\delta c^k}{k^k}f_*[V]\right)= \operatorname{ord}_G \( K_{Y'/Y} - \frac{\d c^k}{k^k}\. \g^* f_*[V] \) \leq -1.$$ Moreover, this inequality is strict in the case when $\d=k!$, if $k\geq 2$.
This completes the proof of Theorem \[thm1\]. We refer to [@Pu1] for a result on the canonical threshold of complete intersection subschemes of codimension 2, via generic projection. Multiplicities of fractional ideals =================================== In this section we extend some of the results of [@DEM], as needed in the proof of Theorem \[thm1\]. More precisely, we consider the following set-up. Let $X$ be a smooth variety, $V\subset X$ a closed subscheme, and let $Z$ be an irreducible component of $V$. We denote by $n$ the codimension of $Z$ in $X$, and by $\a \subset\O_{X,Z}$ the image of the ideal defining $V$. Let $H \subset X$ be a prime divisor containing $Z$, such that $H$ is smooth at the generic point of $Z$. We consider the pair $$(X, V-b\cdot H),$$ for a given $b\in{\mathbb Q}_+$. \[l(a)-e(a)\] With the above notation, suppose that for some $\mu\in{\mathbb Q}_+^*$, $(X,\frac{1}{\mu}(V-b\cdot H))$ is not log terminal at the generic point of $Z$. Then $$\label{l(a)} l(\O_{X,Z}/\a)\geq\frac{n^n \m^{n-1}(\m + b)}{n!},$$ and the inequality is strict if $n\geq 2$. Moreover, if $e(\a)$ denotes the Samuel multiplicity of $\O_{X,Z}$ along $\a$, then $$\label{e(a)} e(\a) \ge n^n \m^{n-1}(\m+b).$$ For $n=2$, inequality (\[e(a)\]) gives a result of Corti from [@Co2]. On the other hand, if $b=0$, then the statement reduces to Theorems 1.1 and 1.2 in [@DEM]. We see that (\[l(a)\]) implies (\[e(a)\]) as follows. If we apply the first formula to the subscheme $V_t\subseteq X$ defined by $\a^t$, to $\mu_t=\mu t$, and to $b_t=bt$, we get $$l(\O_{X,Z}/\a^t)\geq\frac{n^n\mu^{n-1}(\mu+b)}{n!}t^n.$$ Dividing by $t^n$ and passing to the limit as $t \to \infty$ gives (\[e(a)\]). In order to prove (\[l(a)\]), we proceed as in [@DEM]. Passing to the completion, we obtain an ideal $\^\a$ in $\^ \O_{X,Z}$. We identify $\^ \O_{X,Z}$ with $K[[x_1,\dots,x_n]]$ via a fixed isomorphism, where $K$ is the residue field of $\O_{X,Z}$. 
Moreover, we may choose the local coordinates so that the image of an equation $h$ defining $H$ in $\O_{X,Z}$ is $x_n$. Since $\^\a$ is zero dimensional, we can find an ideal $\bb \subset R = K[x_1,\dots,x_n]$, which defines a scheme supported at the origin, and such that $\^\bb = \^\a$. If $V'$, $H'\subset{\mathbb A}^n$ are defined by $\bb$ and $x_n$, respectively, then $({\mathbb A}^n,\frac{1}{\mu}(V'-b\. H'))$ is not log terminal at the origin. We write $\mu = r/s$, for some $r,s \in \N$, and we may clearly assume that $sb\in\N$. Consider the ring $S = K[x_1,\dots,x_{n-1},y]$, and the inclusion $R\subseteq S$ which takes $x_n$ to $y^r$. This determines a cyclic covering of degree $r$ $$M := \operatorname{Spec}S \to N := {\mathbb A}^n=\operatorname{Spec}R,$$ with ramification divisor defined by $(y^{r-1})$. For any ideal $\cc\subset R$, we put $\tilde\cc:=\cc S$. If $W$ is the scheme defined by $\cc$, then we denote by $\widetilde{W}$ the scheme defined by $\tilde\cc$. In particular, if $H''\subset M$ is defined by $(y)$, then $\widetilde{H'}=rH''$. It follows from [@ein1], Proposition 2.8 (see also [@Laz], Section 9.5.E) that $(N,\frac{1}{\mu}(V'-b\.H'))$ is not log terminal at the origin in $N$ if and only if $(M,\frac{1}{\mu}\cdot\widetilde{V'}-(sb+r-1)H'')$ is not log terminal at the origin in $M$. We write the rest of the proof in the language of multiplier ideals, for which we refer to [@Laz]. We use the formal exponential notation for these ideals. If $\tilde\bb$ is the ideal defining $\widetilde{V'}$, then the above non log terminality condition on $M$ can be interpreted as saying that $$\label{J} y^{bs+r-1} \not \in \J(\tilde \bb^{1/\mu}).$$ We choose a monomial order in $S$, with the property that $$x_1 > \dots > x_{n-1} > y^{bs+r-1}.$$ This induces flat deformations to monomial ideals (see [@Eisenbud], Chapter 15). 
For an ideal $\dd \subseteq S$, we write the degeneration as $\dd_t \to \dd_0$, where $\dd_t \cong \dd$ for $t \ne 0$ and $\dd_0 =: \operatorname{in}(\dd)$ is a monomial ideal. We claim that $$\label{in(J)} y^{bs+r-1} \not \in \operatorname{in}(\J(\tilde \bb^{1/\mu})).$$ Indeed, suppose that $y^{bs+r-1} \in \operatorname{in}(\J(\tilde \bb^{1/\mu}))$. Then we can find an element $f \in \J(\tilde \bb^{1/\mu})$ such that $\operatorname{in}(f) = y^{bs+r-1}$. Because of the particular monomial order we have chosen, $f$ must be a polynomial in $y$ of degree $bs+r-1$. On the other hand, $\J(\tilde \bb^{1/\mu})$ defines a scheme which is supported at the origin (or empty), since so does $\tilde \bb$. We deduce that $y^i\in\J(\tilde\bb^{1/\mu})$, for some $i\leq bs+r-1$, which contradicts (\[J\]). \[in(J(c))vJ(in(c))\] For every ideal $\dd \subseteq S$, and every $c\in{\mathbb Q}_+^*$, we have $$\operatorname{in}(\J(\dd^c)) \supseteq \J(\operatorname{in}(\dd)^c).$$ Consider the family $\pi : \MM = \A^n \times T \to T$, with $T = \A^1$, and the ideal $\DDD\subset\O_{\MM}$ corresponding to the degeneration of $\dd$ described above. If $U$ is the complement of the origin in $T$, then there is an isomorphism $$(\pi^{-1}(U), \DDD\vert_{\pi^{-1}(U)})\simeq ({\mathbb A}^n\times U, {\rm pr}_1^{-1}\dd).$$ Via this isomorphism we have $\J(\pi^{-1}(U),\DDD^c) \simeq {\rm pr}_1^{-1}(\J(\dd^c))$. Since the family degenerating to the initial ideal is flat, we deduce easily that $$\J(\MM,\DDD^c)\cdot \O_{\pi^{-1}(0)}\subseteq\operatorname{in}(\J(\dd^c)).$$ On the other hand, the Restriction Theorem (see [@Laz]) gives $$\J(\operatorname{in}(\dd)^c)=\J((\DDD\vert_{\pi^{-1}(0)})^c)\subseteq\J(\MM,\DDD^c)\cdot \O_{\pi^{-1}(0)}.$$ If we put together the above inclusions, we get the assertion of the lemma. Note that the monomial order on $S$ induces a monomial order on $R$, and that $\widetilde{\operatorname{in}(\bb)}=\operatorname{in}(\tilde\bb)$. 
Indeed, the inclusion $\widetilde{\operatorname{in}(\bb)}\subseteq\operatorname{in}(\tilde\bb)$ is obvious, and the corresponding subschemes have the same length $r\cdot l(R/\bb)$. On the other hand, Lemma \[in(J(c))vJ(in(c))\] and (\[in(J)\]) give $$y^{bs+r-1} \not \in \J(\operatorname{in}(\tilde \bb)^{1/\mu}).$$ Applying Proposition 2.8 in [@ein1] again, in the other direction, takes us back to $R$: we deduce that $(N, \frac{1}{\mu}(W-b\cdot H'))$ is not log terminal at the origin, where $W\subset N$ is defined by $\operatorname{in}(\bb)$. Since $l(\O_{X,Z}/\a)=l(R/\bb)=l(R/\operatorname{in}(\bb))$, we have reduced the proof of (\[l(a)\]) to the case when $\a$ is a monomial ideal. In this case, we have in fact a stronger statement, which we prove in the lemma below; therefore the proof of Theorem \[l(a)-e(a)\] is complete. The following is the natural generalization of Lemma 2.1 in [@DEM]. \[monomial\] Let $\a$ be a zero dimensional monomial ideal in the ring $R = K[x_1,\dots,x_n]$, defining a scheme $V$. Let $H_i$ be the hyperplane defined by $x_i=0$. We consider $\mu\in{\mathbb Q}_+^*$ and $b_i\in{\mathbb Q}$, such that $\mu\geq\max_i\{b_i\}$. If the pair $({\mathbb A}^n, \frac{1}{\mu}(V+\sum_ib_iH_i))$ is not log terminal, then $$l(R/\a)\geq\frac{n^n}{n!} \. \prod_{i=1}^n (\m - b_i),$$ and the inequality is strict if $n\geq 2$. We use the result in [@ELM], which gives the condition for a monomial pair, with possibly negative coefficients, to be log terminal. This generalizes the formula for the log canonical threshold of a monomial ideal from [@Ho].
It follows from [@ELM] that $(X,\frac{1}{\mu}(V+\sum_ib_iH_i))$ is not log terminal if and only if there is a facet of the Newton polytope associated to $\a$ such that, if $\sum_i u_i/a_i = 1$ is the equation of the hyperplane supporting it, then $$\sum_{i=1}^n \frac{\m- b_i}{a_i}\leq 1.$$ Applying the inequality between the arithmetic mean and the geometric mean of the set of nonnegative numbers $\{(\m - b_i)/a_i\}_i$, we deduce $$\prod_i a_i \geq n^n \. \prod_i (\m - b_i).$$ We conclude using the fact that $n! \. l(R/\a)\geq \prod_i a_i$, and the inequality is strict if $n\geq 2$ (see, for instance, Lemma 1.3 in [@DEM]). Log canonical thresholds of affine cones ======================================== In this section we give a lower bound for the log canonical threshold of a subscheme $V\subset\A^n$, cut out by homogeneous equations of the same degree. The bound involves the dimension of the non log terminal locus of $(\A^n,c\cdot V)$, where $c=\operatorname{lc}(\A^n,V)$. Moreover, we characterize the case when we have equality. In the particular case when $V$ is the affine cone over a projective hypersurface with isolated singularities, this proves a conjecture of Cheltsov and Park from [@CP]. The main ingredient we use for this bound is a formula for the log canonical threshold in terms of jet schemes, from [@Mu2]. Recall that for an arbitrary scheme $W$, of finite type over the ground field $k$, the $m$th jet scheme $W_m$ is again a scheme of finite type over $k$ characterized by $${\rm Hom}({\rm Spec}\,A, W_m)\simeq{\rm Hom}({\rm Spec}\,A[t]/(t^{m+1}), W),$$ for every $k$-algebra $A$. Note that $W_m(k)= {\rm Hom}({\rm Spec}\,k[t]/(t^{m+1}), W),$ and in fact, we will be interested only in the dimensions of these spaces. For the basic properties of the jet schemes, we refer to [@Mu1] and [@Mu2]. 
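To make the functorial definition concrete, we recall the standard coordinate description of jet schemes for affine schemes (a routine computation, included only as an illustration; the jet coordinates $x_i^{(l)}$ are our notation):

```latex
\[
W=V(f_1,\dots,f_s)\subset\A^n,\qquad
x_i(t)=\sum_{l=0}^{m} x_i^{(l)}t^l,
\]
\[
W_m=\Big\{(x_i^{(l)})\in\A^{n(m+1)} \;:\;
f_j\big(x_1(t),\dots,x_n(t)\big)\equiv 0 \pmod{t^{m+1}},\ 1\le j\le s\Big\}.
\]
```

In particular, $W_m$ is cut out in $\A^{n(m+1)}$ by $s(m+1)$ equations (the coefficients of $t^0,\dots,t^m$ in each $f_j(x(t))$); for a hypersurface ($s=1$) every irreducible component of $W_m$ therefore has dimension at least $(m+1)(n-1)$.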
\[ingred\][([@Mu2], 3.4)]{} If $X$ is a smooth, connected variety of dimension $n$, and if $V\subset X$ is a subscheme, then the log canonical threshold of $(X,V)$ is given by $$\operatorname{lc}(X,V)=n-\sup_{m\in\N}\frac{\dim\,V_m}{m+1}.$$ Moreover, there is $p\in\N$, depending on the numerical data given by a log resolution of $(X,V)$, such that $\operatorname{lc}(X,V)=n-(\dim\,V_m)/(m+1)$ whenever $p\mid (m+1)$. For every $W$ and every $m\geq 1$, there are canonical projections $\phi^W_m:W_m\longrightarrow W_{m-1}$ induced by the truncation homomorphisms $k[t]/(t^{m+1})\longrightarrow k[t]/(t^m)$. By composing these projections we get morphisms $\pi^W_m:W_m \longrightarrow W$. When there is no danger of confusion, we simply write $\phi_m$ and $\pi_m$. If $W$ is a smooth, connected variety, then $W_m$ is smooth, connected, and $\dim\,W_m=(m+1)\dim\,W$, for all $m$. It follows from the definition that taking jet schemes commutes with open immersions. In particular, if $W$ has pure dimension $n$, then $\pi_m^{-1}(W_{\rm reg})$ is smooth, of pure dimension $(m+1)n$.
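As a sanity check of the formula in Theorem \[ingred\] (this example is ours, not part of the original argument): if $V=D$ is a smooth divisor in $X$, then each $D_m$ is smooth of dimension $(m+1)(n-1)$, so

```latex
\[
\operatorname{lc}(X,D)
= n-\sup_{m\in\N}\frac{\dim\,D_m}{m+1}
= n-\sup_{m\in\N}\frac{(m+1)(n-1)}{m+1}
= n-(n-1)=1,
\]
```

in agreement with the fact that $(X,c\cdot D)$ is log terminal precisely for $c<1$ when $D$ is smooth.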
It follows from Theorem \[ingred\] that if $W$ is an irreducible component of $V_m$ that computes the log canonical threshold of $(X, V)$ then $\pi_m(W)$ is contained in the non log terminal locus of $(X,c\cdot V)$ (see also [@ELM]). For future reference, we record here two lemmas. For $x \in \R$, we denote by $[x]$ the largest integer $p$ such that $p \le x$. \[fiber\][([@Mu1], 3.7)]{} If $X$ is a smooth, connected variety of dimension $n$, $D\subset X$ is an effective divisor, and $x\in D$ is a point with $e_xD=q$, then $$\dim(\pi^D_m)^{-1}(x)\leq mn-[m/q],$$ for every $m\in\N$. In fact, the only assertion we will need from Lemma \[fiber\] is that $\dim\,(\pi^D_m)^{-1}(x)\leq mn-1$, if $m\geq q$, which follows easily from the equations describing the jet schemes (see [@Mu1]). \[semicont\][([@Mu2] 2.3)]{} Let $\Phi : {\mathcal W}\longrightarrow S$ be a family of schemes, and let us denote the fiber $\Phi^{-1}(s)$ by ${\mathcal W}_s$. If $\tau:S\longrightarrow{\mathcal W}$ is a section of $\Phi$, then the function $$f(s)=\dim(\pi_m^{{\mathcal W}_s})^{-1}(\tau(s))$$ is upper semi-continuous on the set of closed points of $S$, for every $m\in\N$. The following are the main results in this section. \[lower\_bound1\] Let $V\subset\A^n$ be a subscheme whose ideal is generated by homogeneous polynomials of degree $d$. Let $c=\operatorname{lc}(\A^n,V)$, and let $Z$ be the non log terminal locus of $(\A^n, c\cdot V)$. If $e=\operatorname{codim}(Z,\A^n)$, then $c\geq e/d$. \[equality\_case1\] With the notation in the previous theorem, $c=e/d$ if and only if $V$ satisfies the following three properties: 1. $Z = L$ is a linear subspace of codimension $e$. 2. $V$ is the pull back of a closed subscheme $V'\subset\A^n/L$, which is defined by homogeneous polynomials of degree $d$ and such that $\operatorname{lc}(\A^n/L, V') =e/d$. 3. The non log terminal locus of $(\A^n/L, e/d\cdot V')$ is just the origin. 
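Before the proofs, we illustrate the two statements with a standard example (ours, included for orientation): let $V\subset\A^n$ be defined by the single Fermat form $F=x_1^d+\dots+x_n^d$ with $d\geq n$. Then

```latex
\[
c=\operatorname{lc}(\A^n,V)=\frac{n}{d},\qquad
Z=\{0\},\qquad e=\operatorname{codim}(Z,\A^n)=n,
\]
```

so the bound $c\geq e/d$ of Theorem \[lower\_bound1\] is attained, and conditions (1)-(3) of Theorem \[equality\_case1\] hold trivially with $L=\{0\}$ and $V'=V$. By the theorem, any $V$ attaining equality with $e<n$ is the pull back of such a configuration from a quotient $\A^n/L$.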
If $\pi_m:V_m\longrightarrow V$ is the canonical projection, then we have an isomorphism $$\label{isom} \pi_m^{-1}(0)\simeq V_{m-d}\times\A^{n(d-1)},$$ for every $m\geq d-1$ (we put $V_{-1}=\{0\}$). Indeed, for a $k$-algebra $A$, an $A$-valued point of $\pi_m^{-1}(0)$ is a ring homomorphism $$\phi:k[X_1,\ldots,X_n]/(F_1,\ldots,F_s)\longrightarrow A[t]/(t^{m+1}),$$ such that $\phi(X_i)\in(t)$ for all $i$. Here $F_1,\ldots, F_s$ are homogeneous equations of degree $d$, defining $V$. Therefore we can write $\phi(X_i)=tf_i$, and $\phi$ is a homomorphism if and only if the classes of $f_i$ in $A[t]/(t^{m+1-d})$ define an $A$-valued point of $V_{m-d}$. But $\phi$ is uniquely determined by the classes of $f_i$ in $A[t]/(t^m)$, so this proves the isomorphism in equation (\[isom\]). By Theorem \[ingred\], we can find $p$ such that $$\dim\,V_{pd-1}=pd(n-c).$$ Let $W$ be an irreducible component of $V_{pd-1}$ computing $\operatorname{lc}(X,V)$, so $\dim\,W=pd(n-c)$ and $\pi_{pd-1}(W) \subset Z$. By our hypothesis, $\dim\pi_{pd-1}(W)\leq n-e$. Therefore Lemma \[semicont\] gives $$\label{inequality1} pd(n-c)=\dim\,W\leq\dim\pi_{pd-1}^{-1}(0)+n-e= \dim V_{(p-1)d-1}+(d-1)n+n-e,$$ where the last equality follows from (\[isom\]). Another application of Theorem \[ingred\] gives $$\label{inequality2} \dim\,V_{(p-1)d-1}\leq (p-1)d(n-c).$$ Combining this with (\[inequality1\]), we get $pd(n-c)\leq (p-1)d(n-c)+dn-e$, that is, $d(n-c)\leq dn-e$, and hence $c\geq e/d$. We use the notation in the above proof. Since $c=e/d$, we see that in both equations (\[inequality1\]) and (\[inequality2\]) we have, in fact, equalities. The equality in (\[inequality2\]) shows that $\dim V_{(p-1)d-1}=(p-1)d(n-c)$, so we may run the same argument with $p$ replaced by $p-1$. Continuing in this way, we see that we may suppose that $p=1$. In this case, the equality in (\[inequality1\]) shows that for some irreducible component $W$ of $V_{d-1}$, with $\dim W=dn-e$, we have $\dim \pi_{d-1}(W)=n-e$. It follows that if $Z_1:=\pi_{d-1}(W)$, then $Z_1$ is an irreducible component of $Z$.
Fix $x\in Z_1$. If ${\rm mult}_xF\leq d-1$, for some degree $d$ polynomial $F$ in the ideal of $V$, then Lemma \[fiber\] would give $\dim\,\pi_{d-1}^{-1}(x)\leq (d-1)n-1$. This would imply $\dim\,W\leq n-e+(d-1)n-1$, a contradiction. Therefore we must have ${\rm mult}_xF\geq d$, for every such $F$. Recall that we have degree $d$ generators of the ideal of $V$, denoted by $F_1,\ldots,F_s$. Let $L_i = \{x \in \A^n | {\rm mult}_xF_i =d\}$, for $i\leq s$. By the Bézout theorem, $L_i$ is a linear space. If $L= \bigcap_{i=1}^s L_i$, then $Z_1 \subset L$. On the other hand, by blowing-up along $L$, we see that $L$ is contained in the non log terminal locus of $(\A^n, c\cdot V)$. Therefore $Z_1 =L$. Let $z_1, ..., z_e$ be the linear forms defining $L$. Then each $F_i$ is a homogeneous polynomial of degree $d$ in $z_1, ..., z_e$. This shows that $V$ is the pull back of a closed subscheme $V'\subset\A^n/L$, defined by $F_1,..., F_s$. Since the projection map $\pi: \A^n \longrightarrow \A^n/L$ is smooth and surjective, we see that $\operatorname{lc}(\A^n/L, V') = \operatorname{lc}(\A^n, V)$ and that the non log terminal locus of $(\A^n, \frac{e}{d}\cdot V)$ is just the pull-back of the corresponding locus for the pair $(\A^n/L, e/d\cdot V')$. Note that the non log terminal locus of $(\A^n/L, e/d\cdot V')$ is defined by a homogeneous ideal. By dimension considerations, we conclude that this locus consists just of the origin, so $Z= L$. Conversely, if $V$ is the pull back of a closed subscheme from $\A^n/L$ as described in the theorem, one checks that $\operatorname{lc}(\A^n, V) = e/d$ and that the corresponding non log terminal locus is just $L$. Let $V'$ be a closed subscheme of $\P^{n-1}$ defined by degree $d$ homogeneous polynomials $F_1,\ldots, F_s$, and let $V$ be the closed subscheme in $\A^n$ defined by the same set of polynomials. Let $c =\operatorname{lc}(\P^{n-1}, V')$, and let $Z'$ be the non log terminal locus of $(\P^{n-1}, c\cdot V')$.
Suppose that the codimension of $Z'$ in $\P^{n-1}$ is $e$. \[proj\_case\] With the above notation, $\operatorname{lc}(\P^{n-1}, V') \ge e/d$. Moreover, if we have equality, then $V'$ is the cone over a scheme in some $\P^{e-1}$. Note that $$\operatorname{lc}(\P^{n-1}, V') = \operatorname{lc}(\A^n-\{0\}, V-\{0\}) \ge \operatorname{lc}(\A^n, V).$$ Now the first assertion follows from Theorem \[lower\_bound1\]. If $\operatorname{lc}(\P^{n-1}, V') = e/d$, then $\operatorname{lc}(\A^n, V) = e/d$ and the non log terminal locus of $(\A^n, \frac{e}{d}\cdot V)$ is a linear space $L$ of codimension $e$. If $z_1,..., z_e$ are the linear forms defining $L$, then each $F_i$ is a homogeneous polynomial of degree $d$ in $z_1, ..., z_e$. Therefore $V'$ is the cone with center $L$ over the closed subscheme of $\P^{e-1}$ defined by $F_1,\ldots,F_s$. In [@CP], Cheltsov and Park studied the log canonical threshold of singular hyperplane sections of smooth, projective hypersurfaces. If $X\subset{\mathbb P}^n$ is a smooth hypersurface of degree $d$, and if ${V} =X\cap H$, for a hyperplane $H$, then they have shown that $$\label{ineq_CP} \operatorname{lc}(X,{V})\geq\min\{(n-1)/d,1\}.$$ It follows from Theorem \[ingred\] that $\operatorname{lc}(X, V) = \operatorname{lc}(\P^{n-1}, V)$. As it is well known that ${V}$ has isolated singularities, if we apply the first assertion in Corollary \[proj\_case\], then we recover the result in [@CP]. Cheltsov and Park have conjectured in their setting that if $d\geq n$, then equality holds in (\[ineq\_CP\]) if and only if ${V}$ is a cone. They have shown that their conjecture would follow from the Log Minimal Model Program. The second assertion in Corollary \[proj\_case\] proves, in particular, their conjecture. Application to birational rigidity ================================== Using the bounds on log canonical thresholds from the previous sections, we prove now the birational rigidity of certain Fano hypersurfaces. 
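The deduction of (\[ineq\_CP\]) from Corollary \[proj\_case\] amounts to the following elementary bookkeeping (our summary, under the standing assumption that ${V}$ has isolated singularities; note that at a smooth point of the divisor ${V}$, the pair $(\P^{n-1},c\cdot{V})$ is log terminal for every $c<1$):

```latex
\[
\text{if } c:=\operatorname{lc}(\P^{n-1},V)<1, \text{ then }
Z'\subseteq V_{\rm sing} \text{ is finite, so }
e=\operatorname{codim}(Z',\P^{n-1})=n-1,
\]
\[
\text{whence } c\geq\frac{e}{d}=\frac{n-1}{d},
\qquad\text{i.e.}\qquad
\operatorname{lc}(X,V)\geq\min\Big\{\frac{n-1}{d},\,1\Big\}.
\]
```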
We recall that a Mori fiber space $X$ is called [*birationally superrigid*]{} if any birational map $\f : X \rat X'$ to another Mori fiber space $X'$ is an isomorphism. For the definition of Mori fiber space and for another notion of rigidity, we refer to [@Co2]. Note that Fano manifolds having Néron-Severi group of rank 1 are trivially Mori fiber spaces. Birational superrigidity is a very strong condition: it implies that $X$ is not rational, and that $\operatorname{Bir}(X) = \operatorname{Aut}(X)$. Note that if $X$ is a smooth hypersurface of degree $N$ in $\P^N$ ($N \ge 4$), then $X$ has no nonzero vector fields. Therefore if $X$ is birationally superrigid, then the birational invariant $\operatorname{Bir}(X)$ is a finite group. The following theorem is the main result of this section. \[X\_N\] For any integer $4 \le N \le 12$, every smooth hypersurface $X = X_N \subset \P^N$ of degree $N$ is birationally superrigid. The case $N=4$ of the above theorem is due to Iskovskikh and Manin (see [@IM]). The case $N=5$ was proven by Pukhlikov in [@Pu2], while the cases $N=6,7,8$ were established by Cheltsov in [@Ch2]. Birational superrigidity of smooth hypersurfaces of degree $N$ in $\P^N$ (for $N \ge 5$) was conjectured by Pukhlikov in [@Pu5], where the result is established under a suitable condition of regularity on the equation defining the hypersurface. We remark that there is an attempt due to Pukhlikov in [@Pu1] to prove the general case (for $N \ge 6$). Despite a gap in the proof (see the remark below), we believe that the method therein could eventually lead to the general result. In fact, the proof given below for Theorem \[X\_N\] follows his method, and our contribution is mainly in simplifying and solidifying his argument. \[gap\] The following gives a counterexample to Corollary 2 in [@Pu1].
Let $Q\subset{\mathbb P}^4$ be a cone over a twisted cubic, and let $\pi_a: Q\longrightarrow R=\pi_a(Q)$ be the projection from an arbitrary point $a\in{\mathbb P}^4\setminus Q$; note that $R$ is the cone over a singular plane cubic. If $p$ is the vertex of $Q$, then the restriction of $\pi_a$ to any punctured neighbourhood of $p$ in $Q$ cannot preserve multiplicities, as $q=\pi_a(p)$ lies on a one dimensional component of the singular locus of $R$. Before proving the above theorem, we recall the following result, due to Pukhlikov: \[pu1\][([@Pu1], Proposition 5)]{} Let $X \subset \P^N$ be a smooth hypersurface, and let $Z$ be an effective cycle on $X$, of pure codimension $k < \frac 12 \dim X$. If $m \in \N$ is such that $Z \equiv m \.c_1(\O_X(1))^k$, then $\dim \{ x \in Z \mid e_xZ > m \} < k$. \[pu1\_rmk\] Because we have assumed $k<\frac 12 \dim X$, the existence of $m$ as in the proposition follows from the Lefschetz theorem. One can check that the proof of Proposition \[pu1\] extends to the case $k = \frac 12 \dim X$, if we assume that such $m$ exists. Note also that the statement is trivially true if $k > \frac 12 \dim X$. We need first a few basic properties which allow us to control multiplicities when restricting to general hyperplane sections, and when projecting to lower dimensional linear subspaces. The following proposition must be well known, but we include a proof for the convenience of the readers. We learned this proof, which simplifies our original arguments, from Steve Kleiman. \[int\_mult\] Let $Z\subset\P^n$ be an irreducible projective variety. If $H \subset Z$ is a general hyperplane section, then $e_pH = e_pZ$ for every $p \in H$. As observed by Whitney (e.g., see [@Kl], page 219), at any point $p \in Z$, the fiber over $p$ of the conormal variety of $Z$, viewed as a linear subspace of $(\P^n)^*$, contains the dual variety of every component of the embedded projective tangent cone $C_pZ$ of $Z$ at $p$.
A hyperplane section $H$ of $Z$ satisfies $e_pH = e_pZ$ if the hyperplane meets $C_pZ$ properly. Therefore, this equality holds for every point $p$ in $H$ whenever $H$ is cut out by a hyperplane not in the dual variety of $Z$. In the next two propositions, we consider a (possibly reducible) subvariety $Z \subset \P^{n+s}$, of pure dimension $n-1$, for some $n \ge 2$ and $s\geq 1$, and take a general linear projection $\p : \P^{n+s} \setminus \LL \to \P^n$. Here $\LL$ denotes the center of projection, that is an $(s-1)$ dimensional linear space. We put $T = \p(Z)$ and $g = \p|_Z : Z \to T$. It is easy to see that since $\LL$ is general, $g$ is a finite birational map. For convenience, we put $\dim(\emptyset)=-1$. \[proj\_mult1\] With the above notation, consider the set $$\D = \Big\{q \in T \mid e_q T > \sum_{p \in g^{-1}(q)} e_p Z \Big\}.$$ If the projection is chosen with suitable generality, then $\operatorname{codim}(\D,{\mathbb P}^n) \ge 3$. Note that $e_qT \ge \sum e_pZ$ for every $q \in T$, the sum being taken over all points $p$ over $q$. Moreover, for a generic projection, every irreducible component of $Z$ is mapped to a distinct component of $T$. Therefore, by the linearity of the multiplicity, we may assume that $Z$ is irreducible. Let $\D' \subset T$ be the set of points $q$, such that for some $p$ over $q$, the intersection of the $s$ dimensional linear space $\ov{\LL q}$ with the embedded projective tangent cone $C_pZ$ of $Z$ at $p$, is at least one dimensional. We claim that $\operatorname{codim}(\D',{\mathbb P}^n) \geq 3$. Indeed, it follows from the theorem on generic flatness that there is a stratification $Z=Z_1 \sqcup \dots \sqcup Z_t$ by locally closed subsets such that, for every $1 \le j \le t$, the incidence set $$I_j = \{ (p,x) \in Z_j\times\P^{n+s} \mid x \in C_pZ \}$$ is a (possibly reducible) quasi-projective variety of dimension no more than $2 \dim Z= 2n-2$. 
Let $\operatorname{pr}_1$ and $\operatorname{pr}_2$ denote the projections of $I_j$ to the first and to the second factor, respectively. It is clear that the set of those $y\in{\mathbb P}^{n+s}$ with $\dim \operatorname{pr}_2^{-1}(y)=\tau$ has dimension at most $\max\{2n-2-\tau,-1\}$, for every $\tau\in{\mathbb N}$. Since $\LL$ is a general linear subspace of dimension $s-1$, it intersects a given $d$ dimensional closed subset in a set of dimension $\max\{d-n-1,-1\}$. Hence $\dim \operatorname{pr}_2^{-1}(\LL)\leq n-3$, and therefore $\dim(\operatorname{pr}_1(\operatorname{pr}_2^{-1}(\LL))) \le n-3$. As this is true for every $j$, we deduce $\operatorname{codim}(\D',{\mathbb P}^n)\geq 3$. Thus, in order to prove the proposition, it is enough to show that $\D \subseteq \D'$. For a given point $p \in Z$, let $L_p \subset \P^{n+s}$ be an $(s+1)$ dimensional linear subspace passing through $p$. Let $\mm_p$ be the maximal ideal of $\O_{Z,p}$, and let $\PP \subset \O_{Z,p}$ be the ideal locally defining $L_p \cap Z$. If $L_p$ meets the tangent cone $C_pZ$ of $Z$ at $p$ properly, then the linear forms defining $L_p$ generate the ideal of the exceptional divisor of the blow up of $Z$ at $p$. Therefore $e(\mm_p)=e(\PP)$. Consider now some $q \in T \setminus \D'$. Let $L_q \subset \P^n$ be a general line passing through $q$, and let $\QQ \subset \O_{T,q}$ be the ideal generated by the linear forms vanishing along $L_q$. We denote by $L$ the closure of $\p^{-1}(L_q)$ in $\P^{n+s}$. For every $p \in g^{-1}(q)$, let $\PP \subset \O_{Z,p}$ be the ideal generated by the linear forms vanishing along $L$. Since $L_q$ is general and $q \not \in \D'$, we may assume that $L$ intersects $C_pZ$ properly, hence $e(\mm_p) = e(\PP)$.
On the other hand, if $\mm_q$ is the maximal ideal of $\O_{T,q}$, then $\QQ\subseteq\mm_q$, which gives $$\PP=\QQ\cdot\O_{Z,p} \subseteq \mm_q\cdot\O_{Z,p}\subseteq\mm_p.$$ Therefore $e(\mm_p)=e(\mm_q\cdot\O_{Z,p})$ for every $p$ as above, hence $q\not\in\D$, by [@Fulton], Example 4.3.6. \[proj\_mult2\] With the notation in Proposition \[proj\_mult1\], consider the set $$\S = \S(Z,\p):= \{q \in T \mid \text{$g^{-1}(q)$ has at least 3 distinct points} \}.$$ If the projection is sufficiently general, then $\operatorname{codim}(\S,{\mathbb P}^n) \geq 3$. We have $\operatorname{codim}(\S,{\mathbb P}^n)\geq 3$ if and only if $\S \cap P = \emptyset$ for every general plane $P\subset\P^n$. Pick one general plane $P$, let $P' \;(\cong \P^{s+2})$ be the closure of $\p^{-1}(P)$ in $\P^{n+s}$, and let $\p'$ be the restriction of $\p$ to $P' \setminus \LL$. If $Z' = Z \cap P'$, then $Z'$ is a (possibly reducible) curve, and its multisecant variety is at most two dimensional (see, for example, [@FOV], Corollary 4.6.17). Note that $\LL$ is general in $P'$. Indeed, choosing the center of projection $\LL$ general in $\P^{n+s}$, and then picking $P$ general in $\P^n$ is equivalent to first fixing a general $(s+2)$-plane $P'$ in $\P^{n+s}$ and then choosing $\LL$ general in $P'$. Therefore we conclude that $\S \cap P$, which is the same as $\S(Z',\p')$, is empty. By adjunction, $\O_X(-K_X)\simeq\O_X(1)$. Let $\f : X \rat X'$ be a birational map from $X$ to a Mori fiber space $X'$, and assume that $\f$ is not an isomorphism. By the Noether-Fano inequality (see [@Co1] and [@Is], or [@Ma]), we find a linear subsystem $\H \subset |\O_X(r)|$, with $r \geq 1$, whose base scheme $B$ has codimension $\ge 2$, and such that the pair $(X,\frac 1r\.B)$ is not canonical. We choose $c<\frac{1}{r}$, such that $(X,c\cdot B)$ is still not canonical, and let $C \subset X$ be a center of non canonicity for $(X,c \.B)$. Note that $C$ is a center of non canonicity also for the pairs $(X,c \.
D)$ and $(X,c \. V)$, where $V = D \cap D'$ and $D$, $D' \in \H$ are two general members. Applying Proposition \[pu1\] for $Z=D$ and $k=1$, we see that the multiplicity of $D$ is $\leq r$ on an open subset whose complement has dimension zero. On this open subset $(X,c\.D)$ is canonical (see, for example, [@Ko] 3.14.1). Therefore $C = p$, a point of $X$. Let $Y$ be a general hyperplane section of $X$ containing $p$. Then $p$ is a center of non log canonicity for $(Y,c \. B|_Y)$. Note that $Y$ is a smooth hypersurface of degree $N$ in $\P^{N-1}$. Let $\p : \P^{N-1} \setminus \LL \to \P^{N-3}$ be a general linear projection, where the center of projection $\LL$ is a line. We can assume that the restriction of $\p$ to each irreducible component of $V|_Y$ is finite and birational. Note that $\p_*[V|_Y]$ is a divisor in $\P^{N-3}$ of degree $Nr^2$. If $\tilde Y = \operatorname{Bl}_{\LL \cap Y} Y$, then we get a morphism $f : \tilde Y \to \P^{N-3}$. If we choose $\LL$ general enough, then we can find an open set $U \subset \P^{N-3}$, containing the image $q$ of $p$, such that $f$ restricts to a smooth (proper) morphism $f^{-1}(U) \to U$. Applying Theorem \[thm1\], we deduce that the pair $$\label{pair} \(\P^{N-3}, \frac{c^2}4 \. \p_*[V|_Y] \)$$ is not log terminal at $q$. We claim that $$\label{dim_bound} \dim \{y \in \pi(V|_Y) \mid e_y(\p_*[V|_Y]) > 2r^2 \} \le \max \{ N-6,0 \}.$$ Indeed, by Propositions \[proj\_mult2\] and \[proj\_mult1\], the map $\operatorname{Supp}([V|_Y]) \to \operatorname{Supp}(\p_*[V|_Y])$ is at most 2 to 1 and preserves multiplicities outside a set, say $\D \cup \S$, of dimension $\le \max\{N-6,-1\}$. This implies that, for each $y$ outside the set $\D \cup \S$, $e_y(\p_*[V|_Y]) = \sum e_x([V|_Y])$, where the sum is taken over the points $x$ over $y$, and this sum involves at most two non-zero terms. 
Then (\[dim\_bound\]) follows from the fact that, by Propositions \[pu1\] and \[int\_mult\] (see also Remark \[pu1\_rmk\]), the set of points $x$ for which $e_x[V|_Y] > r^2$ is at most zero dimensional. Note that the pair (\[pair\]) is log terminal at every point $y$ where $e_y(\p_*[V|_Y]) \le 4r^2$. If $4 \le N \le 6$, we deduce that the pair is log terminal outside a zero dimensional closed subset. In this case, Corollary \[proj\_case\] gives $c^2/4 \ge (N-3)/(Nr^2)$. Since $c < 1/r$, this implies $N < 4$, a contradiction. If $7 \le N \le 12$, then we can only conclude that the pair (\[pair\]) is log terminal outside a closed subset of codimension at least $3$. This time the same corollary gives $c^2/4 \ge 3/(Nr^2)$, which implies $N > 12$. This again contradicts our assumptions, so the proof is complete.

I. A. Cheltsov, On a smooth four-dimensional quintic, (Russian) Mat. Sb. **191** (2000), 139–160; translation in Sb. Math. **191** (2000), 1399–1419.
I. Cheltsov and J. Park, Log canonical thresholds and generalized Eckardt points, Mat. Sb. **193** (2002), 149–160.
A. Corti, Factoring birational maps of threefolds after Sarkisov, J. Algebraic Geom. **4** (1995), 223–254.
A. Corti, Singularities of linear systems and $3$-fold birational geometry, in *Explicit birational geometry of $3$-folds*, 259–312, Cambridge Univ. Press, Cambridge, 2000.
T. de Fernex, L. Ein and M. Mustaţǎ, Multiplicities and log canonical threshold, preprint 2002, to appear in J. Algebraic Geom.
L. Ein, Multiplier ideals, vanishing theorems and applications, in *Algebraic Geometry, Santa Cruz 1995*, volume **62** of Proc. Symp. Pure Math., Amer. Math. Soc., 1997, 203–219.
L. Ein, R. Lazarsfeld and M. Mustaţǎ, Contact loci in arc spaces, preprint 2002.
D. Eisenbud, *Commutative Algebra with a View toward Algebraic Geometry*, Grad. Texts in Math. **150**, Springer, New York, 1995.
H. Flenner, L. O'Carroll and W. Vogel, *Joins and Intersections*, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 1999.
W. Fulton, *Intersection Theory*, second ed., Springer-Verlag, Berlin, 1998.
R. Hartshorne, *Algebraic Geometry*, Graduate Texts in Mathematics, No. 52, Springer-Verlag, New York, 1977.
J. Howald, Multiplier ideals of monomial ideals, Trans. Amer. Math. Soc. **353** (2001), 2665–2671.
V. A. Iskovskikh, Birational rigidity and Mori theory, Uspekhi Mat. Nauk **56**:2 (2001), 3–86; English transl., Russian Math. Surveys **56**:2 (2001), 207–291.
V. A. Iskovskikh and Yu. I. Manin, Three-dimensional quartics and counterexamples to the Lüroth problem, Mat. Sb. **86** (1971), 140–166; English transl., Math. USSR-Sb. **15** (1972), 141–166.
S. Kleiman, Tangency and duality, in *Proceedings of the 1984 Vancouver Conference in Algebraic Geometry*, 163–225, CMS Conf. Proc. **6**, Amer. Math. Soc., Providence, RI, 1986.
J. Kollár, Singularities of pairs, in *Algebraic Geometry, Santa Cruz 1995*, volume **62** of Proc. Symp. Pure Math., Amer. Math. Soc., 1997, 221–286.
J. Kollár and S. Mori, *Birational Geometry of Algebraic Varieties*, Cambridge Tracts in Mathematics, Cambridge University Press, Cambridge, 1998.
R. Lazarsfeld, *Positivity in Algebraic Geometry*, book in preparation.
K. Matsuki, *Introduction to the Mori Program*, Universitext, Springer-Verlag, New York, 2002.
M. Mustaţǎ, Jet schemes of locally complete intersection canonical singularities, with an appendix by David Eisenbud and Edward Frenkel, Invent. Math. **145** (2001), 397–424.
M. Mustaţǎ, Singularities of pairs via jet schemes, J. Amer. Math. Soc. **15** (2002), 599–615.
A. V. Pukhlikov, Birational automorphisms of a four-dimensional quintic, Invent. Math. **87** (1987), 303–329.
A. V. Pukhlikov, Birational automorphisms of Fano hypersurfaces, Invent. Math. **134** (1998), 401–426.
A. V. Pukhlikov, Birationally rigid Fano hypersurfaces, preprint 2002, arXiv:math.AG/0201302.
O. Zariski and P. Samuel, *Commutative Algebra, Vol. II*, Van Nostrand, Princeton, 1960.
--- abstract: 'The standing-wave nodes of nonradial oscillations on a neutron star crust will drift with a definite angular velocity around the rotational pole due to the rotation of the star. This is called the nonradial oscillation node precession of neutron stars. This article estimates the precession velocity and points out that, to first-order approximation, it depends only on the star’s rotation velocity and the angular order $l$ of the spherical harmonic. If we suppose that the oscillations affect the escape of particles from the polar cap of a neutron star, so that the antinode and node areas of the standing waves have different radiative intensity, several unusual conclusions follow from a review of the observations of pulsars, which have long been identified with neutron stars. For example, the drifting subpulse period $P_{3}$ can be obtained from the width of subpulses and the order $l$; faster drift may produce the peak structure of average pulse profiles; and the dissimilar radiation between neighboring periods generated by the drift provides a natural explanation of the interpulses found in some pulsars.' author: - | [Haochen Li]{}\ [Physics Department, Washington University]{}\ [St. Louis, MO 63143]{} date: 'March 28, 2001' title: The Nonradial Oscillation Node Precession of Neutron Stars --- Introduction ============ #### Boriakoff (1976) detected quasi-periodic micropulsations within the subpulses of PSR 2016+28, and was inclined to interpret them as nonradial oscillations of neutron stars. In the pulsar polar cap model (Radhakrishnan and Cooke 1969) the radio pulse is produced by the coherent radiation of particles escaping from a certain surface area of the star (the polar cap) along the magnetic field lines. Because of the high particle velocity, the radiation is emitted in a narrow cone, the axis of which coincides with the velocity vector of the particles, which is tangential to the magnetic field lines. 
Since these are periodically distorted by the star’s vibration, the radiation cone will periodically change direction, switching the radio-pulse illumination of the observer on and off (modulation). Van Horn (1980) pointed out that rotating, magnetized neutron stars can support a rich variety of oscillation modes and first suggested a possible association of subpulse drift with torsional oscillations. In Section 2 the particular conditions of neutron stars are considered in calculating a first-order approximation of the frequency splitting of the torsional oscillations. Using this result, we discuss phenomena such as drifting subpulses, average pulse profiles and interpulses of pulsars in Section 3. Section 4 is the summary. Theory of Neutron Star Oscillation Node Precession ================================================== #### Ruderman (1968) first pointed out torsional oscillation modes of neutron star crusts. Hansen and Cioffi (1980) calculated their periods for a range of stellar models and found that those associated with the fundamental modes have periods of around 20 ms. We can use this result to estimate the lowest frequency of torsional oscillation of neutron stars as $$\omega_{0}={\frac{2\pi}{20ms}}=100\pi s^{-1}.$$ The rotational angular velocity of a neutron star can be taken as $2\pi s^{-1}$, so the ratio is $$\epsilon={\frac{\Omega}{\omega_{0}}}=0.02.$$ We see that although the angular velocity of neutron star rotation is much larger than that of common stars, it is still small compared with the frequency of self-oscillation. This suggests that the oscillation node precession theory established for other celestial bodies can be applied to neutron stars (also because the torsional oscillation is little sensitive to the stellar model, see Van Horn 1980, and its results are simple). 
That is, we can treat the rotation as a perturbation in solving the spherical oscillation equations, just as Ledoux (1951) did for gaseous stars and MacDonald and Ness (1961) did for the earth, and the frequency of free oscillation of the spherical crust is the sum of the undisturbed frequency and the perturbation frequency:$$\omega=\omega_{0}+\omega^{1}.$$ As a first-order approximation for torsional oscillation,$$\omega^{1}={\frac{m}{l(l+1)}}\Omega,$$ where $l$ and $m$ are integers denoting the angular orders of the spherical harmonic. As the theory of oscillations of stars (Ledoux 1951) and of the earth (MacDonald and Ness 1961) has noted, each value of $m$ has two travelling waves associated with it. In the case of the earth one wave travels eastward, and its rate of travel is decreased by the angular velocity of the earth’s rotation; the other travels westward, and its rate is faster. The waves corresponding to neighboring values of $m$ have relative angular velocity $${\frac{\Omega}{l(l+1)}}.$$ The combined effect is to produce a standing-wave pattern that for a given value of $m$ moves westward with the angular velocity $${\frac{\Omega}{l(l+1)}}$$ of its nodes, which is well known in seismology as the node precession of oscillations. This is the result we will use below, returning to the attempt Van Horn (1980) made to connect the torsional oscillations of rotating neutron stars with the observed phenomena of pulsars. Discussion ========== Drifting Subpulses ------------------ #### We suppose that the nodes and antinodes of the standing wave correspond to those of the subpulse radiation pattern, i.e., drifting subpulses reflect the node precession. Then the number of degrees of subpulse drift in one rotation period of a pulsar (Manchester and Taylor 1977) is $$D_{\phi}={\frac{\Omega}{l(l+1)}} P_{1}={\frac{360}{l(l+1)}},$$ where $P_{1}$ is the pulsar rotational period and $360^{\circ}$ of longitude correspond to one pulsar period. 
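These round figures are trivial to check; a quick numerical sketch (the 20 ms fundamental period and the $2\pi\,s^{-1}$ rotation rate are the rough values quoted above, not measured quantities):

```python
import math

# Rough values quoted in the text: a ~20 ms fundamental torsional period
# and a rotational angular velocity of 2*pi per second.
omega0 = 2*math.pi/0.020   # lowest torsional frequency, s^-1 (= 100*pi)
Omega = 2*math.pi          # rotational angular velocity, s^-1
eps = Omega/omega0         # small perturbation parameter, ~0.02

def d_phi(l):
    """Degrees of subpulse drift per rotation period, D_phi = 360/(l(l+1))."""
    return 360.0/(l*(l + 1))

print(eps)        # ~0.02
print(d_phi(19))  # ~0.95 degrees per period for l = 19
```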
We can see that when $D_{\phi}$ is smaller than the width of subpulses (the drifting subpulse observations are exactly so, see Manchester and Taylor 1977), we get the subpulse drifting-band spacing $$P_{3}={\frac{\frac{P_{2}}{P_{1}}\times360}{D_{\phi}}}={\frac{l(l+1)}{\frac{P_{1}}{P_{2}}}}$$ (in units of $P_{1}$), where $P_{2}$ is the subpulse period (converted from degrees of longitude). We calculated the values of $P_{3}$ for several pulsars using the observational data from Van Horn (1980) and Wright and Fowler (1981). The results are listed in Table 1. Note that these are obtained with larger values of $l$, for which the values of $P_{3}$ increase. Several neighboring values are listed in the table for comparison. The differences between theoretical and observed values are probably due to errors and to neglecting the coupling of several values of $l$. The different values of $P_{3}$ in one pulsar are attributed to mode switching between different values of $l$. Average Pulses -------------- #### Theoretically we have no reason to believe that the drifting pace of subpulses is always small. For convenience we define the drifting rate as $$V={\frac{\frac{P_{1}}{P_{2}}}{l(l+1)}},$$ which represents the drift space (in units of $P_{2}$) in each rotational period of the pulsar. Since an integer drift space (integer $V$) cannot be detected, the practically observed rate $V'$ is determined by the fractional part of $V$. For example, if $V=3/2$ or $1/2$, then $V'=1/2$; if $V=5/3$, then $V'=2/3$ or $-1/3$ (the minus sign represents the opposite drift direction). Here it is implied that ${\frac{P_{1}}{P_{2}}}$ is an integer, since it represents the number of nodes of the standing wave along the longitude of the sphere. When $l=1$ or $2$ (the fundamental modes on which the oscillation is most likely), it is easy to see that $V'$ will frequently take the values $1/2$, $1/6$, $2/6$ (the same as $4/6$), etc. 
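Both the band spacing $P_{3}$ and the observed drift rate $V'$ are elementary to compute; the short sketch below reproduces two rows of Table 1 (using the rounded integer ratios $P_{1}/P_{2}$ quoted there) and the $V'$ examples above:

```python
from fractions import Fraction

def p3_theory(l, p1_over_p2):
    """Drifting-band spacing P3 = l(l+1)/(P1/P2), in units of P1."""
    return l*(l + 1)/p1_over_p2

def v_prime(p1_over_p2, l):
    """Observable drift rate: the fractional part of V = (P1/P2)/(l(l+1))."""
    v = Fraction(p1_over_p2, l*(l + 1))
    return v - v.numerator//v.denominator

# Two rows of Table 1, with the rounded ratios quoted there:
print(round(p3_theory(11, 17), 1))   # PSR 0031-07, l = 11 -> 7.8 (observed 6.8)
print(round(p3_theory(19, 89), 1))   # PSR 1919+21, l = 19 -> 4.3 (observed 4.2)

# The examples from the text: V = 3/2 gives V' = 1/2, V = 5/3 gives V' = 2/3
print(v_prime(3, 1))                 # P1/P2 = 3,  l = 1 -> V = 3/2,  V' = 1/2
print(v_prime(10, 2))                # P1/P2 = 10, l = 2 -> V = 5/3,  V' = 2/3
```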
Unlike the smaller drifting pace discussed in Section 3.1, these values of $V'$ are too great to be detected as the drift we commonly mean (we do not know whether PSR 2303+30, listed in Table 1, belongs to these small-$l$ modes). But in this situation subpulses will appear more frequently at several fixed positions in the general radiation window. Average pulse profiles simulated by computer, obtained by adding a great many drifting periods, show peak structures as displayed in Fig.1. Interpulses ----------- #### If the pulses of pulsars can embody the node precession of standing waves around the longitude of neutron stars, then according to the observed drifting pace $V'$ discussed above, there must be circumstances in which neighboring periods of pulses give different observational pictures, especially when the standing wavelength is longer than the general radiation window. For instance, if in one period we see a pulse which is in fact a fraction of the standing wavelength (a node or its vicinity), then in the next period we see the antinode or its vicinity ($V'=1/2$ is very common); pulses of different intensity therefore occur alternately with the integral periods of rotation. This gives a natural explanation of interpulses (Manchester and Taylor 1977). The weaker pulse will surely be inclined toward the nearby node (or antinode) area, which should have stronger radiation. That is why the separation between neighboring pulses is not exactly $180^{\circ}$ (Manchester and Taylor 1977). If this is true, it means that the real periods of the interpulse pulsars are only half of those we believe now. Summary ======= #### Free spherical oscillation has been shown by theory and observation to be a very common phenomenon among stars and planets. Many prior works have supposed that this also happens on neutron stars, which have such great density and such rapid rotation (Cheng and Ruderman 1980; Harding and Tademaru 1981; McDermott, Van Horn, and Hansen 1988; Cordes, Weisberg, and Hankins 1990). 
Although the mechanism by which oscillation affects the radiation has not been clearly discussed (obviously a very important problem), the theory gives a very natural explanation of the drifting subpulse phenomenon, and its generalization yields clear pictures of fundamental pulsar observations such as average pulse profiles, interpulses, etc. The theory also has great potential for explaining mode changing, micropulses, and glitches, which the author may discuss later. #### It should be pointed out that although the theoretical values of $P_{3}$ obtained in Section 3.1 using larger $l$ are in good agreement with the observed values (see Table 1), this does not mean that the actual oscillation orders are always high. The lower modes (small $l$) are not considered in the calculation of $P_{3}$ because their larger-scale drift contributes little observable effect (as can be seen from the discussion in Section 3.2). Actually it is most probable that more than one mode of oscillation is simultaneously sustained on the star crust, and the observed phenomena are only the coupling of these modes. In mode switching, the dominant precession rate changes sequentially. Further work is needed to determine the relationship between $l$, $m$ and the width of subpulses, so that we can know exactly the parameters of the star’s oscillation and gain more detailed knowledge about a given pulsar. #### I wish to thank Xinji Wu and Xiaofei Chen of Peking University for helpful discussions. Boriakoff, V. 1976, Ap.J. Lett., [**208**]{}, L43. Cheng, A. F., and Ruderman, M. A. 1980, Ap.J., [**235**]{}, 576. Cordes, J. M., Weisberg, J. M., and Hankins, T. H. 1990, A.J., [**100**]{}, 1882. Hansen, C. J., and Cioffi, D. F. 1980, Ap.J., [**238**]{}, 740. Harding, A. K., and Tademaru, E. 1981, Ap.J., [**243**]{}, 597. Ledoux, P. 1951, Ap.J., [**114**]{}, 373. MacDonald, G. J. F., and Ness, N. F. 1961, J.Geophys.Res., [**66**]{}, 1865. McDermott, P. N., Van Horn, H. M., and Hansen, C. J. 
1988, Ap.J., [**325**]{}, 725. Manchester, R. N., and Taylor, J. H. 1977, Pulsars (San Francisco: Freeman). Radhakrishnan, V., and Cooke, D. J. 1969, Ap.Lett., [**3**]{}, 225. Ruderman, M. A. 1968, Nature, [**218**]{}, 1128. Van Horn, H. M. 1980, Ap.J., [**236**]{}, 899. Wright, G. A. E., and Fowler, L. A. 1981, IAU Symposium 95, Pulsars, ed. W. Sieber and R. Wielebinski, p. 211. Fig.1. Sketches of average pulses formed by large-pace drifting. The abscissa is longitude and the ordinate is intensity; the numbers have only relative meaning. 1) is a single pulse, illustrating our assumption that there are only two Gaussian subpulses in one general pulse window, consistent with observation; 2)-3) are both average pulses with $V'=1$, differing in their original positions; 4)-6) have $V'=1/2$, $2/6$ and $1/20$ respectively. The number of added periods is $10^{4}$ in each case; the figures remain stable when more periods are added. Table 1. The periods of drifting subpulses.

  PSR       $P_{1}$(s)   $P_{2}$(ms)   ${\frac{P_{1}}{P_{2}}}$   $l$   $P_{3}$(Theory)   $P_{3}$(Observation)
  --------- ------------ ------------- ------------------------- ----- ----------------- ----------------------
  1944+17   0.440        21            21                        19    18
                                                                 20    20                20
                                                                 21    22
  0031-07   0.943        55            17                        8     4.2
                                                                 9     5.3
                                                                 10    6.5               4.5
                                                                 11    7.8               6.8
                                                                 12    9.2               12.5
                                                                 13    11
                                                                 14    12
                                                                 15    14
  0943+10   1.097        26            42                        8     1.7               2.11
                                                                 9     2.1               or
                                                                 10    2.6               1.90
  0809+74   1.292        50            26                        16    10
                                                                 17    12                11.0
                                                                 18    13
  1919+21   1.337        15            89                        18    3.8
                                                                 19    4.3               4.2
                                                                 20    4.7
  0301+19   1.387        24            58                        18    5.9
                                                                 19    6.6               6.4
                                                                 20    7.2
  2303+30   1.575        15            105                       13    1.7
                                                                 14    2.0               $\approx2$
                                                                 15    2.3
  1237+25   1.38         41.0          33.7                      8     2.14
                                                                 9     2.67              $2.8\pm0.1$
                                                                 10    3.26
--- abstract: 'Using semi-empirical isochrones, we find the age of the Taurus star-forming region to be 3-4 Myr. Comparing the disc fraction in Taurus to young massive clusters suggests discs survive longer in this low density environment. We also present a method of photometrically de-reddening young stars using $iZJH$ data.' --- Introduction ============ Taurus is a low-density star-forming region containing primarily low-mass stars and so represents an ideal laboratory for studying the environmental effects on circumstellar disc lifetimes [@Kenyon2008 (Kenyon et al. 2008)]. To investigate the impact of the low-density environment on the discs in Taurus, we used the Wide-Field Camera (WFC) on the 2.5m Isaac Newton Telescope (INT) on La Palma to obtain $griZ$ photometry of 40 fields in Taurus. Our fields are focused on the densest regions not covered by the Sloan Digital Sky Survey. The resultant INT-WFC survey mainly covers the L1495, L1521 and L1529 clouds. We have augmented the WFC data with near-infrared $JHK$ data from 2MASS [@Cutri2003 (Cutri et al. 2003)]. To determine the age of Taurus we compare the semi-empirical isochrones discussed in [@Bell2013; @Bell2014 Bell et al. (2013, 2014)] to the observed colour-magnitude diagrams (CMDs). For a brief description of these isochrones see Bell et al. (these proceedings). De-reddening ============ The extinction in Taurus is spatially variable across the different clouds, and so we require a method of de-reddening the stars individually. We have found that in an $i$-$Z$, $J$-$H$ colour-colour diagram the reddening vectors are almost perpendicular to the theoretical stellar sequence (Fig.\[fig:izjh\_age\]), whose position is almost independent of age, and so we can de-redden stars using photometry alone. We construct a grid of models over a range of ages (1 to 10 Myr) and binary mass ratios (single star to equal mass binary). 
We adopt the reddening law from [@Fitzpatrick1999 Fitzpatrick (1999)], apply it to the atmospheric models and fold the result through sets of filter responses to derive reddening coefficients in each photometric system. [0.45]{} ![**Left:** $i$-$Z$, $J$-$H$ diagram for Taurus members. Asterisks are Class II sources, open circles are Class III sources. Overlaid as solid lines are a 2 and 4Myr isochrone. The dashed lines are reddening vectors in this colour space. **Right:** $r$, $r$-$i$ diagram for Taurus members identified as Class III. Isochrones of 1, 4 and 10Myr are overlaid. Asterisks indicate the position of a theoretical star with mass 0.75M$_\odot$. The black dashed line shows a reddening vector for A$_V$ = 1 mag. []{data-label="fig:izjh_age"}](izjh.eps "fig:"){width="\textwidth" height="5.5cm"} \[fig:izjh\_ccd\] [0.45]{} ![**Left:** $i$-$Z$, $J$-$H$ diagram for Taurus members. Asterisks are Class II sources, open circles are Class III sources. Overlaid as solid lines are a 2 and 4Myr isochrone. The dashed lines are reddening vectors in this colour space. **Right:** $r$, $r$-$i$ diagram for Taurus members identified as Class III. Isochrones of 1, 4 and 10Myr are overlaid. Asterisks indicate the position of a theoretical star with mass 0.75M$_\odot$. The black dashed line shows a reddening vector for A$_V$ = 1 mag. []{data-label="fig:izjh_age"}](age.eps "fig:"){width="\textwidth" height="5.5cm"} \[fig:age\] It is well known that for a fixed value of E(B-V), extinction in a given filter will vary with [$T_{\mathrm{eff}}$]{} (see e.g. [@Bell2013 Bell et al. 2013]). To account for this we use extinction tables to redden the isochrones for a grid of E($B$-$V$) and [$T_{\mathrm{eff}}$]{} values, and compare the reddened model grid to the data. We adopt a Bayesian approach and marginalise over binary mass, age and [$T_{\mathrm{eff}}$]{}. We take the extinction values from the model with the highest likelihood, and use this to de-redden the star. 
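The grid comparison can be sketched in a few lines. This is a maximum-likelihood simplification of the Bayesian marginalisation described above; the stellar sequence `intrinsic_colours` and the reddening coefficients `R_IZ`, `R_JH` below are toy placeholders, not the real isochrones or the Fitzpatrick (1999) values:

```python
import numpy as np

R_IZ, R_JH = 0.55, 0.33   # assumed reddening coefficients per magnitude of E(B-V)

def intrinsic_colours(teff):
    """Toy (i-Z, J-H) stellar sequence as a function of Teff; a stand-in
    for the age- and binarity-dependent model grid used in the text."""
    t = (teff - 3000.0)/2000.0
    return 0.9 - 0.7*t, 0.8 - 0.5*t

def deredden(iz_obs, jh_obs, sigma=0.02):
    """Pick the (E(B-V), Teff) grid point with the highest Gaussian likelihood
    and use its extinction to de-redden the observed colours."""
    best = (0.0, 3000.0, -np.inf)
    for ebv in np.arange(0.0, 3.0, 0.01):
        for teff in np.arange(3000.0, 5000.0, 25.0):
            iz0, jh0 = intrinsic_colours(teff)
            chi2 = ((iz_obs - iz0 - R_IZ*ebv)/sigma)**2 \
                 + ((jh_obs - jh0 - R_JH*ebv)/sigma)**2
            if -0.5*chi2 > best[2]:
                best = (ebv, teff, -0.5*chi2)
    ebv = best[0]
    return iz_obs - R_IZ*ebv, jh_obs - R_JH*ebv, ebv

# Redden a synthetic star by E(B-V) = 1.2 mag and recover it:
iz0, jh0 = intrinsic_colours(3800.0)
iz, jh, ebv = deredden(iz0 + R_IZ*1.2, jh0 + R_JH*1.2)
```

In the real analysis one would marginalise the likelihood over age and binary mass ratio instead of taking a single maximum, but the structure — redden a model grid, compare to the data, adopt the best-fitting extinction — is the same.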
Taurus ====== Plotting the de-reddened Taurus members in the $r$, $r$-$i$ CMD, we notice that a significant fraction of the Class II objects appear much fainter than the primary locus. This is likely an accretion effect, and if we were to fit for the age of these members we would derive an age that is erroneously old. To avoid this effect, we fit only the Class III sources. We note that those Class II sources that are not scattered below the sequence lie coincident with the Class III sources, and thus we believe the age derived from the Class III sources alone will be representative of the overall age. We plot our de-reddened Taurus members in an $r$, $r$-$i$ CMD to fit for the age (Fig.\[fig:izjh\_age\]). We find that isochrones of 3-4 Myr (older than is commonly quoted in the literature) trace the observed stellar sequence well. To ensure consistency with the [@Bell2013 Bell et al. (2013)] age scale we compare the position of a theoretical star with a mass of 0.75 M$_\odot$ to the middle of the observed sequence. We find consistency with the overall isochrone fitting, with an age of 3-4 Myr still providing a good fit. With a robust age for Taurus we then examined the disc fraction. Taurus has a disc fraction of 69% [@Luhman2010 (Luhman et al. 2010)]. If we compare this to the other clusters in [@Bell2013 Bell et al. (2013)], which are on the same age scale, we find that Taurus has the largest disc fraction in the sample, significantly higher than the group of young (2 Myr), massive clusters, suggesting that discs may have survived longer in the low density environment present in Taurus. Bell et al. 2013, *MNRAS*, 434, 806. Bell et al. 2014, *MNRAS*, 445, 3496. Cutri et al. 2003, *2MASS All Sky Catalog of Point Sources*. Fitzpatrick 1999, *PASP*, 111, 63. Kenyon et al. 2008, *Handbook of Star Forming Regions, Vol. 1*, p. 405. Luhman et al. 2010, *ApJS*, 186, 111.
--- abstract: 'Inspired by the paper of Tasaka [@tasaka], we study the relations between totally odd, motivic depth-graded multiple zeta values. Our main objective is to determine the rank of the matrix $C_{N,r}$ defined by Brown [@Brown]. We will give new proofs for (conjecturally optimal) upper bounds on ${\operatorname{rank}}C_{N,3}$ and ${\operatorname{rank}}C_{N,4}$, which were first obtained by Tasaka [@tasaka]. Finally, we present a recursive approach to the general problem, which reduces the evaluation of ${\operatorname{rank}}C_{N,r}$ to an isomorphism conjecture.' address: - 'Charlotte Dietze, Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany' - 'Chokri Manai, Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany' - 'Christian Nöbel, Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany' - 'Ferdinand Wagner, Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111 Bonn, Germany' author: - Charlotte Dietze - Chokri Manai - Christian Nöbel - Ferdinand Wagner bibliography: - 'mzv.bib' date: September 2016 title: 'Totally Odd Depth-graded Multiple Zeta Values and Period Polynomials' --- Introduction ============ In this paper we will be interested in $\mathbb{Q}$-linear relations among totally odd depth-graded multiple zeta values (MZVs), for which there conjecturally is a bijection with the kernel of a specific matrix $C_{N,r}$ connected to restricted even period polynomials (for a definition, see [@schneps] or [@gkz2006 Section 5]). For integers $n_1,\ldots,n_{r-1}\geq1$ and $n_r\geq2$, the MZV of $n_1,\ldots,n_r$ is defined as the number $$\begin{aligned} \zeta(n_1,\ldots,n_r)\coloneqq \sum_{0<k_1<\cdots<k_r} \frac{1}{k_1^{n_1}\cdots k_r^{n_r}}{\,}.\end{aligned}$$ We call the sum $n_1+\cdots+n_r$ of arguments the weight and their number $r$ the depth of $\zeta(n_1,\ldots,n_r)$. One classical question about MZVs is counting the number of linearly independent $\mathbb Q$-linear relations between MZVs. 
It is widely expected, but for now seemingly out of reach to prove, that there are no relations between MZVs of different weight. Such questions become more tractable when considered in the motivic setting. Motivic MZVs $\zeta^{\mathfrak m}(n_1,\ldots,n_r)$ are elements of a certain $\mathbb Q$-algebra $\mathcal H=\bigoplus_{N\geq 0}\mathcal H_N$ which was constructed by Brown in [@brownMixedMotives] and is graded by the weight $N$. Any relation fulfilled by motivic MZVs also holds for the corresponding MZVs via the period homomorphism $per\colon\mathcal H\to \mathbb R$. We further restrict to depth-graded MZVs: Let $\mathcal Z_{N,r}$ and $\mathcal H_{N,r}$ denote the $\mathbb Q$-vector space spanned by the real respectively motivic MZVs of weight $N$ and depth $r$ modulo MZVs of lower depth. The depth-graded MZV of $n_1,\ldots,n_r$, that is, the equivalence class of $\zeta(n_1,\ldots,n_r)$ in $\mathcal Z_{N,r}$, is denoted by $\zeta_{\mathfrak D}(n_1,\ldots,n_r)$. The elements of $\mathcal H_{N,r}$ are denoted $\zeta_{\mathfrak D}^{\mathfrak m}(n_1,\ldots,n_r)$ analogously. The dimension of $\mathcal Z_{N,r}$ is the subject of the Broadhurst-Kreimer Conjecture. The generating function of the dimension of the space $\mathcal Z_{N,r}$ is given by $$\begin{aligned} \sum_{N,r\geq0}\dim_{\mathbb Q}\mathcal Z_{N,r}\cdot x^N y^r\overset?= \frac{1+\mathbb E(x)y}{1-\mathbb O(x)y+\mathbb S(x) y^2 - \mathbb S(x)y^4}{\,},\end{aligned}$$ where we denote $\mathbb E(x)\coloneqq \frac{x^2}{1-x^2}=x^2+x^4+x^6+\cdots$, $\mathbb O(x)\coloneqq \frac{x^3}{1-x^2}=x^3+x^5+x^7+\cdots$, and $\mathbb S(x)\coloneqq \frac{x^{12}}{(1-x^4)(1-x^6)}$. It should be mentioned that $\mathbb S(x)=\sum_{n>0}\dim\mathcal S_n\cdot x^n$, where $\mathcal S_n$ denotes the space of cusp forms of weight $n$, for which there is an isomorphism to the space of restricted even period polynomials of degree $n-2$ (defined in [@schneps] or [@gkz2006 Section 5]). 
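The conjectural dimensions can be read off mechanically from the generating series; the sketch below (sympy) expands the Broadhurst-Kreimer series with numerator $1+\mathbb E(x)y$ (the sign that makes the coefficient of $x^2y$ equal to $\dim_{\mathbb Q}\mathcal Z_{2,1}=1$, accounting for $\zeta(2)$) by a truncated geometric series:

```python
import sympy as sp

x, y = sp.symbols('x y')
NMAX = 14  # truncation order in the weight x

# Truncated Taylor polynomials of E, O, S as defined above
E = sp.series(x**2/(1 - x**2), x, 0, NMAX + 1).removeO()
O = sp.series(x**3/(1 - x**2), x, 0, NMAX + 1).removeO()
S = sp.series(x**12/((1 - x**4)*(1 - x**6)), x, 0, NMAX + 1).removeO()

# Geometric expansion of the right-hand side, truncated in the depth y
T = O*y - S*y**2 + S*y**4
bk = sp.expand((1 + E*y)*sum(T**k for k in range(6)))

def dim_bk(N, r):
    """Conjectural dimension of Z_{N,r} read off from the truncated series."""
    return bk.coeff(y, r).coeff(x, N)

print(dim_bk(2, 1))    # 1, accounting for zeta(2)
print(dim_bk(12, 2))   # 3: four totally odd tuples minus one cusp form,
                       #    the weight-12 cusp form of S_12
print(sum(dim_bk(12, r) for r in range(1, 6)))   # 12, the dimension of the
                                                 # full weight-12 MZV space
```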
In his paper [@Brown], Brown considered the $\mathbb Q$-vector space $\mathcal Z_{N,r}^{{\operatorname{odd}}}$ (respectively $\mathcal H_{N,r}^{{\operatorname{odd}}}$) of totally odd (motivic) and depth-graded MZVs, that is, $\zeta_{\mathfrak D}(n_1,\ldots,n_r)$ (respectively $\zeta_{\mathfrak D}^{\mathfrak m}(n_1,\ldots,n_r)$) for $n_i\geq3$ odd, and linked them to a certain explicit matrix $C_{N,r}$, where $N=n_1+\cdots+n_r$ denotes the weight. In particular, he showed that any right annihilator $(a_{n_1,\ldots,n_r})_{(n_1,\ldots,n_r)\in S_{N,r}}$ of $C_{N,r}$ induces a relation $$\begin{aligned} \sum_{(n_1,\ldots,n_r)\in S_{N,r}}a_{n_1,\ldots,n_r}\zeta_{\mathfrak D}^{\mathfrak m}(n_1,\ldots,n_r)=0{\,},\text{ hence also }\sum_{(n_1,\ldots,n_r)\in S_{N,r}}a_{n_1,\ldots,n_r}\zeta_{\mathfrak D}(n_1,\ldots,n_r)=0\end{aligned}$$(see Section \[sec:preliminaries\] for the notations) and conjecturally all relations in $\mathcal Z_{N,r}^{{\operatorname{odd}}}$ arise in this way. This led to the following conjecture (the uneven part of the Broadhurst-Kreimer Conjecture). \[con:Brown\] The generating series of the dimension of $\mathcal Z_{N,r}^{{\operatorname{odd}}}$ and the rank of $C_{N,r}$ are given by $$\begin{aligned} 1+\sum_{N,r>0}{\operatorname{rank}}C_{N,r}\cdot x^Ny^r\overset?=1+\sum_{N,r>0}\dim_{\mathbb Q}\mathcal Z_{N,r}^{{\operatorname{odd}}}\cdot x^Ny^r\overset?=\frac1{1-\mathbb O(x)y+\mathbb S(x)y^2}{\,}.\end{aligned}$$ The contents of this paper are as follows. In Section \[sec:preliminaries\], we explain our notations and define the matrices $C_{N,r}$ due to Brown [@Brown] as well as $E_{N,r}$ and $E_{N,r}^{(j)}$ considered by Tasaka [@tasaka]. In Section \[sec:known results\], we briefly state some of Tasaka’s results on the matrix $E_{N,r}$. 
Section \[sec:main tools\] is devoted to further investigating the connection between the left kernel of $E_{N,r}$ and restricted even period polynomials, which was first discovered by Baumard and Schneps [@schneps] and appears again in [@tasaka Theorem 3.6]. In Section \[sec:main results\], we will apply our methods to the cases $r=3$ and $r=4$. The first goal of Section \[sec:main results\] will be to show \[thm:case3\]Assume that the map from Theorem \[thm:injection\] is injective. We then have the lower bound $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\ker C_{N,3}\cdot x^N\geq 2 \mathbb O(x)\mathbb S(x){\,},\end{aligned}$$ where $\geq$ means that for every $N>0$ the coefficient of $x^N$ on the right-hand side does not exceed the corresponding one on the left-hand side. This was stated without proof in [@tasaka]. Furthermore, we will give a new proof, by the polynomial methods developed in Section \[sec:main tools\], of the following result. \[thm:case4\]Assume that the map from Theorem \[thm:injection\] is injective. We then have the lower bound $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\ker C_{N,4}\cdot x^N\geq 3 \mathbb O(x)^2\mathbb S(x)-\mathbb S(x)^2{\,}.\end{aligned}$$ In the last two subsections of this paper, we will consider the case of depth 5 and give an idea for higher depths. For depth 5, we will prove that, under Conjecture \[con:isomorphism\] due to Tasaka ([@tasaka Section 3]), the lower bound predicted by Conjecture \[con:Brown\] holds, i.e. $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\ker C_{N,5}\cdot x^N\geq 4 \mathbb O(x)^3\mathbb S(x)-3 \mathbb O(x)\mathbb S(x)^2{\,}.\end{aligned}$$ These bounds are conjecturally sharp (i.e. the ones given by Conjecture \[con:Brown\]). Finally, we will prove a recursion for the value of $\dim_{\mathbb Q}\ker C_{N,r}$ under the assumption of a similar isomorphism conjecture stated at the end of Section \[sec:main tools\], which was proposed by Claire Glanois. 
Acknowledgments {#acknowledgments .unnumbered} --------------- This research was conducted as part of the Hospitanzprogramm (internship program) at the Max-Planck-Institut für Mathematik (Bonn). We would like to express our deepest thanks to our mentor, Claire Glanois, for introducing us into the theory of multiple zeta values. We are also grateful to Daniel Harrer, Matthias Paulsen and Jörn Stöhler for many helpful comments. Preliminaries {#sec:preliminaries} ============= Notations --------- In this section we introduce our notations and we give some definitions. As usual, for a matrix $A$ we define $\ker A$ to be the set of right annihilators of $A$. Apart from this, we mostly follow the notations of Tasaka in his paper [@tasaka]. Let $$\begin{aligned} S_{N,r}\coloneqq \left\{(n_1,\ldots,n_r)\in\mathbb Z^r\ |\ n_1+\cdots+n_r=N,\ n_1,\ldots,n_r\geq3\text{ odd}\right\}{\,},\end{aligned}$$ where $N$ and $r$ are natural numbers. Since the elements of the set $S_{N,r}$ will be used as indices of matrices and vectors, we usually arrange them in lexicographically decreasing order. Let $$\begin{aligned} \mathbf V_{N,r}\coloneqq \left\langle \left.x_1^{m_1-1}\cdots x_r^{m_r-1}\ \right|\ (m_1,\ldots,m_r)\in S_{N,r}\right\rangle_\mathbb{Q}\end{aligned}$$denote the vector space of restricted totally even homogeneous polynomials of degree $N-r$ in $r$ variables. There is a natural isomorphism from $\mathbf V_{N,r}$ to the $\mathbb Q$-vector space ${\mathsf{Vect}}_{N,r}$ of $n$-tuples $(a_{n_1,\ldots,n_r})_{(n_1,\ldots,n_r)\in S_{N,r}}$ indexed by totally odd indices $(n_1,\ldots,n_r)\in S_{N,r}$, which we denote $$\begin{aligned} \label{eq:natiso} \begin{split} \pi\colon\mathbf V_{N,r}&\overset{\sim\,}{\longrightarrow}{\mathsf{Vect}}_{N,r}\\ \sum_{(n_1,\ldots,n_r)\in S_{N,r}}a_{n_1,\ldots,n_r}x_1^{n_1-1}\cdots x_r^{n_r-1}&\longmapsto \left(a_{n_1,\ldots,n_r}\right)_{(n_1,\ldots,n_r)\in S_{N,r}}{\,}. \end{split} \end{aligned}$$ We assume vectors to be row vectors by default. 
Finally, let $\mathbf W_{N,r}$ be the vector subspace of $\mathbf V_{N,r}$ defined by $$\begin{aligned} \mathbf W_{N,r}\coloneqq \left\{P\in\mathbf V_{N,r}\ |\ P(x_1,\ldots,x_r)\right.&=P(x_2-x_1,x_2,x_3,\ldots,x_r)\\&\left.\phantom=-P(x_2-x_1,x_1,x_3,\ldots,x_r)\right\}{\,}.\end{aligned}$$ That is, $P(x_1,x_2,x_3,\ldots,x_r)$ is a sum of restricted even period polynomials in $x_1,x_2$ multiplied by monomials in $x_3,\ldots,x_r$. More precisely, one can decompose $$\begin{aligned} \mathbf W_{N,r}=\bigoplus_{\substack{1<n<N\\n\text{ even}}}\mathbf W_{n,2}\otimes \mathbf V_{N-n,r-2} \label{eq:decomposition}{\,},\end{aligned}$$ where $\mathbf W_{n,2}$ is the space of restricted even period polynomials of degree $n-2$. Since $\mathbf W_{n,2}$ is isomorphic to the space $\mathcal S_n$ of cusp forms of weight $n$ by the Eichler-Shimura correspondence (see [@zagier]), the decomposition leads to the following dimension formula. \[lem:wnreq\] For all $r\geq 2$, $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\mathbf W_{N,r}\cdot x^N= \mathbb O(x)^{r-2} \mathbb S(x){\,}.\end{aligned}$$ Ihara action and the matrices $E_{N,r}$ and $C_{N,r}$ ----------------------------------------------------- We use Tasaka’s notation (from [@tasaka]) for the polynomial representation of the Ihara action defined by Brown [@Brown Section 6]. Let $$\begin{aligned} {\mathbin{\underline{\circ}}}\colon\mathbb Q[x_1]\otimes\mathbb Q[x_2,\ldots,x_r]&\longrightarrow\mathbb Q[x_1,\ldots,x_r] \\ f\otimes g&\longmapsto f{\mathbin{\underline{\circ}}}g{\,},\end{aligned}$$ where $f{\mathbin{\underline{\circ}}}g$ denotes the polynomial $$\begin{gathered} (f{\mathbin{\underline{\circ}}}g)(x_1,\ldots,x_r)\coloneqq f(x_1)g(x_2,\ldots,x_r)+\sum_{i=1}^{r-1}\Bigl(f(x_{i+1}-x_i)g(x_1,\ldots,\hat x_{i+1},\ldots,x_r)\\ -(-1)^{\deg f}f(x_i-x_{i+1})g(x_1,\ldots,\hat x_i,\ldots,x_r)\Bigr){\,}.\end{gathered}$$ (the hats indicate that $x_{i+1}$ and $x_i$, respectively, are omitted in the above expression). 
For integers $m_1,\ldots,m_r,n_1,\ldots,n_r \geq 1$, let furthermore the integer $e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}$ denote the coefficient of $x_1^{n_1-1}\cdots x_r^{n_r-1}$ in $x_1^{m_1-1} {\mathbin{\underline{\circ}}}\left(x_1^{m_2-1}\cdots x_{r-1}^{m_r-1}\right)$, i.e. $$\begin{aligned} \label{eq:e} x_1^{m_1-1} {\mathbin{\underline{\circ}}}\left(x_1^{m_2-1}\cdots x_{r-1}^{m_r-1}\right) = \sum_{\substack{n_1 + \cdots + n_r = m_1 + \cdots + m_r \\ n_1, \ldots, n_r \geq 1}} e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}x_1^{n_1-1}\cdots x_r^{n_r-1}{\,}.\end{aligned}$$ Note that $e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}=0$ if $m_1+\cdots+m_r\not=n_1+\cdots+n_r$. One can explicitly compute the integers $e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}$ by the following formula: ([@tasaka Lemma 3.1]) $$\begin{gathered} e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}=\delta{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}+\sum_{i=1}^{r-1}\delta{{\textstyle\binom{\hat m_1,m_2,\ldots,m_i,\hat m_{i+1},m_{i+2},\ldots,m_r}{n_1,\ldots,n_{i-1},\hat n_i,\hat n_{i+1},n_{i+2},\ldots,n_r}}}\\ \cdot\left((-1)^{n_i}\binom{m_1-1}{n_i-1}+(-1)^{m_1-n_{i+1}}\binom{m_1-1}{n_{i+1}-1}\right){\,}\end{gathered}$$ (again, the hats indicate that $m_1,m_{i+1},n_i,n_{i+1}$ are omitted), where $$\begin{aligned} \delta{{\textstyle\binom{m_1,\ldots,m_{s}}{n_1,\ldots,n_{s}}}} \coloneqq \begin{cases} 1\quad\text{if } m_i=n_i \text{ for all }i\in\{1,\ldots,s\} \\ 0\quad\text{else} \end{cases}\end{aligned}$$ denotes the usual Kronecker delta. \[def:enrq\] Let $N,r$ be positive integers.
- We define the $|S_{N,r}|\times |S_{N,r}|$ matrix $$\begin{aligned} E_{N,r}\coloneqq \left(e{{\textstyle\binom{m_1,\ldots,m_r}{n_1,\ldots,n_r}}}\right)_{(m_1,\ldots,m_r),(n_1,\ldots,n_r)\in S_{N,r}}{\,}.\end{aligned}$$ - For integers $r\geq j\geq 2$ we also define the $|S_{N,r}|\times |S_{N,r}|$ matrix $$\begin{aligned} E_{N,r}^{(j)}\coloneqq \left(\delta{{\textstyle\binom{m_1,\ldots,m_{r-j}}{n_1,\ldots,n_{r-j}}}}e{{\textstyle\binom{m_{r-j+1},\ldots,m_r}{n_{r-j+1},\ldots,n_r}}} \right)_{(m_1,\ldots,m_r),(n_1,\ldots,n_r)\in S_{N,r}}{\,}.\end{aligned}$$ \[def:cnr\] The $|S_{N,r}|\times |S_{N,r}|$ matrix $C_{N,r}$ is defined as $$\begin{aligned} C_{N,r}\coloneqq E_{N,r}^{(2)}\cdot E_{N,r}^{(3)}\cdots E_{N,r}^{(r-1)}\cdot E_{N,r}{\,}. \end{aligned}$$ Known Results {#sec:known results} ============= Recall the map $\pi\colon\mathbf V_{N,r}\rightarrow{\mathsf{Vect}}_{N,r}$ (equation \[eq:natiso\]). Theorem \[thm:Schneps\] due to Baumard and Schneps [@schneps] establishes a connection between the left kernel of the matrix $E_{N,2}$ and the space $\mathbf W_{N,2}$ of restricted even period polynomials. This connection was further investigated by Tasaka [@tasaka], relating $\mathbf W_{N,r}$ and the left kernel of $E_{N,r}$ for arbitrary $r\geq2$. \[thm:Schneps\] For each integer $N>0$ we have $$\begin{aligned} \pi\left(\mathbf W_{N,2}\right)=\ker{\prescript{t\!}{}{E}}_{N,2}{\,}.\end{aligned}$$ \[thm:injection\] Let $r\geq2$ be an integer and $F_{N,r}=E_{N,r}-{\operatorname{id}}_{{\mathsf{Vect}}_{N,r}}$. Then, the following $\mathbb Q$-linear map is well-defined: $$\begin{aligned} \label{eq:TasakasFail} \begin{split} \mathbf W_{N,r}&\longrightarrow \ker{\prescript{t\!}{}{E}}_{N,r}\\ P(x_1,\ldots,x_r)&\longmapsto\pi(P)F_{N,r}{\,}. \end{split} \end{aligned}$$ \[con:isomorphism\]For all $r\geq2$, the map described in Theorem \[thm:injection\] is an isomorphism. \[rem:isor2\] For now, only the case $r=2$ is known, which is an immediate consequence of Theorem \[thm:Schneps\].
In [@tasaka], Tasaka suggests a proof of injectivity, but it seems to contain a gap, which, as far as the authors are aware, could not be fixed yet. However, assuming the injectivity of the map \[eq:TasakasFail\], one has the following relation. \[cor:enrineq\]For all $r\geq2$, $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\ker{\prescript{t\!}{}{E}}_{N,r}\cdot x^N\geq\mathbb O(x)^{r-2}\mathbb S(x){\,}. \end{aligned}$$ Main Tools {#sec:main tools} ========== Decompositions of $E_{N,r}^{(j)}$ --------------------------------- We use the following decomposition lemma: \[lem:blockdia\] Let $2\leq j\leq r-1$ and arrange the indices $(m_1,\ldots,m_r),(n_1,\ldots,n_r)\in S_{N,r}$ of $E_{N,r}^{(j)}$ in lexicographically decreasing order. Then, the matrix $E_{N,r}^{(j)}$ has block diagonal structure $$\begin{aligned} E_{N,r}^{(j)}={\operatorname{diag}}\left(E_{3r-3,r-1}^{(j)},E_{3r-1,r-1}^{(j)},\ldots,E_{N-3,r-1}^{(j)}\right){\,}.\end{aligned}$$ This follows directly from Definition \[def:enrq\]. \[cor:blockdia\] We have $$\begin{aligned} E_{N,r}^{(2)}E_{N,r}^{(3)}\cdots E_{N,r}^{(r-1)}={\operatorname{diag}}\left(C_{3r-3,r-1},C_{3r-1,r-1},\ldots,C_{N-3,r-1}\right){\,}. \end{aligned}$$ Multiplying the block diagonal representations of $E_{N,r}^{(2)},E_{N,r}^{(3)},\ldots,E_{N,r}^{(r-1)}$ block by block together with Definition \[def:cnr\] yields the desired result. \[cor:enrjocnr\] For all $r\geq 3$, $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\ker\left(E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)\cdot x^N=\mathbb O(x)\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,r-1}\cdot x^N{\,}. \end{aligned}$$ According to Corollary \[cor:blockdia\], the matrix $E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}$ has block diagonal structure, the blocks being $C_{3r-3,r-1},C_{3r-1,r-1},\ldots,C_{N-3,r-1}$.
Hence, $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\ker\left(E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)\cdot x^N &= \sum_{N>0} \left( \sum_{k=3r-3}^{N-3}\dim_{\mathbb Q}\ker C_{k,r-1} \right) \cdot x^N \\ &=\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,r-1}\left(x^{N+3}+x^{N+5}+x^{N+7}+\cdots\right)\\ &=\mathbb O(x)\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,r-1}\cdot x^N{\,}, \end{aligned}$$ thus proving the assertion. Connection to polynomials ------------------------- Motivated by Theorem \[thm:injection\], we interpret the right action of the matrices $E_{N,r}^{(2)},\ldots,E_{N,r}^{(r-1)},E_{N,r}^{(r)}=E_{N,r}$ on ${\mathsf{Vect}}_{N,r}$ as endomorphisms of the polynomial space $\mathbf V_{N,r}$. Having established this, we will prove Theorems \[thm:case3\] and \[thm:case4\] from a polynomial point of view. \[def:phij\] The *restricted totally even part* of a polynomial $Q(x_1,\ldots,x_r)\in\mathbf V_{N,r}$ is the sum of all of its monomials, in which each exponent of $x_1,\ldots,x_r$ is even and at least 2. Let $r\geq j$. We define the $\mathbb Q$-linear map $$\begin{aligned} {\varphi^{(r)}_{j}}\colon\mathbf V_{N,r}\longrightarrow\mathbf V_{N,r}{\,}, \end{aligned}$$ which maps each polynomial $Q(x_1,\ldots,x_r)\in \mathbf V_{N,r}$ to the restricted totally even part of $$\begin{gathered} Q(x_1,\ldots,x_r)+\sum_{i=r-j+1}^{r-1}\Bigl(Q(x_1,\ldots,x_{r-j},x_{i+1}-x_i,x_{r-j+1},\ldots,\hat x_{i+1},\ldots,x_r)\\ -Q(x_1,\ldots,x_{r-j},x_{i+1}-x_i,x_{r-j+1},\ldots,\hat x_i,\ldots,x_r)\Bigr){\,}. \end{gathered}$$ Note that ${\varphi^{(r)}_{1}}\equiv{\operatorname{id}}_{\mathbf V_{N,r}}$. The following lemma shows that the map ${\varphi^{(r)}_{j}}$ corresponds to the right action of the matrix $E_{N,r}^{(j)}$ on ${\mathsf{Vect}}_{N,r}$ via the isomorphism $\pi$. \[lem:phij\] Let $r\geq j$. 
Then, for each polynomial $Q\in\mathbf V_{N,r}$, $$\begin{aligned} \pi\left({\varphi^{(r)}_{j}}(Q)\right)=\pi(Q)E_{N,r}^{(j)}{\,},\end{aligned}$$ or equivalently, the following diagram commutes: $$\begin{array}{ccc} \mathbf V_{N,r} & \xrightarrow{\ {\varphi^{(r)}_{j}}\ } & \mathbf V_{N,r}\\[2pt] \pi\downarrow{\scriptstyle\wr} & & \pi\downarrow{\scriptstyle\wr}\\[2pt] {\mathsf{Vect}}_{N,r} & \xrightarrow{\ \cdot E_{N,r}^{(j)}\ } & {\mathsf{Vect}}_{N,r} \end{array}$$ (on elements, the bottom row is $v\mapsto v\cdot E_{N,r}^{(j)}$). We proceed by induction on $r$. Let $r=j$ and $$\begin{aligned} Q(x_1,\ldots,x_j)=\sum_{(n_1,\ldots,n_j)\in S_{N,j}}q_{n_1,\ldots,n_j}x_1^{n_1-1}\cdots x_j^{n_j-1}{\,}.\end{aligned}$$Then, $E_{N,j}^{(j)}=E_{N,j}$ and thus $$\begin{aligned} \pi(Q)E_{N,j}=\left(\sum_{(m_1,\ldots,m_j)\in S_{N,j}}q_{m_1,\ldots,m_j}e{{\textstyle\binom{m_1,\ldots,m_j}{n_1,\ldots,n_j}}}\right)_{(n_1,\ldots,n_j)\in S_{N,j}}{\,}.\end{aligned}$$ By \[eq:e\] and linearity of the Ihara action ${\mathbin{\underline{\circ}}}$, the row vector on the right-hand side corresponds to $\pi$ applied to the restricted totally even part of the polynomial $$\begin{aligned} \label{eq:phiihara} \sum_{(n_1,\ldots,n_j)\in S_{N,j}}q_{n_1,\ldots,n_j}x_1^{n_1-1}{\mathbin{\underline{\circ}}}\left(x_1^{n_2-1}\cdots x_{j-1}^{n_j-1}\right){\,}.\end{aligned}$$ On the other hand, plugging $r=j$ into Definition \[def:phij\] yields that ${\varphi^{(j)}_{j}}(Q(x_1,\ldots,x_j))$ corresponds to the restricted totally even part of some polynomial, which by definition of the Ihara action ${\mathbin{\underline{\circ}}}$ coincides with the polynomial defined in \[eq:phiihara\]. Thus, the claim holds for $r=j$. Now suppose that $r\geq j+1$ and the claim is proven for all smaller $r$.
Let us decompose $$\begin{aligned} Q(x_1,\ldots,x_r)=\sum_{\substack{n_1=3\\n_1\text{ odd}}}^{N-(3r-3)}x_1^{n_1-1}\cdot Q_{N-n_1}(x_2,\ldots,x_r){\,},\end{aligned}$$where the $Q_k$ are restricted totally even homogeneous polynomials in $r-1$ variables. In particular, $Q_k\in\mathbf V_{k,r-1}$ for all $k$. Arrange the indices of $\pi(Q)$ in lexicographically decreasing order. Then, by grouping consecutive entries, $\pi(Q)$ is the list-like concatenation of $\pi(Q_{3r-3}),\ldots,\pi(Q_{N-3})$, which we denote by $$\begin{aligned} \pi(Q)=\big(\pi(Q_{3r-3}),\pi(Q_{3r-1}),\ldots,\pi(Q_{N-3})\big){\,}.\end{aligned}$$ Since we have lexicographically decreasing order of indices, the block diagonal structure of $E_{N,r}^{(j)}$ stated in Lemma \[lem:blockdia\] yields $$\begin{aligned} \pi(Q)E_{N,r}^{(j)}&=\left(\pi(Q_{3r-3})E_{3r-3,r-1}^{(j)},\pi(Q_{3r-1})E_{3r-1,r-1}^{(j)},\ldots,\pi(Q_{N-3})E_{N-3,r-1}^{(j)}\right)\\ &=\left(\pi\left({\varphi^{(r-1)}_{j}}(Q_{3r-3})\right),\pi\left({\varphi^{(r-1)}_{j}}(Q_{3r-1})\right),\ldots,\pi\left({\varphi^{(r-1)}_{j}}(Q_{N-3})\right)\right)\\ &=\pi\left({\varphi^{(r)}_{j}}(Q)\right)\end{aligned}$$by linearity of ${\varphi^{(r)}_{j}}$ and the induction hypothesis. This shows the assertion. 
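Lemma \[lem:phij\] (and, in low weight, Theorem \[thm:Schneps\]) can be checked by machine in depth $r=j=2$, where ${\varphi^{(2)}_{2}}(Q)$ is the restricted totally even part of $Q(x_1,x_2)+Q(x_2-x_1,x_1)-Q(x_2-x_1,x_2)$. The following stdlib-Python sketch is our own sanity check, not part of the paper: it expands this substitution monomial by monomial via the binomial theorem, compares the result with the rows of $E_{N,2}$ obtained from the closed formula for $e$, and verifies that $\dim\ker{\prescript{t\!}{}{E}}_{N,2}$ matches $\dim\mathcal S_N=0,0,0,1$ for $N=6,8,10,12$, the weight-12 kernel being spanned by $\pi\big(x_1^{8}x_2^{2}-3x_1^{6}x_2^{4}+3x_1^{4}x_2^{6}-x_1^{2}x_2^{8}\big)$.

```python
from fractions import Fraction
from itertools import product
from math import comb

def S(N, r):
    """Totally odd indices of weight N and depth r, lexicographically decreasing."""
    return sorted((t for t in product(range(3, N + 1, 2), repeat=r)
                   if sum(t) == N), reverse=True)

def e2(m, n):
    """e(m; n) in depth 2: for totally odd n the closed formula reduces to
    delta(m, n) - C(m1-1, n1-1) + C(m1-1, n2-1)."""
    return (1 if m == n else 0) - comb(m[0] - 1, n[0] - 1) + comb(m[0] - 1, n[1] - 1)

def phi22(a, b):
    """Restricted totally even part of Q + Q(x2-x1, x1) - Q(x2-x1, x2) for the
    monomial Q = x1^a x2^b, returned as {(exponent pair): coefficient}."""
    out = {}
    def put(key, c):
        out[key] = out.get(key, 0) + c
    put((a, b), 1)                              # Q(x1, x2)
    for k in range(a + 1):                      # (x2-x1)^a: coeff of x1^(a-k) x2^k
        c = comb(a, k) * (-1) ** (a - k)
        put((a - k + b, k), c)                  # (x2-x1)^a * x1^b
        put((a - k, k + b), -c)                 # -(x2-x1)^a * x2^b
    return {ab: c for ab, c in out.items()
            if c and all(t >= 2 and t % 2 == 0 for t in ab)}

# Lemma [lem:phij] in depth 2: pi(phi(Q)) = pi(Q) E_{N,2}, monomial by monomial.
for N in (8, 10, 12, 14, 16, 18):
    for m in S(N, 2):
        row = {(n[0] - 1, n[1] - 1): e2(m, n) for n in S(N, 2) if e2(m, n)}
        assert phi22(m[0] - 1, m[1] - 1) == row, (N, m)

def left_kernel_dim(mat):
    """Number of rows minus the rank over Q (Gaussian elimination with Fractions)."""
    A = [[Fraction(x) for x in row] for row in mat]
    rank = 0
    for col in range(len(A[0])):
        piv = next((i for i in range(rank, len(A)) if A[i][col]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        for i in range(len(A)):
            if i != rank and A[i][col]:
                f = A[i][col] / A[rank][col]
                A[i] = [x - f * y for x, y in zip(A[i], A[rank])]
        rank += 1
    return len(A) - rank

# Theorem [thm:Schneps] in low weight: dim ker tE_{N,2} = dim S_N = 0, 0, 0, 1.
dims = [left_kernel_dim([[e2(m, n) for n in S(N, 2)] for m in S(N, 2)])
        for N in (6, 8, 10, 12)]
assert dims == [0, 0, 0, 1]

# pi of the period polynomial x1^8 x2^2 - 3 x1^6 x2^4 + 3 x1^4 x2^6 - x1^2 x2^8:
v = [1, -3, 3, -1]
E12 = [[e2(m, n) for n in S(12, 2)] for m in S(12, 2)]
assert all(sum(vi * row[j] for vi, row in zip(v, E12)) == 0 for j in range(4))
```

The two code paths (direct substitution versus the closed formula) are independent, so their agreement is a genuine consistency check of both.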
\[cor:phiEiso\] For all $r\geq2$, $$\begin{aligned} {\operatorname{Im}}{\prescript{t\!}{}{\left(E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}}\cap\ker{\prescript{t\!}{}{E}}_{N,r}\cong\ker{\varphi^{(r)}_{r}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right){\,}.\end{aligned}$$ By the previous Lemma \[lem:phij\], the following diagram commutes: $$\begin{array}{ccccccccc} \mathbf V_{N,r} & \xrightarrow{{\varphi^{(r)}_{2}}} & \mathbf V_{N,r} & \xrightarrow{{\varphi^{(r)}_{3}}} & \cdots & \xrightarrow{{\varphi^{(r)}_{r-1}}} & \mathbf V_{N,r} & \xrightarrow{{\varphi^{(r)}_{r}}} & \mathbf V_{N,r}\\[2pt] \pi\downarrow{\scriptstyle\wr} & & \pi\downarrow{\scriptstyle\wr} & & & & \pi\downarrow{\scriptstyle\wr} & & \pi\downarrow{\scriptstyle\wr}\\[2pt] {\mathsf{Vect}}_{N,r} & \xrightarrow{\cdot E_{N,r}^{(2)}} & {\mathsf{Vect}}_{N,r} & \xrightarrow{\cdot E_{N,r}^{(3)}} & \cdots & \xrightarrow{\cdot E_{N,r}^{(r-1)}} & {\mathsf{Vect}}_{N,r} & \xrightarrow{\cdot E_{N,r}^{(r)}} & {\mathsf{Vect}}_{N,r} \end{array}$$ From this, we have ${\operatorname{Im}}{\prescript{t\!}{}{\left(E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}}\cong{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right)$ and $\ker{\prescript{t\!}{}{E}}_{N,r}\cong\ker{\varphi^{(r)}_{r}}$. Thereby, the claim is established. \[lem:kerphi\]Let $j\leq r-1$. Then, $$\begin{aligned} \ker{\varphi^{(r)}_{j}}\cong\bigoplus_{n<N}\mathbf V_{N-n,r-j}\otimes\ker E_{n,j}{\,}. \end{aligned}$$ Let $Q\in\mathbf V_{N,r}$. We may decompose $$\begin{aligned} Q(x_1,\ldots,x_r)=\sum_{n<N}\ \sum_{(n_1,\ldots,n_{r-j})\in S_{N-n,r-j}}x_1^{n_1-1}\cdots x_{r-j}^{n_{r-j}-1}R_{n_1,\ldots,n_{r-j}}(x_{r-j+1},\ldots,x_r){\,},\end{aligned}$$ where $R_{n_1,\ldots,n_{r-j}}\in\mathbf V_{n,j}$ is a restricted totally even homogeneous polynomial.
Note that we have $Q\in\ker{\varphi^{(r)}_{j}}$ if and only if ${\varphi^{(j)}_{j}}(R_{n_1,\ldots,n_{r-j}})=0$ holds for each of the polynomials $R_{n_1,\ldots,n_{r-j}}$ in the above decomposition. By Lemma \[lem:phij\], ${\varphi^{(j)}_{j}}(R_{n_1,\ldots,n_{r-j}})=0$ if and only if $\pi(R_{n_1,\ldots,n_{r-j}})\in\ker E_{n,j}$. Now, the assertion is immediate. \[cor:phijres\] Let $2\leq j\leq r-2$. The restricted map $$\begin{aligned} {\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}\colon\mathbf W_{N,r}\longrightarrow\mathbf W_{N,r} \end{aligned}$$ is well-defined and satisfies $$\begin{aligned} \ker{\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}\cong\bigoplus_{n<N}\mathbf W_{N-n,r-j}\otimes\ker E_{n,j}{\,}. \end{aligned}$$ Since $j\leq r-2$, for each $Q\in\mathbf V_{N,r}$ the map $Q(x_1,\ldots,x_r)\mapsto{\varphi^{(r)}_{j}}(Q)$ does not interfere with $x_1$ or $x_2$ and thus not with the defining property of $\mathbf W_{N,r}$. Hence, ${\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}$ is well-defined. The second assertion is proved just as in the previous Lemma \[lem:kerphi\]. \[lem:fnr\] Let $r\geq3$. For all $P\in\mathbf W_{N,r}$, $$\begin{aligned} \pi(-P)E_{N,r}^{(r-1)}=\pi(P)F_{N,r}{\,}.\end{aligned}$$ Recall that by Lemma \[lem:phij\], $$\begin{aligned} \pi(-P)E_{N,r}^{(r-1)}&=\pi\left( {\varphi^{(r)}_{r-1}}\big(-P(x_1,\ldots,x_r)\big)\right)\\ &=\begin{multlined}[t] \pi\bigg(-P(x_1,\ldots,x_r)+\sum_{i=2}^{r-1}\Big(-P(x_1,x_{i+1}-x_i,x_2,\ldots, \hat x_{i+1},\ldots,x_r)\\ +P(x_1,x_{i+1}-x_i,x_2,\ldots, \hat x_i,\ldots,x_r)\Big)\bigg) \end{multlined}\\ &=\begin{multlined}[t] \pi\bigg(-P(x_1,\ldots,x_r)+\sum_{i=2}^{r-1}\Big(P(x_{i+1}-x_i,x_1,\ldots, \hat x_{i+1},\ldots,x_r)\\ -P(x_{i+1}-x_i,x_1,\ldots, \hat x_i,\ldots,x_r)\Big)\bigg){\,}, \end{multlined}\end{aligned}$$ since $-P$ is antisymmetric with respect to $x_1\leftrightarrow x_2$.
In the same way we compute $$\begin{aligned} \pi(P)F_{N,r}&=\pi(P)\left(E_{N,r}-{\operatorname{id}}_{{\mathsf{Vect}}_{N,r}}\right)=\pi\big({\varphi^{(r)}_{r}}(P(x_1,\ldots,x_r))\big)-\pi(P)\\ &=\begin{multlined}[t] \pi\bigg(P(x_1,\ldots,x_r)+\sum_{i=1}^{r-1}\Big(P(x_{i+1}-x_i,x_1,\ldots, \hat x_{i+1},\ldots,x_r)\\ -P(x_{i+1}-x_i,x_1,\ldots, \hat x_i,\ldots,x_r)\Big)\bigg)-\pi(P){\,}. \end{multlined}\end{aligned}$$ Now the desired result follows from $$\begin{aligned} P(x_1,x_2,\ldots,x_r)+P(x_2-x_1,x_1,x_3,\ldots,x_r)-P(x_2-x_1,x_2,\ldots,x_r)=0{\,},\end{aligned}$$ since $P$ is in $\mathbf W_{N,r}$. \[cor:kerimineq\] Assume that the map from Theorem \[thm:injection\] is injective. Then, for all $r\geq 3$, $$\begin{aligned} \dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{E}}_{N,r}^{(r-1)}\cap\ker{\prescript{t\!}{}{E}}_{N,r}\right)\geq\dim_{\mathbb Q}\mathbf W_{N,r}{\,}.\end{aligned}$$ This is immediate from the previous Lemma \[lem:fnr\]. \[lem:imphi\] For all $r\geq3$, $$\begin{aligned} {\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\subseteq\ker {\varphi^{(r)}_{r}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right){\,}.\end{aligned}$$ Since the left-hand side is obviously contained in ${\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right)$, it suffices to show that it is contained in $\ker{\varphi^{(r)}_{r}}$. Note that by Corollary \[cor:phijres\] the composition of restricted ${\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}$ on the left-hand side is well-defined. Moreover, each $Q\in{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)$ can be represented as $Q={\varphi^{(r)}_{r-1}}(P)$ for some $P\in\mathbf W_{N,r}$ and thus $Q\in\ker{\varphi^{(r)}_{r}}$ according to Lemma \[lem:fnr\] and Theorem \[thm:injection\].
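Lemma \[lem:fnr\] can also be checked by machine in the first nontrivial case $r=3$, $N=15$. The stdlib-Python sketch below is our own sanity check, not part of the proof: we take $P=\big(x_1^{8}x_2^{2}-3x_1^{6}x_2^{4}+3x_1^{4}x_2^{6}-x_1^{2}x_2^{8}\big)x_3^{2}$, which lies in $\mathbf W_{12,2}\otimes\mathbf V_{3,1}\subseteq\mathbf W_{15,3}$ by the decomposition \[eq:decomposition\] (the depth-2 factor generates the one-dimensional space $\mathbf W_{12,2}$ attached to the weight-12 cusp form), and verify $\pi(-P)E_{15,3}^{(2)}=\pi(P)F_{15,3}$ entrywise.

```python
from itertools import product
from math import comb

def S(N, r):
    """Totally odd indices of weight N and depth r, lexicographically decreasing."""
    return sorted((t for t in product(range(3, N + 1, 2), repeat=r)
                   if sum(t) == N), reverse=True)

def e(m, n):
    """e(m; n) via the closed formula of [tasaka, Lemma 3.1], arbitrary depth."""
    if sum(m) != sum(n):
        return 0
    r, val = len(m), (1 if m == n else 0)
    for i in range(1, r):                               # i = 1, ..., r-1 as in the text
        if m[1:i] + m[i + 1:] == n[:i - 1] + n[i + 1:]:  # Kronecker delta with hats
            val += ((-1) ** n[i - 1] * comb(m[0] - 1, n[i - 1] - 1)
                    + (-1) ** (m[0] - n[i]) * comb(m[0] - 1, n[i] - 1))
    return val

idx = S(15, 3)
# pi(P) for P = (x1^8 x2^2 - 3 x1^6 x2^4 + 3 x1^4 x2^6 - x1^2 x2^8) * x3^2:
coeff = {(9, 3, 3): 1, (7, 5, 3): -3, (5, 7, 3): 3, (3, 9, 3): -1}
v = [coeff.get(m, 0) for m in idx]

# E^{(2)}_{15,3} = (delta(m1; n1) e(m2, m3; n2, n3)) and the full E_{15,3}:
E2 = [[(1 if m[0] == n[0] else 0) * e(m[1:], n[1:]) for n in idx] for m in idx]
E = [[e(m, n) for n in idx] for m in idx]

lhs = [-sum(vi * E2[i][j] for i, vi in enumerate(v)) for j in range(len(idx))]
rhs = [sum(vi * E[i][j] for i, vi in enumerate(v)) - v[j] for j in range(len(idx))]
assert lhs == rhs      # pi(-P) E^{(2)} = pi(P)(E - id), as in Lemma [lem:fnr]
```
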
Similar to Conjecture \[con:isomorphism\] we expect a stronger result to be true, which is stated in the following conjecture due to Claire Glanois: \[con:imiso\] For all $r\geq 3$, $$\begin{aligned} {\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)=\ker {\varphi^{(r)}_{r}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right){\,}.\end{aligned}$$ Note that, since the right-hand side is already intersected with ${\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ\cdots\circ{\varphi^{(r)}_{2}}\right)$, Conjecture \[con:imiso\] does not need the injectivity from Conjecture \[con:isomorphism\]. However, we have not been able to derive Conjecture \[con:imiso\] from Conjecture \[con:isomorphism\], so it is not necessarily weaker. Main Results {#sec:main results} ============ Throughout this section we will assume that the map from Theorem \[thm:injection\] is injective, i.e. the injectivity part of Conjecture \[con:isomorphism\] is true. This was also the precondition for Tasaka’s original proof of Theorem \[thm:case4\]. Proof of Theorem \[thm:case3\].
------------------------------- By Corollary \[cor:enrjocnr\], Remark \[rem:isor2\] and the fact that $E_{N,2}=C_{N,2}$ we obtain $$\begin{aligned} \label{eq:case3ineq1} \sum_{N>0}\dim_{\mathbb Q}\ker E_{N,3}^{(2)}\cdot x^N=\mathbb O(x)\sum_{N>0}\dim_{\mathbb Q}\ker C_{N,2}\cdot x^N=\mathbb O(x)\mathbb S(x){\,}.\end{aligned}$$ We use Corollary \[cor:kerimineq\] and Lemma \[lem:wnreq\] to obtain $$\begin{aligned} \label{eq:case3ineq2} \sum_{N>0}\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{E}}_{N,3}^{(2)} \cap \ker{\prescript{t\!}{}{E}}_{N,3}\right)\cdot x^N \geq \sum_{N>0}\dim_{\mathbb Q}\mathbf W_{N,3}\cdot x^N = \mathbb O(x) \mathbb S(x){\,}.\end{aligned}$$ Now observe that since $C_{N,3}=E_{N,3}^{(2)}E_{N,3}$, we have $$\begin{aligned} \dim_{\mathbb Q}\ker C_{N,3}=\dim_{\mathbb Q}\ker{\prescript{t\!}{}{E}}_{N,3}^{(2)}+\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{E}}_{N,3}^{(2)} \cap \ker{\prescript{t\!}{}{E}}_{N,3}\right){\,}.\end{aligned}$$ By \[eq:case3ineq1\] and \[eq:case3ineq2\], the assertion is proven. Proof of Theorem \[thm:case4\]. ------------------------------- Since $C_{N,4}=E_{N,4}^{(2)}E_{N,4}^{(3)}E_{N,4}$, we may split $\dim_{\mathbb Q}\ker C_{N,4}$ into $$\begin{aligned} \dim_{\mathbb Q}\ker C_{N,4}=\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,4}^{(2)}E_{N,4}^{(3)}\right)}}+\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{\left( E_{N,4}^{(2)}E_{N,4}^{(3)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,4}\right){\,}.\end{aligned}$$ The two summands on the right-hand side are treated separately.
For the first one, by Corollary \[cor:enrjocnr\] and Theorem \[thm:case3\] one has $$\begin{aligned} \label{eq:case4ineq1} \sum_{N>0}\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,4}^{(2)}E_{N,4}^{(3)}\right)}}\cdot x^N\geq2\mathbb O(x)^2\mathbb S(x){\,}.\end{aligned}$$ For the second one, we use Corollary \[cor:phiEiso\] and Lemma \[lem:imphi\] to obtain $$\begin{aligned} \dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{\left( E_{N,4}^{(2)}E_{N,4}^{(3)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,4}\right)&\geq\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(4)}_{3}}\circ{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}\right)\\ &=\dim_{\mathbb Q}{\operatorname{Im}}{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}{\,},\end{aligned}$$ since we assume ${\varphi^{(4)}_{3}}$ to be injective on $\mathbf W_{N,4}$ according to Conjecture \[con:isomorphism\]. According to Corollary \[cor:phijres\] and Theorem \[thm:Schneps\], $$\begin{aligned} \ker{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}\cong\bigoplus_{n<N}\mathbf W_{N-n,2}\otimes\ker E_{n,2}\cong\bigoplus_{n<N}\mathbf W_{N-n,2}\otimes\mathbf W_{n,2}{\,}.\end{aligned}$$ Now, by $\dim_{\mathbb Q}{\operatorname{Im}}{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}=\dim_{\mathbb Q}\mathbf W_{N,4}-\dim_{\mathbb Q}\ker{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}$ we obtain $$\begin{aligned} \label{eq:case4ineq2} \sum_{N>0}\dim_{\mathbb Q}{\operatorname{Im}}{\varphi^{(4)}_{2}\big|_{\mathbf W_{N,4}}}\cdot x^N=\mathbb O(x)^2\mathbb S(x)-\mathbb S(x)^2{\,}.\end{aligned}$$ Combining \[eq:case4ineq1\] and \[eq:case4ineq2\], the proof is finished. The case $r=5$ assuming Conjecture \[con:isomorphism\]. ------------------------------------------------------- In addition to the injectivity of \[eq:TasakasFail\], we now assume Conjecture \[con:isomorphism\] is true in the case $r=3$, i.e. $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\ker{\prescript{t\!}{}{E}}_{N,3}\cdot x^N=\mathbb O(x)\mathbb S(x)\label{eq:case5eq1}\end{aligned}$$ by Corollary \[cor:enrineq\].
Our goal is to prove the lower bound $$\begin{aligned} \sum_{N>0}\dim_{\mathbb Q}\ker C_{N,5}\cdot x^N\geq 4 \mathbb O(x)^3\mathbb S(x)-3 \mathbb O(x)\mathbb S(x)^2{\,},\label{eq:case5}\end{aligned}$$ which as an equality would be the exact value predicted by Conjecture \[con:Brown\]. Again we use the decomposition $C_{N,5}=E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}E_{N,5}$ to split $\dim_{\mathbb Q}\ker C_{N,5}$ into $$\begin{aligned} \dim_{\mathbb Q}\ker C_{N,5}=\dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}\right)}}+\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{ \left( E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,5}\right){\,}.\end{aligned}$$ Applying Corollary \[cor:enrjocnr\] and Theorem \[thm:case4\] to the first summand on the right-hand side, we obtain $$\begin{aligned} \label{eq:case5ineq1} \sum_{N>0}\dim_{\mathbb Q}\ker\left(E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}\right)\cdot x^N\geq3\mathbb O(x)^3\mathbb S(x)-\mathbb O(x)\mathbb S(x)^2{\,}.\end{aligned}$$ Again, for the second summand Corollary \[cor:phiEiso\] and Lemma \[lem:imphi\] yield $$\begin{aligned} \begin{split} \dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{\left( E_{N,5}^{(2)}E_{N,5}^{(3)}E_{N,5}^{(4)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,5}\right)&\geq\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(5)}_{4}}\circ{\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\circ{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\right)\\ &=\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\circ{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\right){\,}, \end{split}\end{aligned}$$ since ${\varphi^{(5)}_{4}}$ is injective on $\mathbf W_{N,5}$ by our assumption.
According to Corollary \[cor:phijres\] and Theorem \[thm:Schneps\], $$\begin{aligned} \ker{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\cong\bigoplus_{n<N}\mathbf W_{N-n,3}\otimes\ker E_{n,2}\cong\bigoplus_{n<N}\mathbf W_{N-n,3}\otimes\mathbf W_{n,2}\end{aligned}$$ and by \[eq:case5eq1\], $$\begin{aligned} \ker{\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\cong\bigoplus_{n<N}\mathbf W_{N-n,2}\otimes\ker E_{n,3}\cong\bigoplus_{n<N}\mathbf W_{N-n,2}\otimes\mathbf W_{n,3}{\,}.\end{aligned}$$ Now, by $$\begin{aligned} \dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\circ{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\right)\geq\dim_{\mathbb Q}\mathbf W_{N,5}-\dim_{\mathbb Q}\ker{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}-\dim_{\mathbb Q}\ker{\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\end{aligned}$$ we arrive at $$\begin{aligned} \label{eq:case5ineq2} \sum_{N>0}\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(5)}_{3}\big|_{\mathbf W_{N,5}}}\circ{\varphi^{(5)}_{2}\big|_{\mathbf W_{N,5}}}\right)\cdot x^N\geq\mathbb O(x)^3\mathbb S(x)-2\mathbb O(x)\mathbb S(x)^2{\,}.\end{aligned}$$ Combining \[eq:case5ineq1\] and \[eq:case5ineq2\] yields the desired result. A recursive approach to the general case $r\geq2$ ------------------------------------------------- In this section, we show that one can recursively derive the exact value of $\dim_{\mathbb Q}\ker C_{N,r}$ from Conjecture \[con:imiso\]. Let us fix some notation: For $r\geq2$, let us define the formal series $$\begin{aligned} B_r(x)&\coloneqq \sum_{N>0}\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\cdot x^N\tag i\\ T_r(x)&\coloneqq \sum_{N>0}\dim_{\mathbb Q}\ker C_{N,r}\cdot x^N\tag{ii}{\,}.\end{aligned}$$ We set $T_0(x),T_1(x)\coloneqq 0$. The main observation is the following lemma: \[lem:recursion\] Assume that Conjecture \[con:imiso\] is true and that the map from Theorem \[thm:injection\] is injective.
Then, for $r\geq 3$ the following recursion holds: $$\begin{aligned} B_r(x)=\mathbb O(x)^{r-2}\mathbb S(x)-\sum_{j=2}^{r-2}\mathbb O(x)^{r-j-2}\mathbb S(x)B_j(x){\,}.\end{aligned}$$ We have $$\begin{gathered} \dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\\ =\begin{aligned}[t] \dim_{\mathbb Q}\mathbf W_{N,r}&- \sum_{j=2}^{r-2}\dim_{\mathbb Q}\ker{\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{j-1}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\\ &-\dim_{\mathbb Q}\ker{\varphi^{(r)}_{r-1}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right){\,}. \end{aligned}\end{gathered}$$ Since we assume ${\varphi^{(r)}_{r-1}}$ to be injective on $\mathbf W_{N,r}$, the last summand on the right-hand side vanishes. Let $2\leq j\leq r-2$. As the restriction to $\mathbf W_{N,r}$ only affects $x_1$ and $x_2$, whereas ${\varphi^{(r)}_{j}}$ acts on $x_{r-j+1},\ldots,x_r$, we obtain $$\begin{gathered} \ker{\varphi^{(r)}_{j}\big|_{\mathbf W_{N,r}}}\cap{\operatorname{Im}}\left({\varphi^{(r)}_{j-1}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right)\\ \begin{aligned}[t] &=\bigoplus_{n<N}\mathbf W_{N-n,r-j}\otimes\left(\ker{\varphi^{(j)}_{j}}\cap{\operatorname{Im}}\left({\varphi^{(j)}_{j-1}}\circ\cdots\circ{\varphi^{(j)}_{2}}\right)\right)\\ &=\bigoplus_{n<N}\mathbf W_{N-n,r-j}\otimes{\operatorname{Im}}\left({\varphi^{(j)}_{j-1}}\circ{\varphi^{(j)}_{j-2}\big|_{\mathbf W_{n,j}}}\circ\cdots\circ{\varphi^{(j)}_{2}\big|_{\mathbf W_{n,j}}}\right){\,}, \end{aligned}\end{gathered}$$ where the last equality follows from Conjecture \[con:imiso\]. 
Hence, if we denote $$\begin{aligned} a_{N,r}=\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right){\,},\end{aligned}$$we obtain the recursion $$\begin{aligned} \label{eq:casercauchy} a_{N,r}=\dim_{\mathbb Q}\mathbf W_{N,r}-\sum_{j=2}^{r-2}\sum_{n<N}\dim_{\mathbb Q}\mathbf W_{N-n,r-j}\cdot a_{n,j}{\,}.\end{aligned}$$ By Lemma \[lem:wnreq\] and the convolution formula for multiplying formal series, equation \[eq:casercauchy\] establishes the claim. \[thm:recursion\] Upon Conjecture \[con:imiso\] and the injectivity of \[eq:TasakasFail\], for all $r\geq3$ the following recursion is satisfied: $$\begin{aligned} T_r(x)=\mathbb O(x)T_{r-1}(x)-\mathbb S(x)T_{r-2}(x)+\mathbb O(x)^{r-2}\mathbb S(x){\,}.\end{aligned}$$ As we assume Conjecture \[con:imiso\], we get from Definition \[def:cnr\] and Corollary \[cor:phiEiso\] $$\begin{aligned} \dim_{\mathbb Q}\ker C_{N,r}&= \begin{multlined}[t] \dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}}\\+\dim_{\mathbb Q}\left({\operatorname{Im}}{\prescript{t\!}{}{\left( E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}} \cap \ker{\prescript{t\!}{}{E}}_{N,r}\right) \end{multlined}\\ &=\begin{multlined}[t] \dim_{\mathbb Q}\ker{\prescript{t\!}{}{\left( E_{N,r}^{(2)}\cdots E_{N,r}^{(r-1)}\right)}}\\+\dim_{\mathbb Q}{\operatorname{Im}}\left({\varphi^{(r)}_{r-1}}\circ{\varphi^{(r)}_{r-2}\big|_{\mathbf W_{N,r}}}\circ\cdots\circ{\varphi^{(r)}_{2}\big|_{\mathbf W_{N,r}}}\right) \end{multlined}\end{aligned}$$ and thus, by Corollary \[cor:enrjocnr\], $$\begin{aligned} T_r(x)=\mathbb O(x)T_{r-1}(x)+B_r(x){\,}.\end{aligned}$$ Using Lemma \[lem:recursion\], we obtain $$\begin{aligned} T_r(x)&=\mathbb O(x)T_{r-1}(x)+\mathbb O(x)^{r-2}\mathbb S(x)-\sum_{j=2}^{r-2}\mathbb O(x)^{r-j-2}\mathbb S(x)\big(T_j(x)-\mathbb O(x)T_{j-1}(x)\big)\\ &=\mathbb O(x)T_{r-1}(x)+\mathbb O(x)^{r-2}\mathbb S(x)-\mathbb S(x)T_{r-2}(x)+\mathbb O(x)^{r-3}\mathbb S(x)T_1(x)\\ &=\mathbb O(x)T_{r-1}(x)-\mathbb S(x)T_{r-2}(x)+\mathbb O(x)^{r-2}\mathbb S(x){\,},\end{aligned}$$where by definition $T_1(x)=0$. The conclusion follows. Note that by our choice of $T_0(x)$ and $T_1(x)$, Theorem \[thm:recursion\] remains true for $r=2$ since we know from [@schneps] that $T_2(x)=\mathbb S(x)$. Under the assumption of Conjecture \[con:imiso\] and injectivity in \[eq:TasakasFail\], we are now ready to prove that the generating series of ${\operatorname{rank}}C_{N,r}$ equals the explicit series $\frac{1}{1-\mathbb O(x)y+\mathbb S(x)y^2}$ as was claimed in Conjecture \[con:Brown\]. This (under the same assumptions) proves the motivic version of Conjecture \[con:Brown\] (i.e. with $\mathcal Z_{N,r}^{{\operatorname{odd}}}$ replaced by $\mathcal H_{N,r}^{{\operatorname{odd}}}$). Let $R_r(x)=\mathbb O(x)^r-T_r(x)$ and note that by Theorem \[thm:recursion\] $$\begin{aligned} R_r(x)=\mathbb O(x)R_{r-1}(x)-\mathbb S(x)R_{r-2}(x)\end{aligned}$$for all $r\geq2$. Hence, $$\begin{aligned} \left(1-\mathbb O(x)y+\mathbb S(x)y^2\right)\sum_{r\geq0}R_r(x)y^r&= \begin{aligned}[t] &\sum_{r\geq2}\big(R_r(x)-\mathbb O(x)R_{r-1}(x)+\mathbb S(x)R_{r-2}(x)\big)y^r\\ &+R_0(x)+R_1(x)y-\mathbb O(x)R_0(x)y \end{aligned}\\ &=R_0(x)+\mathbb O(x)y-\mathbb O(x)y\\ &=1\end{aligned}$$and thus $$\begin{aligned} 1+\sum_{N,r>0}{\operatorname{rank}}C_{N,r}\cdot x^Ny^r=\sum_{r\geq0}R_r(x)y^r=\frac1{1-\mathbb O(x)y+\mathbb S(x)y^2}{\,},\end{aligned}$$ which is the desired result.
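The recursion of Theorem \[thm:recursion\] and the closed forms implicit in the proofs above can be cross-checked with truncated power series. In the stdlib-Python sketch below (our own sanity check), $\mathbb O(x)=x^3+x^5+x^7+\cdots$ as used in the proof of Corollary \[cor:enrjocnr\], while the coefficients of $\mathbb S(x)=\sum_k\dim\mathcal S_k\,x^k$ are produced by the standard cusp-form dimension formula, which is our own input here. We verify that the recursion reproduces $T_3=2\mathbb O\mathbb S$, $T_4=3\mathbb O^2\mathbb S-\mathbb S^2$ and $T_5=4\mathbb O^3\mathbb S-3\mathbb O\mathbb S^2$ (the series appearing in \[eq:case5\]), and that $(1-\mathbb Oy+\mathbb Sy^2)\sum_r R_ry^r=1$ holds coefficientwise.

```python
M = 48  # truncate all series at x^M

def mul(a, b):
    """Product of truncated power series (coefficient lists of length M)."""
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j in range(M - i):
                c[i + j] += ai * b[j]
    return c

def lin(*pairs):
    """Integer linear combination of series: lin((c1, s1), (c2, s2), ...)."""
    return [sum(c * s[k] for c, s in pairs) for k in range(M)]

def pw(a, k):
    """k-th power of a truncated series."""
    res = [1] + [0] * (M - 1)
    for _ in range(k):
        res = mul(res, a)
    return res

O = [1 if k % 2 == 1 and k >= 3 else 0 for k in range(M)]   # x^3 + x^5 + x^7 + ...
# dim S_k for even k >= 4 via the standard formula (our input, not from the paper):
Sx = [k // 12 - (1 if k % 12 == 2 else 0) if k % 2 == 0 and k >= 4 else 0
      for k in range(M)]

T = [[0] * M, [0] * M, Sx[:]]   # T_0 = T_1 = 0 and T_2 = S (by [schneps])
for r in range(3, 9):           # T_r = O T_{r-1} - S T_{r-2} + O^{r-2} S
    T.append(lin((1, mul(O, T[r - 1])), (-1, mul(Sx, T[r - 2])),
                 (1, mul(pw(O, r - 2), Sx))))

# Closed forms implicit in the proofs of the cases r = 3, 4, 5:
assert T[3] == lin((2, mul(O, Sx)))
assert T[4] == lin((3, mul(pw(O, 2), Sx)), (-1, mul(Sx, Sx)))
assert T[5] == lin((4, mul(pw(O, 3), Sx)), (-3, mul(O, mul(Sx, Sx))))

# R_r = O^r - T_r satisfies (1 - O y + S y^2) sum_r R_r y^r = 1 coefficientwise:
R = [lin((1, pw(O, r)), (-1, T[r])) for r in range(9)]
assert R[0][0] == 1 and R[0][1:] == [0] * (M - 1) and R[1] == O
assert all(lin((1, R[r]), (-1, mul(O, R[r - 1])), (1, mul(Sx, R[r - 2]))) == [0] * M
           for r in range(2, 9))
```

For instance, the coefficient of $x^{15}$ in $T_3$ comes out as $2$, i.e. the recursion predicts $\dim_{\mathbb Q}\ker C_{15,3}=2$ in the smallest nontrivial weight.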
IFUP-TH 2013/21

**Background Field Method, Batalin-Vilkovisky Formalism And Parametric Completeness Of Renormalization**

*Damiano Anselmi*

*Dipartimento di Fisica “Enrico Fermi”, Università di Pisa, and INFN, Sezione di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa, Italy*

damiano.anselmi@df.unipi.it

**Abstract**

We investigate the background field method with the Batalin-Vilkovisky formalism, to generalize known results, study the parametric completeness of general gauge theories and achieve a better understanding of several properties. In particular, we study renormalization and gauge dependence to all orders. Switching between the background field approach and the usual approach by means of canonical transformations, we prove parametric completeness without making use of cohomological theorems; namely we show that if the starting classical action is sufficiently general all divergences can be subtracted by means of parameter redefinitions and canonical transformations. Our approach applies to renormalizable and nonrenormalizable theories that are manifestly free of gauge anomalies and satisfy the following assumptions: the gauge algebra is irreducible and closes off shell, the gauge transformations are linear functions of the fields, and closure is field independent. Yang-Mills theories and quantum gravity in arbitrary dimensions are included, as well as effective and higher-derivative versions of them, but several other theories, such as supergravity, are left out. Introduction ============ The background field method [@dewitt; @abbott] is a convenient tool to quantize gauge theories and make explicit calculations, particularly when it is used in combination with the dimensional-regularization technique. It amounts to choosing a nonstandard gauge fixing in the conventional approach and, among its virtues, it keeps the gauge transformations intact under renormalization.
However, it takes advantage of properties that only particular classes of theories have. The Batalin-Vilkovisky formalism [@bata] is also useful for quantizing general gauge theories, especially because it collects all ingredients of infinitesimal gauge symmetries in a single identity, the master equation, which remains intact through renormalization, at least in the absence of gauge anomalies. Merging the background field method with the Batalin-Vilkovisky formalism is not only an interesting theoretical subject *per se*, but can also offer a better understanding of known results, make us appreciate aspects that have been overlooked, generalize the validity of crucial theorems about the quantization of gauge theories and renormalization, and help us address open problems. For example, an important issue concerns the generality of the background field method. It would be nice to formulate a unique treatment for all gauge theories, renormalizable and nonrenormalizable, unitary and higher derivative, with irreducible or reducible gauge algebras that close off shell or only on shell. However, we will see that at this stage it is not possible to achieve that goal, due to some intrinsic features of the background field method. Another important issue that we want to emphasize more than has been done so far is the problem of *parametric completeness* in general gauge theories [@regnocoho]. To ensure renormalization-group (RG) invariance, all divergences must be subtracted by redefining parameters and making canonical transformations. When a theory contains all independent parameters necessary to achieve this goal, we say that it is parametrically complete. The RG-invariant renormalization of divergences may require the introduction of missing Lagrangian terms, multiplied by new physical constants, or even deform the symmetry algebra in nontrivial ways. 
However, in nonrenormalizable theories such as quantum gravity and supergravity it is not obvious that the action can indeed be adjusted to achieve parametric completeness. One way to deal with this problem is to classify the whole cohomology of invariants and hope that the solution satisfies suitable properties. This method requires lengthy technical proofs that must be done case by case [@coho], and therefore lacks generality. Another way is to let renormalization build the new invariants automatically, as shown in ref. [@regnocoho], with an algorithm that iteratively extends the classical action, converting divergences into finite counterterms. However, that procedure is mainly a theoretical tool, because although very general and conceptually minimal, it is practically unaffordable. Among other things, it leaves open the possibility that renormalization may dynamically deform the gauge symmetry in physically observable ways. A third possibility is the one we are going to treat here, taking advantage of the background field method. Where it applies, it makes cohomological classifications unnecessary and excludes the possibility that renormalization dynamically deforms the symmetry in observable ways. Because of the intrinsic properties of the background field method, the approach of this paper, although general enough, is not exhaustive. It is general enough because it includes the gauge symmetries we need for physical applications, namely Abelian and non-Abelian Yang-Mills symmetries, local Lorentz symmetry and invariance under general changes of coordinates. At the same time, it is not exhaustive because it excludes other potentially interesting symmetries, such as local supersymmetry. 
To be precise, our results hold for every gauge symmetry that satisfies the following properties: the algebra of gauge transformations ($i$) closes off shell and ($ii$) is irreducible; moreover ($iii$) there exists a choice of field variables where the gauge transformations $\delta _{\Lambda }\phi $ of the physical fields $\phi $ are linear functions of $\phi $ and the closure $[\delta _{\Lambda },\delta _{\Sigma }]=\delta _{[\Lambda ,\Sigma ]}$ of the algebra is $\phi $ independent. We expect that with some technical work it will be possible to extend our results to theories that do not satisfy assumption ($ii$), but our impression is that removing assumptions ($i$) and ($iii$) will be much harder, if not impossible. In this paper we also assume that the theory is manifestly free of gauge anomalies. Our results apply to renormalizable and nonrenormalizable theories that satisfy the assumptions listed so far, among which are QED, Yang-Mills theories, quantum gravity and Lorentz-violating gauge theories [@lvgauge], as well as effective [@weinberg], higher-derivative [@stelle] and nonlocal [@tombola] versions of such theories, in arbitrary dimensions, and extensions obtained including any set of composite fields. We recall that Stelle’s proof [@stelle] that higher-derivative quantum gravity is renormalizable was incomplete, because it assumed without proof a generalization of the Kluberg-Stern–Zuber conjecture [@kluberg] for the cohomological problem satisfied by counterterms. Even the cohomological analysis of refs. [@coho] does not directly apply to higher-derivative quantum gravity, because the field equations of higher-derivative theories are not equal to perturbative corrections of the ordinary field equations. These remarks show that our results are quite powerful, because they overcome a number of difficulties that otherwise need to be addressed case by case. 
Strictly speaking, our results, in their present form, do not apply to chiral theories, such as the Standard Model coupled to quantum gravity, where the cancellation of anomalies is not manifest. Nevertheless, since all other assumptions we have made concern just the forms of gauge symmetries, not the forms of classical actions, nor the limits around which perturbative expansions are defined, we expect that our results can be extended to all theories involving the Standard Model or Lorentz-violating extensions of it [@kostelecky; @LVSM]. However, to make derivations more easily understandable it is customary to first give proofs in the framework where gauge anomalies are manifestly absent, and later extend the results by means of the Adler-Bardeen theorem [@adlerbardeen]. We follow the tradition on this, and plan to devote a separate investigation to anomaly cancellation. Although some of our results are better understandings or generalizations of known properties, we do include them for the sake of clarity and self-consistency. We think that our formalism offers insight into the issues mentioned above and gives a more satisfactory picture. In particular, the fact that the background field method makes cohomological classifications unnecessary is something that apparently has not been appreciated enough so far. Moreover, our approach points out the limits of applicability of the background field method. To achieve parametric completeness we proceed in four basic steps. First, we study renormalization to all orders, subtracting divergences “as they come”, which means without worrying whether the theory contains enough independent parameters for RG invariance or not. Second, we study how the renormalized action and the renormalized $\Gamma $ functional depend on the gauge fixing, and work out how the renormalization algorithm maps a canonical transformation of the classical theory into a canonical transformation of the renormalized theory. 
Third, we renormalize the canonical transformation that continuously interpolates between the background field approach and the conventional approach. Fourth, comparing the two approaches we show that if the classical action $S_{c}(\phi ,\lambda )$ contains all gauge invariant terms determined by the starting gauge symmetry, then there exists a canonical transformation $\Phi ,K\rightarrow \hat{\Phi},\hat{K}$ such that $$S_{R\hspace{0.01in}\text{min}}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)=S_{c}(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\tau (\lambda ))-\int R^{\alpha }(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{C})\hat{K}_{\alpha }, \label{key0}$$ where $S_{R\hspace{0.01in}\text{min}}$ is the renormalized action with the gauge-fixing sector switched off, $\Phi ^{\alpha }=\{\phi ,C\}$ are the fields ($C$ being the ghosts), $K_{\alpha }$ are the sources for the $\Phi^{\alpha}$ transformations $R^{\alpha }(\Phi )$, ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }$ are the background fields, $\lambda $ are the physical couplings and $\tau (\lambda )$ are $\lambda $ redefinitions. Identity (\[key0\]) shows that all divergences can be renormalized by means of parameter redefinitions and canonical transformations, which proves parametric completeness. Power counting may or may not restrict the form of $S_{c}(\phi ,\lambda )$. Basically, under the assumptions we have made the background transformations do not renormalize, and the quantum fields $\phi $ can be switched off and then restored from their background partners ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }$. Nevertheless, the restoration works only up to a canonical transformation, which gives (\[key0\]). The story is a bit more complicated than this, but this simplified version is enough to appreciate the main point. 
However, when the assumptions we have made do not hold, the argument fails, which shows how peculiar the background field method is. Besides giving explicit examples where the construction works, we address some problems that arise when the assumptions listed above are not satisfied. A somewhat different approach to the background field method in the framework of the Batalin-Vilkovisky formalism exists in the literature. In refs. [@quadri] Binosi and Quadri considered the most general variation $\delta {\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }=\Omega $ of the background gauge field ${\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }$ in Yang-Mills theory, and obtained a modified Batalin-Vilkovisky master equation that controls how the functional $\Gamma $ depends on ${\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu } $. Instead, here we introduce background copies of both physical fields and ghosts, which allows us to split the symmetry transformations into “quantum transformations” and “background transformations”. The master equation is split into the three identities (\[treide\]), which control invariances under the two types of transformations. The paper is organized as follows. In section 2 we formulate our approach and derive its basic properties, emphasizing the assumptions we make and why they are necessary. In section 3 we renormalize divergences to all orders, subtracting them “as they come”. In section 4 we derive the basic differential equations of gauge dependence and integrate them, which allows us to show how a renormalized canonical transformation emerges from its tree-level limit. In section 5 we derive (\[key0\]) and prove parametric completeness. In section 6 we give two examples, non-Abelian Yang-Mills theory and quantum gravity. In section 7 we make remarks about parametric completeness and recapitulate where we stand now on this issue. 
Section 8 contains our conclusions, while the appendix collects several theorems and identities that are used in the paper. We use the dimensional-regularization technique and the minimal subtraction scheme. Recall that the functional integration measure is invariant with respect to perturbatively local changes of field variables. Averages $\langle \cdots \rangle $ always denote the sums of *connected* Feynman diagrams. We use the Euclidean notation in theoretical derivations and switch to Minkowski spacetime in the examples. Background field method and Batalin-Vilkovisky formalism ======================================================== In this section we formulate our approach to the background field method with the Batalin-Vilkovisky formalism. To better appreciate the arguments given below it may be useful to jump back and forth between this section and section 6, where explicit examples are given. If the gauge algebra closes off shell, there exists a canonical transformation that makes the solution $S(\Phi ,K)$ of the master equation $(S,S)=0$ depend linearly on the sources $K$. We write $$S(\Phi ,K)=\mathcal{S}(\Phi )-\int R^{\alpha }(\Phi )K_{\alpha }. \label{solp}$$ The fields $\Phi ^{\alpha }=\{\phi ^{i},C^{I},\bar{C}^{I},B^{I}\}$ are made of physical fields $\phi ^{i}$, ghosts $C^{I}$ (possibly including ghosts of ghosts and so on), antighosts $\bar{C}^{I}$ and Lagrange multipliers $B^{I}$ for the gauge fixing. Moreover, $K_{\alpha }=\{K_{\phi }^{i},K_{C}^{I},K_{\bar{C}}^{I},K_{B}^{I}\}$ are the sources associated with the symmetry transformations $R^{\alpha}(\Phi)$ of the fields $\Phi ^{\alpha }$, while $$\mathcal{S}(\Phi )=S_{c}(\phi )+(S,\Psi )$$ is the sum of the classical action $S_{c}(\phi )$ plus the gauge fixing, which is expressed as the antiparenthesis of $S$ with a $K$-independent gauge fermion $\Psi (\Phi )$. 
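For orientation, it may help to see how the gauge-fixing term $(S,\Psi )$ looks in the simplest familiar case. The following example is ours, and the signs depend on conventions: in Yang-Mills theory, with the standard gauge fermion one finds, up to sign conventions,

$$\Psi (\Phi )=\int \bar{C}^{a}\left( \partial ^{\mu }A_{\mu }^{a}+\frac{\xi }{2}B^{a}\right) ,\qquad (S,\Psi )=\int \left[ B^{a}\left( \partial ^{\mu }A_{\mu }^{a}+\frac{\xi }{2}B^{a}\right) -\partial ^{\mu }\bar{C}^{a}D_{\mu }C^{a}\right] ,$$

where $D_{\mu }$ denotes the covariant derivative; integrating out the Lagrange multipliers $B^{a}$ reproduces the usual $\xi $-gauge-fixing term plus the ghost action.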
We recall that the antiparentheses are defined as $$(X,Y)=\int \left\{ \frac{\delta _{r}X}{\delta \Phi ^{\alpha }}\frac{\delta _{l}Y}{\delta K_{\alpha }}-\frac{\delta _{r}X}{\delta K_{\alpha }}\frac{\delta _{l}Y}{\delta \Phi ^{\alpha }}\right\} ,$$ where the summation over the index $\alpha $ is understood. The integral is over spacetime points associated with repeated indices. The non-gauge-fixed action $$S_{\text{min}}(\Phi ,K)=S_{c}(\phi )-\int R_{\phi }^{i}(\phi ,C)K_{\phi }^{i}-\int R_{C}^{I}(\phi ,C)K_{C}^{I}, \label{smin}$$ obtained by dropping antighosts, Lagrange multipliers and their sources, also solves the master equation, and is called the minimal solution. Antighosts $\bar{C}$ and Lagrange multipliers $B$ form trivial gauge systems, and typically enter (\[solp\]) by means of the gauge fixing $(S,\Psi )$ and a contribution $$\Delta S_{\text{nm}}=-\int B^{I}K_{\bar{C}}^{I}, \label{esto}$$ to $-\int R^{\alpha }K_{\alpha }$. Let $\mathcal{R}^{\alpha }(\Phi ,C)$ denote the transformations the fields $\Phi ^{\alpha }$ would have if they were matter fields. Each function $\mathcal{R}^{\alpha }(\Phi ,C)$ is a bilinear form of $\Phi ^{\alpha }$ and $C$. Sometimes, to be more explicit, we also use the notation $\mathcal{R}_{\bar{C}}^{I}(\bar{C},C)$ and $\mathcal{R}_{B}^{I}(B,C)$ for $\bar{C}$ and $B$, respectively. It is often convenient to replace (\[esto\]) with the alternative nonminimal extension $$\Delta S_{\text{nm}}^{\prime }=-\int \left( B^{I}+\mathcal{R}_{\bar{C}}^{I}(\bar{C},C)\right) K_{\bar{C}}^{I}-\int \mathcal{R}_{B}^{I}(B,C)K_{B}^{I}. 
\label{estobar}$$ For example, in Yang-Mills theories we have $$\Delta S_{\text{nm}}^{\prime }=-\int \left( B^{a}-gf^{abc}C^{b}\bar{C}^{c}\right) K_{\bar{C}}^{a}+g\int f^{abc}C^{b}B^{c}K_{B}^{a}$$ and in quantum gravity $$\Delta S_{\text{nm}}^{\prime }=-\int \left( B_{\mu }+\bar{C}_{\rho }\partial _{\mu }C^{\rho }-C^{\rho }\partial _{\rho }\bar{C}_{\mu }\right) K_{\bar{C}}^{\mu }+\int \left( B_{\rho }\partial _{\mu }C^{\rho }+C^{\rho }\partial _{\rho }B_{\mu }\right) K_{B}^{\mu }, \label{nmqg}$$ where $C^{\mu }$ are the ghosts of diffeomorphisms. Observe that (\[estobar\]) can be obtained from (\[esto\]) making the canonical transformation generated by $$F_{\text{nm}}(\Phi ,K^{\prime })=\int \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},C)K_{B}^{I\hspace{0.01in}\prime }.$$ Requiring that $F_{\text{nm}}$ indeed give (\[estobar\]) we get the identities $$\mathcal{R}_{B}^{I}(B,C)=-\int B^{J}\frac{\delta _{l}}{\delta \bar{C}^{J}}\mathcal{R}_{\bar{C}}^{I}(\bar{C},C),\qquad \int \left( R_{C}^{J}\frac{\delta _{l}}{\delta C^{J}}+\mathcal{R}_{\bar{C}}^{J}(\bar{C},C)\frac{\delta _{l}}{\delta \bar{C}^{J}}\right) \mathcal{R}_{\bar{C}}^{I}(\bar{C},C)=0, \label{iddo}$$ which can be easily checked both for Yang-Mills theories and gravity. In this paper the notation $R^{\alpha }(\Phi )$ refers to the field transformations of (\[smin\]) plus those of the nonminimal extension (\[esto\]), while $\bar{R}^{\alpha }(\Phi )$ refers to the transformations of (\[smin\]) plus (\[estobar\]). Background field action ----------------------- To apply the background field method, we start from the gauge invariance of the classical action $S_{c}(\phi )$, $$\int R_{c}^{i}(\phi ,\Lambda )\frac{\delta _{l}S_{c}(\phi )}{\delta \phi ^{i}}=0, \label{lif}$$ where $\Lambda $ are the arbitrary functions that parametrize the gauge transformations $\delta \phi ^{i}=R_{c}^{i}$. 
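The second relation in (\[iddo\]) ultimately rests on the closure of the gauge algebra, which for Yang-Mills theory is encoded in the Jacobi identity of the structure constants $f^{abc}$. The following snippet is our own sanity check (not part of the original derivation) for su(2), where $f^{abc}=\varepsilon ^{abc}$:

```python
from itertools import product

# su(2) structure constants: f^{abc} = epsilon^{abc} (totally antisymmetric)
def eps(a, b, c):
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

# Jacobi identity: f^{abd} f^{dce} + f^{bcd} f^{dae} + f^{cad} f^{dbe} = 0
for a, b, c, e in product(range(3), repeat=4):
    s = sum(eps(a, b, d)*eps(d, c, e)
            + eps(b, c, d)*eps(d, a, e)
            + eps(c, a, d)*eps(d, b, e) for d in range(3))
    assert s == 0
print("Jacobi identity holds for su(2)")
```

For su(3) one would replace $\varepsilon ^{abc}$ by the Gell-Mann structure constants; the loop is unchanged.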
Shifting the fields $\phi $ by background fields ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }$, and introducing arbitrary background functions ${\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu }$ we can write the identity $$\int \left[ R_{c}^{i}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\Lambda )+X^{i}\right] \frac{\delta _{l}S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })}{\delta \phi ^{i}}+\int \left[ R_{c}^{i}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })-X^{i}\right] \frac{\delta _{l}S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}}=0,$$ which is true for arbitrary functions $X^{i}$. If we choose $$X^{i}=R_{c}^{i}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })-R_{c}^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu }),$$ the transformations of the background fields contain only background fields and coincide with $R_{c}^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })$. 
We find $$\int \left[ R_{c}^{i}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\Lambda +{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })-R_{c}^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })\right] \frac{\delta _{l}S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })}{\delta \phi ^{i}}+\int R_{c}^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })\frac{\delta _{l}S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}}=0. \label{bas}$$ Thus, denoting background quantities by means of an underlining, we are led to consider the action $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })({\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }-K_{\alpha }), \label{sback}$$ which solves the master equation $\llbracket S,S\rrbracket =0$, where the antiparentheses are defined as $$\llbracket X,Y\rrbracket =\int \left\{ \frac{\delta _{r}X}{\delta \Phi ^{\alpha }}\frac{\delta _{l}Y}{\delta K_{\alpha }}+\frac{\delta _{r}X}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\frac{\delta _{l}Y}{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}-\frac{\delta _{r}X}{\delta K_{\alpha }}\frac{\delta _{l}Y}{\delta \Phi ^{\alpha }}-\frac{\delta _{r}X}{\delta 
{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}\frac{\delta _{l}Y}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\right\} .$$ More directly, if $S(\Phi ,K)=S_{c}(\phi )-\int R^{\alpha }(\Phi )K_{\alpha } $ solves $(S,S)=0$, the background field can be introduced with a canonical transformation. Start from the action $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi )-\int R^{\alpha }(\Phi )K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }, \label{sback0}$$ which obviously satisfies two master equations, one in the variables $\Phi ,K $ and the other one in the variables ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$. [*A fortiori*]{}, it also satisfies $\llbracket S,S\rrbracket =0$. Relabeling fields and sources with primes and making the canonical transformation generated by the functional $$F_{\text{b}}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int (\Phi ^{\alpha }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha })K_{\alpha }^{\prime }+\int {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }, \label{casbac}$$ we obtain (\[sback\]), and clearly preserve $\llbracket S,S\rrbracket =0$. The shift ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ is called background field, while $\Phi $ is called quantum field. We also have quantum sources $K$ and background sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$. 
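The identity (\[bas\]) can be illustrated in a zero-dimensional toy model of ours (not taken from the paper): take a triplet $\phi ^{a}$ with rotational gauge transformations $R_{c}^{a}(\phi ,\Lambda )=\varepsilon ^{abc}\phi ^{b}\Lambda ^{c}$ and the invariant action $S_{c}=(\phi \cdot \phi )^{2}$. Since $S_{c}$ depends only on the sum of the field and its background copy, both derivatives in (\[bas\]) reduce to the same gradient, and the identity follows from the invariance of $S_{c}$:

```python
import random

def cross(u, v):  # R_c^a(phi, Lam) = eps^{abc} phi^b Lam^c = (phi x Lam)^a
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def grad_Sc(phi):  # S_c = (phi.phi)^2, so dS_c/dphi^a = 4 (phi.phi) phi^a
    n = sum(x*x for x in phi)
    return [4.0*n*x for x in phi]

random.seed(0)
phi  = [random.uniform(-1, 1) for _ in range(3)]  # quantum field
bphi = [random.uniform(-1, 1) for _ in range(3)]  # background field
Lam  = [random.uniform(-1, 1) for _ in range(3)]  # quantum parameters
bLam = [random.uniform(-1, 1) for _ in range(3)]  # background parameters

tot = [p + q for p, q in zip(phi, bphi)]
g = grad_Sc(tot)  # dS_c/dphi = dS_c/dbphi, since S_c depends on phi + bphi only

# eq. (bas): [R_c(phi+bphi, Lam+bLam) - R_c(bphi, bLam)] . dS_c/dphi
#            + R_c(bphi, bLam) . dS_c/dbphi = 0
R_full = cross(tot, [l + m for l, m in zip(Lam, bLam)])
R_back = cross(bphi, bLam)
lhs = sum((rf - rb)*gi for rf, rb, gi in zip(R_full, R_back, g)) \
    + sum(rb*gi for rb, gi in zip(R_back, g))
assert abs(lhs) < 1e-12
```

The two contributions recombine into $R_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\Lambda +{\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu })\cdot \nabla S_{c}$, which vanishes because the cross product is orthogonal to its first argument.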
Finally, we have background transformations, those described by the background ghosts ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ or the functions ${\mkern2mu\underline{\mkern-2mu\smash{\Lambda }\mkern-2mu}\mkern2mu }$ in (\[bas\]), and quantum transformations, those described by the quantum ghosts $C$ and (\[esto\]) or the functions $\Lambda $ in (\[bas\]). The action (\[sback\]) is not the most convenient one to study renormalization. It is fine in the minimal sector (the one with antighosts and Lagrange multipliers switched off), but not in the nonminimal one. Now we describe the improvements we need to make. #### Non-minimal sector So far we have introduced background copies of all fields. Nevertheless, strictly speaking we do not need to introduce copies of the antighosts $\bar{C}$ and the Lagrange multipliers $B$, since we do not need to gauge-fix the background. Thus we drop ${\mkern2mu\underline{\mkern-2mu\smash{\bar{C}}\mkern-2mu}\mkern2mu }$, ${\mkern2mu\underline{\mkern-2mu\smash{B}\mkern-2mu}\mkern2mu }$ and their sources from now on, and define ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }=\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{I},0,0\}$, ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\{{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\phi }^{i},{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{I},0,0\}$. Observe that then we have $R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })=\bar{R}^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })=\{R_{\phi }^{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }),R_{C}^{I}({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }),0,0\}$. 
Let us compare the nonminimal sectors (\[esto\]) and (\[estobar\]). If we choose (\[esto\]), $\bar{C}$ and $B$ do not transform under background transformations. Since (\[esto\]) are the only terms that contain $K_{\bar{C}}$, they do not contribute to one-particle irreducible diagrams and do not receive radiative corrections. Moreover, $K_{B}$ does not appear in the action. Instead, if we choose the nonminimal sector (\[estobar\]), namely if we start from $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi )-\int \bar{R}^{\alpha }(\Phi )K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha } \label{sback1}$$ instead of (\[sback0\]), the transformation (\[casbac\]) gives the action $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int (\bar{R}^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })-\bar{R}^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }))K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }. \label{sback2}$$ In particular, using the linearity of $\mathcal{R}_{\bar{C}}^{I}$ and $\mathcal{R}_{B}^{I}$ in $C$, we see that (\[estobar\]) is turned into itself plus $$-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\bar{C}}^{I}-\int \mathcal{R}_{B}^{I}(B,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I}. 
\label{bacca}$$ Because of these new terms, $\bar{C}$ and $B$ now transform as ordinary matter fields under background transformations. This is the correct background transformation law we need for them. On the other hand, the nonminimal sector (\[estobar\]) also generates nontrivial quantum transformations for $\bar{C}$ and $B$, which are renormalized and complicate our derivations. It would be better to have (\[estobar\]) in the background sector and (\[esto\]) in the nonbackground sector. To achieve this goal, we make the canonical transformation generated by $$F_{\text{nm}}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \Phi ^{\alpha }K_{\alpha }^{\prime }+\int {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I\hspace{0.01in}\prime } \label{casbacca}$$ on (\[sback\]). Using (\[iddo\]) again, the result is $$\begin{aligned} S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }) &=&S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int (R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })-R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }))K_{\alpha } \nonumber \\ &&-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\bar{C}}^{I}-\int \mathcal{R}_{B}^{I}(B,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I}-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }. 
\label{sbacca}\end{aligned}$$ This is the background field action we are going to work with. It is straightforward to check that (\[sbacca\]) satisfies $\llbracket S,S\rrbracket =0$. #### Separating the background and quantum sectors Now we separate the background sector from the quantum sector. To do this properly we need to make further assumptions. First, we assume that there exists a choice of field variables where the functions $R^{\alpha }(\Phi )$ are at most quadratic in $\Phi $. We call this the *linearity assumption*. It is equivalent to assuming that the gauge transformations $\delta _{\Lambda }\phi ^{i}=R_{c}^{i}(\phi ,\Lambda )$ of (\[lif\]) are linear functions of the fields $\phi $ and closure is expressed by $\phi $-independent identities $[\delta _{\Lambda },\delta _{\Sigma }]=\delta _{[\Lambda ,\Sigma ]}$. The linearity assumption is satisfied by all gauge symmetries of physical interest, such as those of QED, non-Abelian Yang-Mills theory, quantum gravity and the Standard Model. On the other hand, it is not satisfied by other important symmetries, among which is supergravity, where the gauge transformations either close only on shell or are not linear in the fields. Second, we assume that the gauge algebra is irreducible, which ensures that the set $\Phi $ contains only ghosts and not ghosts of ghosts. 
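The linearity assumption can be made concrete in the rotation toy model already used above (our illustration, not from the paper): $\delta _{\Lambda }\phi ^{a}=\varepsilon ^{abc}\phi ^{b}\Lambda ^{c}$ is linear in $\phi $, and the commutator of two such transformations is again a transformation, with the $\phi $-independent composite parameter $\Sigma \times \Lambda $:

```python
import random

def cross(u, v):  # (u x v)^a = eps^{abc} u^b v^c
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def delta(phi, lam):  # linear gauge transformation: delta_Lam phi = phi x Lam
    return cross(phi, lam)

random.seed(1)
phi = [random.uniform(-1, 1) for _ in range(3)]
Lam = [random.uniform(-1, 1) for _ in range(3)]
Sig = [random.uniform(-1, 1) for _ in range(3)]

# Commutator acting through the phi dependence:
# [delta_Lam, delta_Sig] phi = delta_Lam(delta_Sig phi) - delta_Sig(delta_Lam phi)
comm = [a - b for a, b in zip(delta(delta(phi, Sig), Lam),
                              delta(delta(phi, Lam), Sig))]

# Closure with the field-independent composite parameter Sig x Lam
closed = delta(phi, cross(Sig, Lam))
assert all(abs(a - b) < 1e-12 for a, b in zip(comm, closed))
```

The equality follows from the vector triple-product identity, i.e. from the Jacobi identity of su(2); for supergravity-like algebras no such field-independent closure exists, which is why the assumption excludes them.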
Under these assumptions, we make the canonical transformation generated by $$F_{\tau }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \Phi ^{\alpha }K_{\alpha }^{\prime }+\int {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+(\tau -1)\int {\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{I}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{I\hspace{0.01in}\prime } \label{backghost}$$ on the action (\[sbacca\]). This transformation amounts to rescaling the background ghosts ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{I}$ by a factor $\tau $ and their sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{I}$ by a factor $1/\tau $. Since we do not have background antighosts, (\[backghost\]) is the background-ghost-number transformation combined with a rescaling of the background sources. The action (\[sbacca\]) is not invariant under (\[backghost\]). Using the linearity assumption it is easy to check that the transformed action $S_{\tau }$ is linear in $\tau $. Writing $S_{\tau }=\hat{S}+\tau \bar{S}$ we can split the total action $S$ into the sum $\hat{S}+\bar{S}$ of a *quantum action* $\hat{S}$ and a *background action* $\bar{S}$. Precisely, the quantum action $\hat{S}$ does not depend on the background sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and the background ghosts ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$, but only on the background copies ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }$ of the physical fields. 
We have $$\hat{S}=\hat{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },C,\bar{C},B)K_{\alpha }. \label{deco}$$ Note that, in spite of the notation, the functions $R^{\alpha }(\Phi )$ are actually $\bar{C}$ independent. Moreover, we find $$\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=-\int \mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }, \label{sbar}$$ where, for $\phi $ and $C$, $$\mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })=R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })-R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })-R^{\alpha }(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },C,\bar{C},B). \label{batra}$$ These functions transform $\phi $ and $C$ as if they were matter fields and are of course linear in $\Phi $ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$. Note that formula (\[batra\]) does not hold for antighosts and Lagrange multipliers. In the end all quantum fields transform as matter fields under background transformations. The master equation $\llbracket S,S\rrbracket =0$ decomposes into the three identities $$\llbracket \hat{S},\hat{S}\rrbracket =\llbracket \hat{S},\bar{S}\rrbracket =\llbracket \bar{S},\bar{S}\rrbracket =0, \label{treide}$$ which we call *background field master equations*. 
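Formula (\[batra\]) can be checked directly in the $\phi $ sector of the rotation toy model (our illustration; ghost components are treated as ordinary numbers here, which is harmless because the expression is linear in each ghost separately): with the bilinear transformation $R_{\phi }(\phi ,C)=\varepsilon ^{abc}\phi ^{b}C^{c}$, the combination (\[batra\]) collapses to $\varepsilon ^{abc}\phi ^{b}\underline{C}^{c}$, so $\phi $ transforms as a matter field under the background ghosts:

```python
import random

def cross(u, v):  # bilinear transformation R_phi(phi, C) = phi x C
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

random.seed(2)
phi  = [random.uniform(-1, 1) for _ in range(3)]  # quantum field
bphi = [random.uniform(-1, 1) for _ in range(3)]  # background field
C    = [random.uniform(-1, 1) for _ in range(3)]  # quantum ghost components
bC   = [random.uniform(-1, 1) for _ in range(3)]  # background ghost components

tot_phi = [p + q for p, q in zip(phi, bphi)]
tot_C   = [c + d for c, d in zip(C, bC)]

# eq. (batra): calR(Phi, bC) = R(Phi + bPhi) - R(bPhi) - R(phi + bphi, C)
calR = [a - b - c for a, b, c in zip(cross(tot_phi, tot_C),
                                     cross(bphi, bC),
                                     cross(tot_phi, C))]
# phi transforms as a matter field under the background ghosts:
matter = cross(phi, bC)
assert all(abs(a - b) < 1e-12 for a, b in zip(calR, matter))
```

As the text notes, the analogous formula does not hold for antighosts and Lagrange multipliers, whose background transformations come instead from (\[bacca\]).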
The quantum transformations are described by $\hat{S}$ and the background ones are described by $\bar{S}$. Background fields are inert under quantum transformations, because $\llbracket \hat{S},{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }\rrbracket =0$. Note that $$\llbracket \hat{S},\llbracket \bar{S},X\rrbracket \rrbracket +\llbracket \bar{S},\llbracket \hat{S},X\rrbracket \rrbracket =0, \label{uso}$$ where $X$ is an arbitrary local functional. This property follows from the Jacobi identity of the antiparentheses and $\llbracket \hat{S},\bar{S}\rrbracket =0$, and states that background and quantum transformations commute. #### Gauge-fixing Now we come to the gauge fixing. In the usual approach, the theory is typically gauge-fixed by means of a canonical transformation that amounts to replacing the action $S$ by$\ S+(S,\Psi )$, where $\Psi $ is a local functional of ghost number $-1$ and depends only on the fields $\Phi $. Using the background field method it is convenient to search for a ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent gauge-fixing functional $\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })$ that is also invariant under background transformations, namely such that $$\llbracket \bar{S},\Psi \rrbracket =0. \label{backgf}$$ Then we fix the gauge with the usual procedure, namely we make a canonical transformation generated by $$F_{\text{gf}}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }). 
\label{backgfgen}$$ Because of (\[backgf\]) the gauge-fixed action reads $$S_{\text{gf}}=\hat{S}+\bar{S}+\llbracket \hat{S},\Psi \rrbracket . \label{fgback}$$ Defining $\hat{S}_{\text{gf}}=\hat{S}+\llbracket \hat{S},\Psi \rrbracket $, identities (\[treide\]), (\[uso\]) and (\[backgf\]) give $\llbracket \hat{S}_{\text{gf}},\hat{S}_{\text{gf}}\rrbracket =\llbracket \hat{S}_{\text{gf}},\bar{S}\rrbracket =0$, so it is just like gauge-fixing $\hat{S}$. Since both $\hat{S}$ and $\Psi $ are ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent, $\hat{S}_{\text{gf}}$ is also ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent. Observe that the canonical transformations (\[backghost\]) and (\[backgfgen\]) commute; therefore we can safely apply the transformation (\[backghost\]) to the gauge-fixed action. A gauge fixing satisfying (\[backgf\]) is called *background-preserving gauge fixing*. In some derivations of this paper the background field master equations (\[treide\]) are violated in intermediate steps; therefore we need to prove properties that hold more generally. 
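To see how the various identities combine, it may help to check $\llbracket \hat{S}_{\text{gf}},\bar{S}\rrbracket =0$ explicitly. Writing $\hat{S}_{\text{gf}}=\hat{S}+\llbracket \hat{S},\Psi \rrbracket $, we have $$\llbracket \hat{S}_{\text{gf}},\bar{S}\rrbracket =\llbracket \hat{S},\bar{S}\rrbracket +\llbracket \bar{S},\llbracket \hat{S},\Psi \rrbracket \rrbracket =-\llbracket \hat{S},\llbracket \bar{S},\Psi \rrbracket \rrbracket =0,$$ where the first term vanishes by (\[treide\]), the second step follows from (\[uso\]) with $X=\Psi $, and the last equality follows from (\[backgf\]). The companion identity $\llbracket \hat{S}_{\text{gf}},\hat{S}_{\text{gf}}\rrbracket =0$ can be obtained in a similar fashion, splitting $\llbracket S_{\text{gf}},S_{\text{gf}}\rrbracket =0$ as in (\[treide\]).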
Specifically, consider an action $$S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)+\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }), \label{assu}$$ equal to the sum of a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent “quantum action” $\hat{S}$, plus a “background action” $\bar{S}$ that satisfies the following requirements: ($i$) it is a linear function of the quantum fields $\Phi $, ($ii$) it gets multiplied by $\tau $ when applying the canonical transformation (\[backghost\]), and ($iii$) $\delta _{l}\bar{S}/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent. In particular, requirement ($ii$) implies that $\bar{S}$ vanishes at ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }=0$. Since $\bar{S}$ is a linear function of $\Phi $, it does not contribute to one-particle irreducible diagrams. Since $\hat{S}$ does not depend on ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$, while $\bar{S}$ vanishes at ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }=0$, $\bar{S}$ receives no radiative corrections. Thus the $\Gamma$ functional associated with the action (\[assu\]) satisfies $$\Gamma (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{\Gamma}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)+\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }). 
\label{becco}$$ Moreover, thanks to theorem \[thb\] of the appendix we have the general identity $$\llbracket \Gamma ,\Gamma \rrbracket =\langle \llbracket S,S\rrbracket \rangle , \label{univ}$$ under the sole assumption that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent. Applying the canonical transformation (\[backghost\]) to $\Gamma $ we find $\Gamma _{\tau }=\hat{\Gamma}+\tau \bar{S}$, so (\[univ\]) gives the identities $$\llbracket \hat{\Gamma},\hat{\Gamma}\rrbracket =\langle \llbracket \hat{S},\hat{S}\rrbracket \rangle ,\qquad \llbracket \bar{S},\hat{\Gamma}\rrbracket =\langle \llbracket \bar{S},\hat{S}\rrbracket \rangle . \label{give}$$ When $\llbracket S,S\rrbracket =0$ we have $$\llbracket \Gamma ,\Gamma \rrbracket =\llbracket \hat{\Gamma},\hat{\Gamma}\rrbracket =\llbracket \bar{S},\hat{\Gamma}\rrbracket =0. \label{msb}$$ Observe that, thanks to the linearity assumption, an $\bar{S}$ equal to (\[sbar\]) satisfies the requirements of formula (\[assu\]). Now we give details about the background-preserving gauge fixing we pick for the action (\[sbacca\]). It is convenient to choose gauge-fixing functions $G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}$ that are linear in the quantum fields $\phi $, where $G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )$ may contain derivative operators. Precisely, we choose the gauge fermion $$\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })=\int \bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}, \label{psiback}$$ and assume that it satisfies (\[backgf\]). 
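To fix ideas, it may help to keep the Yang-Mills case in mind, whose precise conventions are given in (\[seeym\]). Schematically, a gauge fermion of the type (\[psiback\]) is $$\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })=\int \bar{C}^{a}\left( D_{\mu }({\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu })A^{\mu }\right) ^{a},$$ where $D_{\mu }({\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu })$ denotes the covariant derivative built with the background gauge field ${\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }$. Condition (\[backgf\]) then holds because $D_{\mu }({\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu })A^{\mu }$ and $\bar{C}$ transform in the same way (as matter fields in the adjoint representation) under background transformations.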
A more common choice would be (see (\[seeym\]) for Yang-Mills theory) $$\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })=\int \bar{C}^{I}\left( G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}+\xi _{IJ}B^{J}\right) ,$$ where $\xi _{IJ}$ are gauge-fixing parameters. In this case, when we integrate the $B$ fields out the expressions $G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}$ get squared. However, (\[psiback\]) is better for our purposes, because it makes the canonical transformations (\[casbacca\]) and (\[backgfgen\]) commute with each other. We call the choice (\[psiback\]) *regular Landau gauge*. The gauge-field propagators coincide with the ones of the Landau gauge. Nevertheless, while the usual Landau gauge (with no $B$’s around) is singular, here gauge fields are part of multiplets that include the $B$’s, therefore (\[psiback\]) is regular. In the regular Landau gauge, using (\[backgf\]) and applying (\[backgfgen\]) to (\[sbacca\]) we find $$S_{\text{gf}}=\hat{S}_{\text{gf}}+\bar{S}=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },C,\bar{C},B)\tilde{K}_{\alpha }-\int \mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }, \label{sbaccagf}$$ where the tilde sources $\tilde{K}_{\alpha }$ coincide with $K_{\alpha }$ apart from $\tilde{K}_{\phi }^{i}$ and $\tilde{K}_{\bar{C}}^{I}$, which are $$\tilde{K}_{\phi }^{i}=K_{\phi }^{i}-\bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial }),\qquad 
\tilde{K}_{\bar{C}}^{I}=K_{\bar{C}}^{I}-G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}. \label{chif}$$ Recalling that the functions $R^{\alpha }(\Phi )$ are $\bar{C}$ independent, we see that $\hat{S}_{\text{gf}}$ does not depend on $K_{\phi }^{i}$ and $\bar{C}$ separately, but only through the combination $\tilde{K}_{\phi }^{i}$. Every one-particle irreducible diagram with $\bar{C}^{I}$ external legs actually factorizes a $-\bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial })$ on those legs. Replacing one or more such objects with $K_{\phi }^{i}$s, we obtain other contributing diagrams. Conversely, replacing one or more $K_{\phi }^{i}$-external legs with $-\bar{C}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial })$ we also obtain contributing diagrams. Therefore, all radiative corrections, as well as the renormalized action $\hat{S}_{R}$ and the $\Gamma $ functionals $\hat{\Gamma}$ and $\hat{\Gamma}_{R}$ associated with the action (\[sbaccagf\]), do not depend on $K_{\phi }^{i}$ and $\bar{C}$ separately, but only through the combination $\tilde{K}_{\phi }^{i}$. The only $B$-dependent terms of $\hat{S}_{\text{gf}}$, provided by $\llbracket S,\Psi \rrbracket $ and (\[esto\]), are $$\Delta S_{B}\equiv -\int B^{I}\tilde{K}_{\bar{C}}^{I}=\int B^{I}\left( G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}-K_{\bar{C}}^{I}\right) , \label{chio}$$ and are quadratic or linear in the quantum fields. For this reason, no one-particle irreducible diagrams can contain external $B$ legs, therefore $\Delta S_{B}$ is nonrenormalized and goes into $\hat{S}_{R}$, $\hat{\Gamma}$ and $\hat{\Gamma}_{R}$ unmodified. 
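Since they are linear in $B^{I}$, the terms (\[chio\]) make the role of the $B$ fields as Lagrange multipliers explicit: integrating over $B$ in the functional integral gives, schematically (ignoring overall factors and the conventions of the functional measure), $$\int [\mathrm{d}B]\hspace{0.01in}\mathrm{e}^{i\Delta S_{B}}\propto \delta \left( G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}-K_{\bar{C}}^{I}\right) ,$$ which at $K_{\bar{C}}^{I}=0$ enforces the Landau gauge condition $G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}=0$ exactly.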
We thus learn that using linear gauge-fixing functions we can set $\bar{C}=B=0$ and later restore the correct $\bar{C}$ and $B$ dependencies in $\hat{S}_{\text{gf}}$, $\hat{S}_{R}$, $\hat{\Gamma}$ and $\hat{\Gamma}_{R}$ just by replacing $K_{\phi }^{i}$ with $\tilde{K}_{\phi }^{i}$ and adding $\Delta S_{B}$. From now on when no confusion can arise we drop the subscripts of $S_{\text{gf}}$ and $\hat{S}_{\text{gf}}$ and assume that the background field theory is gauge-fixed in the way just explained. Background-preserving canonical transformations ----------------------------------------------- It is useful to characterize the most general canonical transformations $\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }\rightarrow \Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ that preserve the background field master equations (\[treide\]) and the basic properties of $\hat{S}$ and $\bar{S}$. By definition, all canonical transformations preserve the antiparentheses, so (\[treide\]) are turned into $$\llbracket \hat{S}^{\prime },\hat{S}^{\prime }\rrbracket ^{\prime }=\llbracket \hat{S}^{\prime },\bar{S}^{\prime }\rrbracket ^{\prime }=\llbracket \bar{S}^{\prime },\bar{S}^{\prime }\rrbracket ^{\prime }=0. \label{tretre}$$ Moreover, $\hat{S}^{\prime }$ should be ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{\prime }$ independent, while $\bar{S}$ should be invariant, because it encodes the background transformations. 
This means $$\bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }). \label{thesisback}$$ We prove that a canonical transformation defined by a generating functional of the form $$F(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime }), \label{cannonaback}$$ where $Q$ is a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent local functional such that $$\llbracket \bar{S},Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)\rrbracket =0, \label{assumback}$$ satisfies our requirements. Since $Q$ is ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent, the background fields and the sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}$ do not transform: ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime }={\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$, ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{\prime }={\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}$. 
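These properties follow from the standard rules for generating functionals, which we recall here schematically, ignoring the distinction between left and right functional derivatives: $$\Phi ^{\alpha \hspace{0.01in}\prime }=\frac{\delta F}{\delta K_{\alpha }^{\prime }}=\Phi ^{\alpha }+\frac{\delta Q}{\delta K_{\alpha }^{\prime }},\qquad K_{\alpha }=\frac{\delta F}{\delta \Phi ^{\alpha }}=K_{\alpha }^{\prime }+\frac{\delta Q}{\delta \Phi ^{\alpha }},\qquad {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha \hspace{0.01in}\prime }=\frac{\delta F}{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }}={\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha },\qquad {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\frac{\delta F}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}.$$ In particular, the last relation gives ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}={\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{\prime }$, because $Q$ does not depend on ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$.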
Moreover, the action $\hat{S}^{\prime }$ is clearly ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{\prime }$ independent, as desired, so we just need to prove (\[thesisback\]). For convenience, multiply $Q$ by a constant parameter $\zeta $ and consider the canonical transformations generated by $$F_{\zeta }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\zeta Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime }). \label{fg}$$ Given a functional $X(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })$ it is often useful to work with the tilde functional $$\tilde{X}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=X(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }),{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })), \label{tildedfback}$$ obtained by expressing the primed sources in terms of unprimed fields and sources. Assumption (\[assumback\]) tells us that $Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)$ is invariant under background transformations.
Since $\Phi ^{\alpha }$ and $K_{\beta }$ transform as matter fields under such transformations, it is clear that $\delta Q/\delta K_{\alpha }$ and $\delta Q/\delta \Phi ^{\beta }$ transform precisely like them, as well as $\Phi ^{\alpha \hspace{0.01in}\prime }$ and $K_{\beta }^{\prime }$. Moreover, we have $\llbracket \bar{S},\tilde{Q}\rrbracket =0$ for every $\zeta $. Applying theorem \[theorem5\] to $\chi =\bar{S}$ we obtain $$\frac{\partial ^{\prime }\bar{S}^{\prime }}{\partial \zeta }=\frac{\partial \bar{S}}{\partial \zeta }-\llbracket \bar{S},\tilde{Q}\rrbracket =\frac{\partial \bar{S}}{\partial \zeta }, \label{tocback}$$ where $\partial ^{\prime }/\partial \zeta $ is taken at constant primed variables and $\partial /\partial \zeta $ is taken at constant unprimed variables. If we treat the unprimed variables as $\zeta $ independent, and the primed variables as functions of them and $\zeta $, the right-hand side of (\[tocback\]) vanishes. Varying $\zeta $ from 0 to 1 we get $$\bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\bar{S}^{\prime }(\Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }),$$ where now the relations among primed and unprimed variables are those specified by (\[cannonaback\]). We call the canonical transformations just defined *background-preserving canonical transformations*.
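For example, the gauge-fixing transformation (\[backgfgen\]) is of this type: it corresponds to the choice $$Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime })=\Psi (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }),$$ which is ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent by construction, while condition (\[assumback\]) reduces precisely to (\[backgf\]).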
We stress once again that they do not just preserve the background field (${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime }={\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$), but also the background transformations ($\bar{S}^{\prime }=\bar{S}$) and the ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independence of $\hat{S}$. The gauge-fixing canonical transformation (\[backgfgen\]) is background preserving. Canonical transformations may convert the sources $K$ into functions of both fields and sources. However, the sources are external, while the fields are integrated over. Thus, canonical transformations must be applied at the level of the action $S$, not at the level of the generating functionals. In the functional integral they must be understood as mere replacements of integrands. Nevertheless, we recall that there exists a way [@fieldcov; @masterf; @mastercan] to upgrade the formalism of quantum field theory and overcome these problems. The upgraded formalism allows us to implement canonical transformations as true changes of field variables in the functional integral, and closely track their effects inside generating functionals, as well as throughout the renormalization algorithm. Renormalization =============== In this section we give the basic algorithm to subtract divergences to all orders. As usual, we proceed by induction in the number of loops and use the dimensional-regularization technique and the minimal subtraction scheme. We assume that gauge anomalies are manifestly absent, i.e. that the background field master equations (\[treide\]) hold exactly at the regularized level. We first work on the classical action $S=\hat{S}+\bar{S}$ of (\[sbaccagf\]) and define a background-preserving subtraction algorithm. Then we generalize the results to non-background-preserving actions.
Call $S_{n}$ and $\Gamma _{n}$ the action and the $\Gamma $ functional renormalized up to $n$ loops included, with $S_{0}=S$, and write the loop expansion as $$\Gamma _{n}=\sum_{k=0}^{\infty }\hbar ^{k}\Gamma _{n}^{(k)}.$$ The inductive assumptions are that $S_{n}$ has the form (\[assu\]), with $\bar{S}$ given by (\[sbar\]), and $$\begin{aligned} S_{n} &=&S+\text{poles},\qquad \Gamma _{n}^{(k)}<\infty ~~\forall k\leqslant n, \label{assu1} \\ \llbracket S_{n},S_{n}\rrbracket &=&\mathcal{O}(\hbar ^{n+1}),\qquad \llbracket \bar{S},S_{n}\rrbracket =0, \label{assu2}\end{aligned}$$ where “poles” refers to the divergences of the dimensional regularization. Clearly, the assumptions (\[assu1\]) and (\[assu2\]) are satisfied for $n=0$. Using formulas (\[give\]) and recalling that $\llbracket S_{n},S_{n}\rrbracket $ is a local insertion of order $\mathcal{O}(\hbar ^{n+1})$, we have $$\llbracket \Gamma _{n},\Gamma _{n}\rrbracket =\langle \llbracket S_{n},S_{n}\rrbracket \rangle =\llbracket S_{n},S_{n}\rrbracket +\mathcal{O}(\hbar ^{n+2}),\qquad \llbracket \bar{S},\Gamma _{n}\rrbracket =\langle \llbracket \bar{S},S_{n}\rrbracket \rangle =0. \label{gnback2}$$ By $\llbracket S,S\rrbracket =0$ and the first of (\[assu1\]), $\llbracket S_{n},S_{n}\rrbracket $ is made of pure poles. Now, take the order $\hbar ^{n+1}$ of equations (\[gnback2\]) and then their divergent parts. The second of (\[assu1\]) tells us that all subdivergences are subtracted away, so the order-$\hbar ^{n+1}$ divergent part $\Gamma _{n\text{ div}}^{(n+1)}$ of $\Gamma _{n}$ is a local functional. We obtain $$\llbracket S,\Gamma _{n\text{ div}}^{(n+1)}\rrbracket =\frac{1}{2}\llbracket S_{n},S_{n}\rrbracket +\mathcal{O}(\hbar ^{n+2}),\qquad \llbracket \bar{S},\Gamma _{n\text{ div}}^{(n+1)}\rrbracket =0. \label{gn2back}$$ Define $$S_{n+1}=S_{n}-\Gamma _{n\text{ div}}^{(n+1)}.
\label{snp1back}$$ Since $S_{n}$ has the form (\[assu\]), $\Gamma _{n}$ has the form (\[becco\]), therefore both $\hat{\Gamma}_{n}$ and $\Gamma _{n\text{ div}}^{(n+1)}$ are ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent, which ensures that $S_{n+1}$ has the form (\[assu\]) (with $\bar{S}$ given by (\[sbar\])). Moreover, the first inductive assumption of (\[assu1\]) is promoted to $S_{n+1}$. The diagrams constructed with the vertices of $S_{n+1} $ are the diagrams of $S_{n}$, plus new diagrams containing vertices of $-\Gamma _{n\text{ div}}^{(n+1)}$; therefore $$\Gamma _{n+1}^{(k)}=\Gamma _{n}^{(k)}<\infty ~~\forall k\leqslant n,\qquad \Gamma _{n+1}^{(n+1)}=\Gamma _{n}^{(n+1)}-\Gamma _{n\text{ div}}^{(n+1)}<\infty ,$$ which promotes the second inductive assumption of (\[assu1\]) to $n+1$ loops. Finally, formulas (\[gn2back\]) and (\[snp1back\]) give $$\llbracket S_{n+1},S_{n+1}\rrbracket =\llbracket S_{n},S_{n}\rrbracket -2\llbracket S,\Gamma _{n\text{ div}}^{(n+1)}\rrbracket +\mathcal{O}(\hbar ^{n+2})=\mathcal{O}(\hbar ^{n+2}),\qquad \llbracket \bar{S},S_{n+1}\rrbracket =0,$$ so (\[assu2\]) are also promoted to $n+1$ loops. We conclude that the renormalized action $S_{R}=S_{\infty }$ and the renormalized generating functional $\Gamma _{R}=\Gamma _{\infty }$ satisfy the background field master equations $$\llbracket S_{R},S_{R}\rrbracket =\llbracket \bar{S},S_{R}\rrbracket =0,\qquad \llbracket \Gamma _{R},\Gamma _{R}\rrbracket =\llbracket \bar{S},\Gamma _{R}\rrbracket =0. 
\label{finback}$$ For later convenience we write down the form of $S_{R}$, which is $$S_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)+\bar{S}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)-\int \mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }. \label{sr1}$$ In the usual (non-background field) approach the results just derived hold if we just ignore background fields and sources, as well as background transformations, and use the standard parentheses $(X,Y)$ instead of $\llbracket X,Y\rrbracket $. Then the subtraction algorithm starts with a classical action $S(\Phi ,K)$ that satisfies the usual master equation $(S,S)=0$ exactly at the regularized level and ends with a renormalized action $S_{R}(\Phi ,K)=S_{\infty }(\Phi ,K)$ and a renormalized generating functional $\Gamma _{R}(\Phi ,K)=\Gamma _{\infty }(\Phi ,K)$ that satisfy the usual master equations $(S_{R},S_{R})=(\Gamma _{R},\Gamma _{R})=0$. 
In the presence of background fields ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ and background sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$, ignoring invariance under background transformations (encoded in the parentheses $\llbracket \bar{S},S\rrbracket $, $\llbracket \bar{S},S_{n}\rrbracket $, $\llbracket \bar{S},S_{R}\rrbracket $ and similar ones for the $\Gamma $ functionals), we can generalize the results found above to any classical action $S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ that satisfies $\llbracket S,S\rrbracket =0$ at the regularized level and is such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent. Indeed, these assumptions allow us to apply theorem \[thb\], instead of formulas (\[give\]), which is enough to go through the subtraction algorithm ignoring the parentheses $\llbracket \bar{S},X\rrbracket $. We have $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}\Gamma /\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}S_{n}/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ for every $n$. 
Thus, we conclude that a classical action $S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ that satisfies $\llbracket S,S\rrbracket =0$ at the regularized level and is such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent gives a renormalized action $S_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ and a $\Gamma $ functional $\Gamma _{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ that satisfy $\llbracket S_{R},S_{R}\rrbracket =\llbracket \Gamma _{R},\Gamma _{R}\rrbracket =0$ and $\delta _{l}S_{R}/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}\Gamma _{R}/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$. The renormalization algorithm of this section is a generalization to the background field method of the procedure first given in ref. [@lavrov]. Since it subtracts divergences just as they come, as emphasized by formula (\[snp1back\]), we call it “raw” subtraction [@regnocoho], to distinguish it from algorithms where divergences are subtracted away at each step by means of parameter redefinitions and canonical transformations. The raw subtraction does not ensure RG invariance [@regnocoho], because it subtracts divergent terms even when there is no (running) parameter associated with them. For the same reason, it tells us very little about parametric completeness.
In power-counting renormalizable theories the raw subtraction is satisfactory, since we can start from a classical action $S_{c}$ that already contains all gauge-invariant terms that are generated back by renormalization. Nevertheless, in nonrenormalizable theories, such as quantum gravity, effective field theories and nonrenormalizable extensions of the Standard Model, in principle renormalization can modify the symmetry transformations in physically observable ways (see ref. [@regnocoho] for a discussion about this possibility). In section 5 we prove that this actually does not happen under the assumptions we have made in this paper; namely when gauge anomalies are manifestly absent, the gauge algebra is irreducible and closes off shell, and $R^{\alpha }(\Phi )$ are quadratic functions of the fields $\Phi $. Precisely, renormalization affects the symmetry only by means of canonical transformations and parameter redefinitions. Then, to achieve parametric completeness it is sufficient to include all gauge-invariant terms in the classical action $S_{c}(\phi )$, as classified by the starting gauge symmetry. The background field method is crucial to prove this result without invoking involved cohomological classifications. Gauge dependence ================ In this section we study the dependence on the gauge fixing and the renormalization of canonical transformations. We first derive the differential equations that govern gauge dependence; then we integrate them and finally use the outcome to describe the renormalized canonical transformation that switches between the background field approach and the conventional approach. These results will be useful in the next section to prove parametric completeness. The parameters of a canonical transformation are associated with changes of field variables and changes of gauge fixing. For brevity we call all of them “gauge-fixing parameters” and denote them with $\xi $.
Let (\[cannonaback\]) be a tree-level canonical transformation satisfying (\[assumback\]). We write $Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\xi )$ to emphasize the $\xi $ dependence of $Q$. We prove that for every gauge-fixing parameter $\xi $ there exists a local ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent functional $Q_{R,\xi }$ such that $$Q_{R,\xi }=\widetilde{Q_{\xi }}+\mathcal{O}(\hbar )\text{-poles},\qquad \langle Q_{R,\xi }\rangle <\infty , \label{babaoback}$$ and $$\frac{\partial S_{R}}{\partial \xi }=\llbracket S_{R},Q_{R,\xi }\rrbracket ,\qquad \llbracket \bar{S},Q_{R,\xi }\rrbracket =0,\qquad \frac{\partial \Gamma _{R}}{\partial \xi }=\llbracket \Gamma _{R},\langle Q_{R,\xi }\rangle \rrbracket , \label{backgind}$$ where $Q_{\xi }=\partial Q/\partial \xi $, $\widetilde{Q_{\xi }}$ is defined as shown in (\[tildedfback\]) and the average is calculated with the action $S_{R}$. We call the first and last equations of the list (\[backgind\]) *differential equations of* *gauge dependence*. They ensure that renormalized functionals depend on gauge-fixing parameters in a cohomologically exact way. Later we integrate equations (\[backgind\]) and move every gauge dependence inside a (renormalized) canonical transformation. A consequence is that physical quantities are gauge independent. We derive (\[backgind\]) proceeding inductively in the number of loops, as usual. 
The inductive assumption is that there exists a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent local functional $Q_{n,\xi }=\widetilde{Q_{\xi }}+\mathcal{O}(\hbar )$-poles such that $\langle Q_{n,\xi }\rangle $ is convergent up to the $n$th loop included (the average being calculated with the action $S_{n}$) and $$\frac{\partial S_{n}}{\partial \xi }=\llbracket S_{n},Q_{n,\xi }\rrbracket +\mathcal{O}(\hbar ^{n+1}),\qquad \llbracket \bar{S},Q_{n,\xi }\rrbracket =0. \label{inda1back}$$ Applying the identity (\[thesis\]), which here holds with the parentheses $\llbracket X,Y\rrbracket $, we easily see that $Q_{0,\xi }=\widetilde{Q_{\xi }}$ satisfies (\[inda1back\]) for $n=0$. Indeed, taking $\chi =S$ and noting that $\left. \partial S^{\prime }/\partial \xi \right| _{\Phi ^{\prime },K^{\prime }}=0$, since the parameter $\xi $ is absent before the transformation (a situation that we describe using primed variables), we get the first relation of (\[inda1back\]), without $\mathcal{O}(\hbar )$ corrections. Applying (\[thesis\]) to $\chi =\bar{S}$ and recalling that $\bar{S}$ is invariant, we get the second relation of (\[inda1back\]). Let $Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}$ denote the $\mathcal{O}(\hbar ^{n+1})$ divergent part of $\langle Q_{n,\xi }\rangle $. The inductive assumption ensures that all subdivergences are subtracted away, so $Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}$ is local. Define $$Q_{n+1,\xi }=Q_{n,\xi }-Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}. \label{refdback}$$ Clearly, $Q_{n+1,\xi }$ is ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent and equal to $\widetilde{Q_{\xi }}+\mathcal{O}(\hbar )$-poles. 
Moreover, by construction $\langle Q_{n+1,\xi }\rangle $ is convergent up to the $(n+1)$-th loop included, where the average is calculated with the action $S_{n+1}$. Now, corollary \[corolla\] tells us that $\llbracket \bar{S},Q_{n,\xi }\rrbracket =0$ and $\llbracket \bar{S},S_{n}\rrbracket =0$ imply $\llbracket \bar{S},\langle Q_{n,\xi }\rangle \rrbracket =0$. Taking the $\mathcal{O}(\hbar ^{n+1})$ divergent part of this formula we obtain $\llbracket \bar{S},Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}\rrbracket =0$; therefore the second formula of (\[inda1back\]) is promoted to $n+1$ loops. Applying corollary \[cora\] to $\Gamma _{n}$ and $S_{n}$, with $X=Q_{n,\xi }$, we have the identity $$\frac{\partial \Gamma _{n}}{\partial \xi }=\llbracket \Gamma _{n},\langle Q_{n,\xi }\rangle \rrbracket +\left\langle \frac{\partial S_{n}}{\partial \xi }-\llbracket S_{n},Q_{n,\xi }\rrbracket \right\rangle +\frac{1}{2}\left\langle \llbracket S_{n},S_{n}\rrbracket \hspace{0.01in}Q_{n,\xi }\right\rangle _{\Gamma }, \label{provef}$$ where $\left\langle AB\right\rangle _{\Gamma }$ denotes the one-particle irreducible diagrams with one $A$ insertion and one $B$ insertion. Now, observe that if $A=\mathcal{O}(\hbar ^{n_{A}})$ and $B=\mathcal{O}(\hbar ^{n_{B}})$ then $\left\langle AB\right\rangle _{\Gamma }=\mathcal{O}(\hbar ^{n_{A}+n_{B}+1})$, since the $A,B$ insertions can be connected only by loops. Let us take the $\mathcal{O}(\hbar ^{n+1})$ divergent part of (\[provef\]). By the inductive assumption (\[assu2\]), the last term of (\[provef\]) can be neglected. By the inductive assumption (\[inda1back\]) we can drop the average in the second-to-last term. 
We thus get $$\frac{\partial \Gamma _{n\ \text{div}}^{(n+1)}}{\partial \xi }=\llbracket \Gamma _{n\ \text{div}}^{(n+1)},Q_{0,\xi }\rrbracket +\llbracket S,Q_{n,\xi \hspace{0.01in}\text{div}}^{(n+1)}\rrbracket +\frac{\partial S_{n}}{\partial \xi }-\llbracket S_{n},Q_{n,\xi }\rrbracket +\mathcal{O}(\hbar ^{n+2}).$$ Using this fact, (\[snp1back\]) and (\[refdback\]) we obtain $$\frac{\partial S_{n+1}}{\partial \xi }=\llbracket S_{n+1},Q_{n+1,\xi }\rrbracket +\mathcal{O}(\hbar ^{n+2}), \label{sunpi}$$ which promotes the first inductive hypothesis of (\[inda1back\]) to order $\hbar ^{n+1}$. When $n$ is taken to infinity, the first two formulas of (\[backgind\]) follow, with $Q_{R,\xi }=Q_{\infty ,\xi }$. The third identity of (\[backgind\]) follows from the first one, using (\[provef\]) with $n=\infty $ and $\llbracket \hat{S}_{R},\hat{S}_{R}\rrbracket =0$. This concludes the derivation of (\[backgind\]). Integrating the differential equations of gauge dependence {#integrating} ---------------------------------------------------------- Now we integrate the first two equations of (\[backgind\]) and find the renormalized canonical transformation that corresponds to a tree-level transformation (\[cannonaback\]) satisfying (\[assumback\]). 
Specifically, we prove the following statement: there exists a background-preserving canonical transformation $$F_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{A}K_{A}^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{A}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{A}^{\prime }+Q_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\xi ), \label{finalcanback}$$ where $Q_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\xi )=Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\xi )+\mathcal{O}(\hbar )$ is a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$-independent local functional, such that the transformed action $S_{f}(\Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=S_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu },\xi )$ is $\xi $ independent and invariant under background transformations: $$\frac{\partial S_{f}}{\partial \xi }=0,\qquad \llbracket \bar{S},S_{f}\rrbracket =0. \label{gindep2back}$$ *Proof*. To prove this statement we introduce a new parameter $\zeta $ multiplying the whole functional $Q$ of (\[cannonaback\]), as in (\[fg\]). We know that $\llbracket \bar{S},Q\rrbracket =0$ implies $\llbracket \bar{S},\tilde{Q}\rrbracket =0$. If we prove that the $\zeta $ dependence can be reabsorbed into a background-preserving canonical transformation, we also prove the same result for every gauge-fixing parameter $\xi $, and hence for all of them together. 
The differential equations of gauge dependence found above obviously apply with $\xi \rightarrow \zeta $. Specifically, we show that the $\zeta $ dependence can be reabsorbed in a sequence of background-preserving canonical transformations $S_{R\hspace{0.01in}n}\rightarrow S_{R\hspace{0.01in}n+1}$ (with $S_{R\hspace{0.01in}0}=S_{R}$), generated by $$F_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{A}K_{A}^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{A}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{A}^{\prime }+H_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\zeta ), \label{fn}$$ where $H_{n}=\mathcal{O}(\hbar ^{n})$, and such that $$\frac{\partial S_{R\hspace{0.01in}n}}{\partial \zeta }=\llbracket S_{R\hspace{0.01in}n},T_{n}\rrbracket ,\qquad T_{n}=\mathcal{O}(\hbar ^{n}). \label{tn}$$ The functionals $T_{n}$ and $H_{n}$ are determined by the recursive relations $$\begin{aligned} T_{n+1}(\Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\zeta ) &=&T_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K,\zeta )-\widetilde{\frac{\partial H_{n}}{\partial \zeta }}, \label{d1} \\ H_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\zeta ) &=&\int_{0}^{\zeta }d\zeta ^{\prime }\hspace{0.01in}T_{n,n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime },\zeta ^{\prime }), \label{d2}\end{aligned}$$ with the initial conditions $$T_{0}=Q_{R,\zeta },\qquad H_{0}=\zeta Q.$$ In formula (\[d1\]) the tilde operation (\[tildedfback\]) on $\partial H_{n}/\partial \zeta $ and the canonical transformation $\Phi ,K\rightarrow \Phi ^{\prime },K^{\prime }$ are the ones defined by $F_{n}$. 
In formula (\[d2\]) $T_{n,n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime })$ denotes the contributions of order $\hbar ^{n}$ to $T_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime }))$, the function $K(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime })$ also being determined by $F_{n}$. Note that for $n>0$ we have $T_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime }))=T_{n}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K^{\prime })+\mathcal{O}(\hbar ^{n+1})$, therefore formula (\[d2\]), which determines $H_{n}$ (and so $F_{n}$), does not really need $F_{n}$ on the right-hand side. Finally, (\[d2\]) is self-consistent for $n=0$. Formula (\[thesis\]) of the appendix describes how the dependence on parameters is modified by a canonical transformation. Applying it to (\[tn\]), we get $$\frac{\partial S_{R\hspace{0.01in}n+1}}{\partial \zeta }=\frac{\partial S_{R\hspace{0.01in}n}}{\partial \zeta }-\llbracket S_{R\hspace{0.01in}n},\widetilde{\frac{\partial H_{n}}{\partial \zeta }}\rrbracket =\llbracket S_{R\hspace{0.01in}n},T_{n}-\widetilde{\frac{\partial H_{n}}{\partial \zeta }}\rrbracket ,$$ whence (\[d1\]) follows. For $n=0$ the first formula of (\[babaoback\]) gives $T_{0}=\widetilde{Q}+\mathcal{O}(\hbar )$, therefore $T_{1}=\mathcal{O}(\hbar )$. Then (\[d2\]) gives $H_{1}=\mathcal{O}(\hbar )$. For $n>0$ the order $\hbar ^{n}$ of $T_{n+1}$ vanishes by formula (\[d2\]); therefore $T_{n+1}=\mathcal{O}(\hbar ^{n+1})$ and $H_{n+1}=\mathcal{O}(\hbar ^{n+1})$, as desired. Consequently, $S_{f}\equiv S_{R\hspace{0.01in}\infty }$ is $\zeta $ independent, since (\[tn\]) implies $\partial S_{R\hspace{0.01in}\infty }/\partial \zeta =0$. 
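The way (\[d1\]) and (\[d2\]) conspire to push $T_{n}$ to higher and higher orders can be checked in a toy setting. The sketch below makes the drastic simplification that the tilde operation and the change of variables $K\rightarrow K^{\prime }$ act as the identity (in the text they differ from it only at higher orders, which is precisely what the recursion controls), so that $H_{n}$ is the $\zeta $-integral of the order-$\hbar ^{n}$ part of $T_{n}$ and step $n$ removes exactly that part:

```python
from fractions import Fraction

# Toy check of the recursion T_{n+1} = T_n - tilde(dH_n/dzeta), with
# H_n(zeta) = int_0^zeta [T_n]_n dzeta'.  Simplifying assumption of this
# sketch: the tilde operation and the map K -> K' are the identity, so
# tilde(dH_n/dzeta) is exactly the order-hbar^n part of T_n.
# A "functional" is a dict {hbar order: list of zeta-polynomial coeffs}.

def integrate(p):       # int_0^zeta p(zeta') dzeta'
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

def differentiate(p):   # d/dzeta
    return [k * c for k, c in enumerate(p)][1:] or [Fraction(0)]

def step(T, n):
    """One transformation F_n: (d2) builds H_n from the order-hbar^n
    part of T_n, and (d1) then removes exactly that part."""
    p = T.get(n, [Fraction(0)])
    dH = differentiate(integrate(p))        # equals p exactly in the toy
    out = dict(T)
    out[n] = [a - b for a, b in zip(p, dH)]
    return out

# A toy T_0 = Q_{R,zeta} with zeta-dependent pieces at several orders:
T = {0: [Fraction(1), Fraction(2)],   # 1 + 2*zeta   at order hbar^0
     1: [Fraction(0), Fraction(3)],   # 3*zeta       at order hbar^1
     2: [Fraction(5)]}                # 5            at order hbar^2

for n in range(3):
    T = step(T, n)

# Every retained order now vanishes: the toy analogue of T_infinity = 0.
assert all(all(c == 0 for c in p) for p in T.values())
```

After the loop every retained order vanishes, the toy analogue of $T_{\infty }=0$ and hence of $\partial S_{R\hspace{0.01in}\infty }/\partial \zeta =0$.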
Observe that ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$ and ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independence is preserved at each step. Finally, all operations defined by (\[d1\]) and (\[d2\]) are background preserving. We conclude that the canonical transformation $F_{R}$ obtained by composing the $F_{n}$s solves the problem. Using (\[gindep2back\]) and (\[give\]) we conclude that in the new variables $$\frac{\partial \Gamma _{f}}{\partial \xi }=\left\langle \frac{\partial S_{f}}{\partial \xi }\right\rangle =0,\qquad \llbracket \bar{S},\Gamma _{f}\rrbracket =0, \label{finalmenteback}$$ for all gauge-fixing parameters $\xi $. Non-background-preserving canonical transformations --------------------------------------------------- In the usual approach, the results derived so far apply with straightforward modifications. It is sufficient to ignore the background fields and sources, as well as the background transformations, and use the standard parentheses $(X,Y)$ instead of $\llbracket X,Y\rrbracket $. Thus, given a tree-level canonical transformation generated by $$F(\Phi ,K^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+Q(\Phi ,K^{\prime },\xi ), \label{f1}$$ there exists a local functional $Q_{R,\xi }$ satisfying (\[babaoback\]) such that $$\frac{\partial S_{R}}{\partial \xi }=(S_{R},Q_{R,\xi }),\qquad \frac{\partial \Gamma _{R}}{\partial \xi }=(\Gamma _{R},\langle Q_{R,\xi }\rangle ), \label{br}$$ and there exists a renormalized canonical transformation $$F_{R}(\Phi ,K^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+Q_{R}(\Phi ,K^{\prime },\xi ), \label{f2}$$ where $Q_{R}(\Phi ,K^{\prime },\xi )=Q(\Phi ,K^{\prime },\xi )+\mathcal{O}(\hbar )$ is a local functional, such that the transformed action $S_{f}(\Phi ^{\prime },K^{\prime })=S_{R}(\Phi ,K,\xi )$ is $\xi $ independent. 
Said differently, the entire $\xi $ dependence of $S_{R}$ is reabsorbed into the transformation: $$S_{R}(\Phi ,K,\xi )=S_{f}(\Phi ^{\prime }(\Phi ,K,\xi ),K^{\prime }(\Phi ,K,\xi )).$$ In the presence of background fields ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ and background sources ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$, dropping assumption (\[assumback\]) and ignoring invariance under background transformations, encoded in the parentheses $\llbracket \bar{S},X\rrbracket $, the results found above can be easily generalized to any classical action $S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ that solves $\llbracket S,S\rrbracket =0$ and is such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent, and to any ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent canonical transformation. Indeed, these assumptions are enough to apply theorem \[thb\] and corollary \[cora\], and go through the derivation ignoring the parentheses $\llbracket \bar{S},X\rrbracket $. The tree-level canonical transformation is described by a generating functional of the form $$F(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+Q(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },K^{\prime },\xi ). 
\label{fa1}$$ We still find the differential equations $$\frac{\partial S_{R}}{\partial \xi }=\llbracket S_{R},Q_{R,\xi }\rrbracket ,\qquad \frac{\partial \Gamma _{R}}{\partial \xi }=\llbracket \Gamma _{R},\langle Q_{R,\xi }\rangle \rrbracket , \label{eqw}$$ where $Q_{R,\xi }$ satisfies (\[babaoback\]). When we integrate the first of these equations with the procedure defined above, we build a renormalized canonical transformation $$F_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{\alpha}K_{\alpha}^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha}^{\prime }+Q_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },K^{\prime },\xi ), \label{fra1}$$ where $Q_{R}=Q+\mathcal{O}(\hbar )$ is a local functional, such that the transformed action $S_{f}(\Phi ^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\prime },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=S_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu },\xi )$ is $\xi $ independent. The only difference is that now $Q_{R,\xi }$, $\langle Q_{R,\xi }\rangle $, $T_{n}$, $H_{n}$ and $Q_{R}$ can depend on ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$, which does not disturb any of the arguments used in the derivation. Canonical transformations on $\Gamma $ ----------------------------- We have integrated the first equation of (\[eqw\]), and shown that the $\xi$ dependence can be reabsorbed in the canonical transformation (\[fra1\]) on the renormalized action $S_{R}$, which gives the $\xi$-independent action $S_{\rm f}$. 
We know that the generating functional $\Gamma_{\rm f}$ of one-particle irreducible Green functions determined by $S_{\rm f}$ is $\xi$ independent. We can also prove that $\Gamma_{\rm f}$ can be obtained by applying a (non-local) canonical transformation directly on $\Gamma_{R}$. To achieve this goal we integrate the second equation of (\[eqw\]). The integration algorithm is the same as the one of subsection \[integrating\], with the difference that $Q_{R,\xi}$ is replaced by $\langle Q_{R,\xi}\rangle$. The canonical transformation on $\Gamma_{R}$ has a generating functional of the form $$F_{\Gamma}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{\alpha}K_{\alpha}^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha}^{\prime }+Q_{\Gamma}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },K^{\prime },\xi ), \label{fra2}$$ where $Q_{\Gamma}$ is equal to $Q$ plus (non-local) ${\cal O}(\hbar)$ radiative corrections. The result just obtained is actually more general, and proves that if $S$ is any action that solves the master equation (it can be the classical action, the renormalized action, or any other action), then canonical transformations on $S$ correspond to canonical transformations on the $\Gamma$ functional determined by $S$. See [@quadri] for a different derivation of this result in Yang-Mills theory. 
Our line of reasoning can be recapitulated as follows: in the usual approach, ($i$) make a canonical transformation (\[f1\]) on $S$; ($ii$) derive the equations of gauge dependence for the action, which are $\partial S/\partial\xi=( S,Q_{\xi}) $; ($iii$) derive the equations of gauge dependence for the $\Gamma$ functional determined by $S$, which are $\partial \Gamma/\partial\xi=( \Gamma,\langle Q_{\xi}\rangle)$, and integrate them. The property just mentioned may sound obvious, and is often taken for granted, but actually needed to be proved. The reason is that the canonical transformations we are talking about are not true changes of field variables inside functional integrals, but mere replacements of integrands [@fieldcov]. Therefore, we cannot automatically infer how a transformation on the action $S$ affects the generating functionals $Z$, $W=\ln Z$ and $\Gamma$, and need to make some additional effort to get where we want. We recall that to skip this kind of supplementary analysis we need to use the formalism of the master functional, explained in refs. [@masterf; @mastercan]. Application ----------- An interesting application that illustrates the results of this section is the comparison between the renormalized action (\[sr1\]), which was obtained with the background field method and the raw subtraction procedure of section 3, and the renormalized action $S_{R}^{\prime }$ that can be obtained with the same raw subtraction in the usual non-background field approach. The usual approach is retrieved by picking a gauge fermion $\Psi ^{\prime }$ that depends on $\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$, such as $$\Psi ^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })=\int \bar{C}^{I}G^{Ii}(0,\partial )(\phi ^{i}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}). 
\label{gfno}$$ Making the canonical transformation generated by $$F_{\text{gf}}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime })=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\Psi ^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }) \label{backgfgen2}$$ on (\[sback\]) we find the classical action $$S^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K)+\bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }), \label{sr2c}$$ where $$\hat{S}^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K)=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })\bar{K}_{\alpha },\qquad \bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })({\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }-K_{\alpha }), \label{sr2gf}$$ and the barred sources $\bar{K}_{\alpha }$ coincide with $K_{\alpha }$ apart from $\bar{K}_{\phi }^{i}$ and $\bar{K}_{\bar{C}}^{I}$, which are $$\bar{K}_{\phi }^{i}=K_{\phi }^{i}-\bar{C}^{I}G^{Ii}(0,-\overleftarrow{\partial }),\qquad \bar{K}_{\bar{C}}^{I}=K_{\bar{C}}^{I}-G^{Ii}(0,\partial )(\phi 
^{i}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}). \label{chif2}$$ Clearly, $\hat{S}^{\prime }$ is the gauge-fixed classical action of the usual approach, apart from the shift $\Phi \rightarrow \Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$. The radiative corrections are generated only by $\hat{S}^{\prime }$ and do not affect $\bar{S}^{\prime }$. Indeed, $\hat{S}^{\prime }$ as well as the radiative corrections are unaffected by setting ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }=0$ and then shifting $\Phi $ back to $\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$, while $\bar{S}^{\prime }$ disappears doing this. Thus, $\bar{S}^{\prime }$ is nonrenormalized, and the renormalized action $S_{R}^{\prime }$ has the form $$S_{R}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\hat{S}_{R}^{\prime }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K)+\bar{S}^{\prime }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }). \label{sr2}$$ Now we compare the classical action (\[sbaccagf\]) of the background field method with the classical action (\[sr2c\]) of the usual approach. We recapitulate how they are obtained with the help of the following schemes: $$\begin{tabular}{cccccccccc} (\ref{sback}) & $\stackrel{(\ref{casbacca})}{\mathrel{\scalebox{2.5}[1]{$\longrightarrow$}}}$ & (\ref{sbacca}) & $\stackrel{(\ref{backgfgen})}{\mathrel{\scalebox{2.5}[1]{$\longrightarrow$}}}$ & $S=$ (\ref{sbaccagf}) & \qquad \qquad & (\ref{sback}) & $\stackrel{(\ref{backgfgen2})}{\mathrel{\scalebox{2.5}[1]{$ \longrightarrow$}}}$ & $S^{\prime }=$ (\ref {sr2gf}). 
& \end{tabular}$$ Above the arrows we have put references to the corresponding canonical transformations, which are (\[casbacca\]), (\[backgfgen\]) and (\[backgfgen2\]), and which commute with one another. We can interpolate between the classical actions (\[sbaccagf\]) and (\[sr2gf\]) by means of a ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent non-background-preserving canonical transformation generated by $$F_{\xi }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\xi \Delta \Psi +\xi \int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I\hspace{0.01in}\prime }, \label{ds}$$ where $\xi $ is a gauge-fixing parameter that varies from 0 to 1, and $$\Delta \Psi =\int \bar{C}^{I}\left( G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}-G^{Ii}(0,\partial )(\phi ^{i}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i})\right) .$$ Precisely, we start from the non-background field theory (\[sr2c\]) and take its variables to be the primed ones. We know that $\hat{S}^{\prime }$ depends on the combination $$\tilde{K}_{\phi }^{i\hspace{0.01in}\prime }=K_{\phi }^{i\hspace{0.01in}\prime }-\bar{C}^{I\hspace{0.01in}}G^{Ii}(0,-\overleftarrow{\partial }), \label{kip}$$ and we have $\bar{C}^{I\hspace{0.01in}}=\bar{C}^{I\hspace{0.01in}\prime }$. 
Expressing the primed fields and sources in terms of the unprimed ones and $\xi $, we find the interpolating classical action $$S_{\xi }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })-\int R^{\alpha }(\Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })\tilde{K}_{\alpha }(\xi )-\xi \int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })\tilde{K}_{\bar{C}}^{I}(\xi )-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })({\mkern2mu\underline{\mkern-2mu\smash{\tilde{K}}\mkern-2mu}\mkern2mu }_{\alpha }(\xi )-\tilde{K}_{\alpha }(\xi )), \label{sx}$$ where $\tilde{K}_{C}^{I}(\xi )=K_{C}^{I}$, $$\tilde{K}_{\phi }^{i}(\xi )=K_{\phi }^{i}-\xi \bar{C}^{I\hspace{0.01in}}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial })-(1-\xi )\bar{C}^{I\hspace{0.01in}}G^{Ii}(0,-\overleftarrow{\partial }), \label{convex}$$ while the other $\xi $-dependent tilde sources have expressions that we do not need to report here. It suffices to say that they are $K_{\phi }^{i}$ independent, such that $\delta _{r}S_{\xi }/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=-R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })$, and linear in the quantum fields $\Phi $, apart from ${\mkern2mu\underline{\mkern-2mu\smash{\tilde{K}}\mkern-2mu}\mkern2mu }_{\phi }^{i}(\xi )$, which is quadratic. Thus the action $S_{\xi }$ and the transformation $F_{\xi }$ satisfy the assumptions that allow us to apply theorem \[thb\] and corollary \[cora\]. Actually, (\[ds\]) is of type (\[fa1\]); therefore we have the differential equations (\[eqw\]) and the renormalized canonical transformation (\[fra1\]). 
We want to better characterize the renormalized version $F_{R}$ of $F_{\xi }$. We know that the derivative of the renormalized $\Gamma $ functional with respect to $\xi $ is governed by the renormalized version of the average $$\left\langle \widetilde{\frac{\partial F_{\xi }}{\partial \xi }}\right\rangle =\langle \Delta \Psi \rangle +\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I}.$$ It is easy to see that $\langle \Delta \Psi \rangle $ is independent of ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$, $B$, $K_{\bar{C}}$ and $K_{B}$, since no one-particle irreducible diagrams with such external legs can be constructed. In particular, $K_{B}^{I\hspace{0.01in}\prime }=K_{B}^{I}$. Moreover, using the explicit form (\[sx\]) of the action $S_{\xi }$ and arguments similar to the ones that lead to formulas (\[chif\]), we easily see that $\langle \Delta \Psi \rangle $ is equal to $\Delta \Psi $ plus a functional that does not depend on $K_{\phi }^{i}$ and $\bar{C}^{I}$ separately, but only on the convex combination (\[convex\]). Indeed, the $\bar{C}$-dependent terms of (\[sx\]) that do not fit into the combination (\[convex\]) are $K_{\phi }^{i}$ independent and at most quadratic in the quantum fields, so they cannot generate one-particle irreducible diagrams that have either $K_{\phi }^{i}$ or $\bar{C}$ on the external legs. Clearly, the renormalization of $\langle \Delta \Psi \rangle $ also satisfies the properties just stated for $\langle \Delta \Psi \rangle $. Following the steps of the previous section we can integrate the $\xi $ derivative and reconstruct the full canonical transformation. However, formula (\[d2\]) shows that the integration over $\xi $ must be done by keeping fixed the unprimed fields $\Phi $ and the primed sources $K^{\prime } $. 
When we do this for the zeroth canonical transformation $F_{0}$ of (\[fn\]), the combination $\tilde{K}_{\phi }^{i}(\xi )$ is turned into (\[kip\]), which is $\xi $ independent. Every other transformation $F_{n}$ of (\[fn\]) preserves the combination (\[kip\]), so the integrated canonical transformation does not depend on $K_{\phi }^{i}$ and $\bar{C}^{I}$ separately, but only on the combination $\tilde{K}_{\phi }^{i\hspace{0.01in}\prime }$, and the generating functional of the renormalized version $F_{R}$ of $F_{\xi }$ has the form $$F_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime },\xi )=\int \ \Phi ^{\alpha }K_{\alpha }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }+\xi \Delta \Psi +\xi \int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{I\hspace{0.01in}\prime }+\Delta F_{\xi }(\phi ,C,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\tilde{K}_{\phi }^{\prime },K_{C}^{\prime },\xi ). \label{fr}$$ Using this expression we can verify [*a posteriori*]{} that indeed $\tilde{K}_{\phi }^{i}(\xi )$ depends just on (\[kip\]), not on $K_{\phi }^{i\hspace{0.01in}\prime }$ and $\bar{C}^{I}$ separately. Moreover, (\[fr\]) implies $${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha \hspace{0.01in}\prime }={\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha},\qquad B^{I\hspace{0.01in}\prime }=B^{I}+\xi \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }),\qquad \bar{C}^{I\hspace{0.01in}\prime }=\bar{C}^{I},\qquad K_{B}^{I\hspace{0.01in}\prime }=K_{B}^{I}. 
\label{besid}$$ In the next section these results are used to achieve parametric completeness. Renormalization and parametric completeness =========================================== The raw renormalization algorithm of section 3 subtracts away divergences just as they come. It does not ensure, *per se*, RG invariance, for which it is necessary to prove parametric completeness, namely that all divergences can be subtracted by redefining parameters and making canonical transformations. We must show that we can include in the classical action all invariants that are generated back by renormalization, and associate an independent parameter with each of them. The purpose of this section is to show that the background field method allows us to prove parametric completeness in a rather straightforward way, making cohomological classifications unnecessary. We want to relate the renormalized actions (\[sr1\]) and (\[sr2\]). From the arguments of the previous section we know that these two actions are related by the canonical transformation generated by (\[fr\]) at $\xi =1$. We have $$\hat{S}_{R}^{\prime }(\Phi ^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime })-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })({\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }^{\prime }-K_{\alpha }^{\prime })=\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)-\int \mathcal{R}^{\alpha }(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{\alpha }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }. 
\label{ei}$$ From (\[fr\]) we find the transformation rules (\[besid\]) at $\xi =1$ and $$\phi ^{\prime }=\phi +\frac{\delta \Delta F}{\delta \tilde{K}_{\phi }^{\prime }},\qquad K_{\bar{C}}^{I}=K_{\bar{C}}^{I\hspace{0.01in}\prime }+G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i}-G^{Ii}(0,\partial )(\phi ^{i\hspace{0.01in}\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i})+\frac{\delta }{\delta \bar{C}^{I}}\int \mathcal{R}_{\bar{C}}^{J}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{J}, \label{beside}$$ where $\Delta F$ is $\Delta F_{\xi}$ at $\xi=1$. Here and below we sometimes understand indices when there is no loss of clarity. We want to express equation (\[ei\]) in terms of unprimed fields and primed sources, and then set $\phi =C={\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }=0$. We denote this operation with a subscript 0. Keeping ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\bar{C},B$ and $K^{\prime }$ as independent variables, we get $$\begin{aligned} \hat{S}_{R}^{\prime }(\Phi _{0}^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime }) &=&\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)-\int B^{I\prime }(K_{\bar{C}}^{I\hspace{0.01in}\prime }-G^{Ii}(0,\partial )(\phi _{0}^{i\hspace{0.01in}\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i})) \nonumber \\ &&-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })\frac{\delta }{\delta \bar{C}^{I}}\int \mathcal{R}_{\bar{C}}^{J}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{J\hspace{0.01in}\prime }-\int R^{\alpha }({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu 
})({\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha 0}+K_{\alpha }^{\prime }). \label{uio}\end{aligned}$$ To derive this formula we have used $$\hat{S}_{R}(\{0,0,\bar{C},B\},{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)=\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)-\int B^{I}K_{\bar{C}}^{I}, \label{mina}$$ together with (\[iddo\]), (\[besid\]) and (\[beside\]). The reason why (\[mina\]) holds is that at $C=0$ there are no objects with positive ghost numbers inside the left-hand side of this equation; therefore we can drop every object that has a negative ghost number, which means $\bar{C}$ and all sources $K$ but $K_{\bar{C}}^{I}$. Since (\[chio\]) are the only $B$ and $K_{\bar{C}}^{I}$-dependent terms, and they are not renormalized, at $\phi =0$ we find (\[mina\]). Now, consider the canonical transformation $\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\bar{C},B\},\breve{K}\rightarrow \Phi ^{\prime \prime },K^{\prime }$ defined by the generating functional $$F(\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\bar{C},B\},K^{\prime })=\int {\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }K_{\phi }^{\prime }+\int \ {\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }K_{C}^{\prime }+F_{R}(\{0,0,\bar{C},B\},{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },0,1). 
\label{for}$$ It gives the transformation rules $$\begin{aligned} \Phi ^{\hspace{0.01in}\prime \prime } &=&{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }+\Phi _{0}^{\prime },\qquad \breve{K}_{\phi }=K_{\phi }^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\phi 0},\qquad \breve{K}_{C}=K_{C}^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C0}, \\ \breve{K}_{\bar{C}}^{I} &=&K_{\bar{C}}^{I\hspace{0.01in}\prime }-G^{Ii}(0,\partial )\phi ^{i\hspace{0.01in}\prime \prime }+\frac{\delta }{\delta \bar{C}^{I}}\int \mathcal{R}_{\bar{C}}^{J}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })K_{B}^{J\hspace{0.01in}\prime },\qquad \breve{K}_{B}=K_{B}^{\prime },\end{aligned}$$ which turn formula (\[uio\]) into $$\begin{aligned} \hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime }) &=&\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)-\int R_{\phi }^{i}({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })\breve{K}_{\phi }^{i}-\int R_{C}^{I}({\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu })\breve{K}_{C}^{I} \nonumber \\ &&-\int B^{I}\breve{K}_{\bar{C}}^{I}-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })\breve{K}_{\bar{C}}^{I}-\int \mathcal{R}_{B}^{I}(B,{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu })\breve{K}_{B}^{I}. \label{fin0}\end{aligned}$$ Note that $(\hat{S}_{R}^{\prime },\hat{S}_{R}^{\prime })=0$ is automatically satisfied by (\[fin0\]). Indeed, we know that $\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)$ is invariant under background transformations, and so is $\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)$, because $\Phi $ and $K$ transform as matter fields. 
We can classify $\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)$ using its gauge invariance. Let $\mathcal{G}_{i}(\phi )$ denote a basis of gauge-invariant local functionals constructed with the physical fields $\phi $. Then $$\hat{S}_{R}(0,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0)=\sum_{i}\tau _{i}\mathcal{G}_{i}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }), \label{comple}$$ for suitable constants $\tau _{i}$. Now we manipulate these results in several ways to make their consequences more explicit. To prepare the next discussion it is convenient to relabel $\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu },\bar{C},B\}$ as $\breve{\Phi}^{\alpha }$ and $K^{\prime }$ as $K^{\prime \prime }$. Then formulas (\[for\]) and (\[fin0\]) tell us that the canonical transformation $$F_{1}(\breve{\Phi},K^{\prime \prime })=\int \breve{\phi}K_{\phi }^{\prime \prime }+\int \ \breve{C}K_{C}^{\prime \prime }+F_{R}(\{0,0,{\breve{\bar{C}}},\breve{B}\},\{\breve{\phi},\breve{C}\},K^{\prime \prime },0,1) \label{forex}$$ is such that $$\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=\hat{S}_{R}(0,\breve{\phi},0)-\int \bar{R}^{\alpha }(\breve{\Phi})\breve{K}_{\alpha }. \label{fin0ex}$$ #### Parametric completeness Making the further canonical transformation $\breve{\Phi},\breve{K}\rightarrow \Phi ,K$ generated by $$F_{2}(\Phi ,\breve{K})=\int \Phi ^{\alpha }\breve{K}_{\alpha }+\int \bar{C}^{I}G^{Ii}(0,\partial )\phi ^{i}-\int \mathcal{R}_{\bar{C}}^{I}(\bar{C},C)\breve{K}_{B}^{I},$$ we get $$\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=\hat{S}_{R}(0,\phi ,0)-\int R^{\alpha }(\Phi )\bar{K}_{\alpha }, \label{keynb}$$ where the barred sources are the ones of (\[chif2\]) at ${\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }=0$. 
If we start from the most general gauge-invariant classical action, $$S_{c}(\phi ,\lambda )\equiv \sum_{i}\lambda _{i}\mathcal{G}_{i}(\phi ), \label{scgen}$$ where $\lambda _{i}$ are physical couplings (apart from normalization constants), identities (\[comple\]) and (\[keynb\]) give $$\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=S_{c}(\phi ,\tau (\lambda ))-\int R^{\alpha }(\Phi )\bar{K}_{\alpha }. \label{kk}$$ This result proves parametric completeness in the usual approach, because it tells us that the renormalized action of the usual approach is equal to the classical action $\hat{S}^{\prime }(\Phi ,K)$ (check (\[sr2c\])-(\[sr2gf\]) at ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }={\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }=0$), apart from parameter redefinitions $\lambda \rightarrow \tau $ and a canonical transformation. In this derivation the role of the background field method is just to provide the key tool to prove the statement. We can also describe parametric completeness in the background field approach. 
Making the canonical transformation $\breve{\Phi},\breve{K}\rightarrow \hat{\Phi},\hat{K}$ generated by $$F_{2}^{\prime }(\hat{\Phi},\breve{K})=\int \hat{\Phi}^{\alpha }\breve{K}_{\alpha }+\int {\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu }^{i}\breve{K}_{\phi }^{i}+\int {\hat{\bar{C}}}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\hat{\phi}^{i}-\int \mathcal{R}_{\bar{C}}^{I}({\hat{\bar{C}}},\hat{C})\breve{K}_{B}^{I}, \label{cancan}$$ formula (\[fin0ex\]) becomes $$\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=S_{c}(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\tau )-\int R^{\alpha }(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{C},{\hat{\bar{C}}},\hat{B})\widetilde{\hat{K}}_{\alpha }, \label{y0}$$ where the relations between tilde and nontilde sources are the hat versions of (\[chif\]). Next, we make a ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ translation on the left-hand side of (\[y0\]) applying the canonical transformation $\Phi ^{\prime \prime },K^{\prime \prime }$ $\rightarrow \Phi ^{\prime },K^{\prime }$ generated by $$F_{3}(\Phi ^{\prime },K^{\prime \prime })=\int (\Phi ^{\alpha \hspace{0.01in}\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha })K_{\alpha }^{\prime \prime }. \label{traslacan}$$ Doing so, $\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })$ is turned into $\hat{S}_{R}^{\prime }(\Phi ^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime })$. At this point, we want to compare the result we have obtained with (\[ei\]). Recall that formula (\[ei\]) involves the canonical transformation (\[fr\]) at $\xi =1$. 
If we set ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\prime }=0$ we project that canonical transformation onto a canonical transformation $\Phi ,K\rightarrow \Phi ^{\prime },K^{\prime }$ generated by $F_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime },0,1)$, where ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ is regarded as a spectator. Furthermore, it is convenient to set ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }=0$, because then formula (\[ei\]) turns into $$\hat{S}_{R}^{\prime }(\Phi ^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime })=\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K),$$ where now primed fields and sources are related to the unprimed ones by the canonical transformation generated by $F_{R}(\Phi ,\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0\},K^{\prime },0,1)$. Finally, recalling that $\hat{S}_{R}^{\prime }(\Phi ^{\prime \prime },K^{\prime \prime })=\hat{S}_{R}^{\prime }(\Phi ^{\prime }+{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K^{\prime })$ and using (\[y0\]) we get the key formula we wanted, namely $$\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },K)=S_{c}(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\tau (\lambda ))-\int R^{\alpha }(\hat{\phi}+{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{C},{\hat{\bar{C}}},\hat{B})\widetilde{\hat{K}}_{\alpha }. \label{key}$$ Observe that formula (\[key0\]) of the introduction is formula (\[key\]) with antighosts and Lagrange multipliers switched off. 
Checking (\[sbaccagf\]), formula (\[key\]) tells us that the renormalized background field action $\hat{S}_{R}$ is equal to the classical background field action $\hat{S}_{\text{gf}}$ up to parameter redefinitions $\lambda \rightarrow \tau $ and a canonical transformation. This proves parametric completeness in the background field approach. The canonical transformation $\Phi ,K\rightarrow \hat{\Phi},\hat{K}$ involved in formula (\[key\]) is generated by the functional $\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K})$ obtained composing the transformations generated by $F_{R}(\Phi ,\{{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },0\},K^{\prime },0,1)$, $F_{1}(\breve{\Phi},K^{\prime \prime })$, $F_{2}^{\prime }(\hat{\Phi},\breve{K})$ and $F_{3}(\Phi ^{\prime },K^{\prime \prime })$ of formulas (\[fr\]), (\[forex\]), (\[cancan\]) and (\[traslacan\]) (at ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }=0$). Working out the composition it is easy to prove that $${\hat{\bar{C}}}=\bar{C},\qquad \hat{B}=B,\qquad \hat{K}_{B}=K_{B},\qquad \hat{K}_{\bar{C}}^{I}-G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\hat{\phi}^{i}=K_{\bar{C}}^{I}-G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\partial )\phi ^{i},$$ and therefore $\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K})$ has the form $$\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K})=\int \Phi ^{\alpha }\hat{K}_{\alpha }+\Delta \hat{F}(\phi ,C,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K}_{\phi }^{i}-{\hat{\bar{C}}}^{I}G^{Ii}({\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },-\overleftarrow{\partial }),\hat{K}_{C}), \label{frhat}$$ where $\Delta \hat{F}=\mathcal{O}(\hbar )$-poles. 
Examples ======== In this section we give two examples, non-Abelian gauge field theories and quantum gravity, which are also useful to familiarize oneself with the notation and the tools used in the paper. We switch to Minkowski spacetime. The dimensional-regularization technique is understood. Yang-Mills theory ----------------- The first example is non-Abelian Yang-Mills theory with simple gauge group $G $ and structure constants $f^{abc}$, coupled to fermions $\psi ^{i}$ in some representation described by anti-Hermitian matrices $T_{ij}^{a}$. The classical action $S_{c}(\phi )$ can be restricted by power counting, or enlarged to include all invariants of (\[scgen\]). The nonminimal non-gauge-fixed action $S$ is the sum $\hat{S}+\bar{S}$ of (\[deco\]) and (\[sbar\]). We find $$\begin{aligned} \hat{S} &=&S_{c}(\phi +{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu })+\int \hspace{0.01in}\left[ g(\bar{\psi}^{i}+{\mkern2mu\underline{\mkern-2mu\smash{\bar{\psi}}\mkern-2mu}\mkern2mu }^{i})T_{ij}^{a}C^{a}K_{\psi }^{j}+g\bar{K}_{\psi }^{i}T_{ij}^{a}C^{a}(\psi ^{j}+{\mkern2mu\underline{\mkern-2mu\smash{\psi }\mkern-2mu}\mkern2mu }^{j})\right] \\ &&-\int \hspace{0.01in}\left[ ({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }C^{a}+gf^{abc}A_{\mu }^{b}C^{c})K^{\mu a}-\frac{1}{2}gf^{abc}C^{b}C^{c}K_{C}^{a}+B^{a}K_{\bar{C}}^{a}\right] ,\end{aligned}$$ and $$\begin{aligned} \bar{S} &=&gf^{abc}\int \hspace{0.01in}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{b}(A_{\mu }^{c}K^{\mu a}+C^{c}K_{C}^{a}+\bar{C}^{c}K_{\bar{C}}^{a}+B^{c}K_{B}^{a})+g\int \hspace{0.01in}\left[ \bar{\psi}^{i}T_{ij}^{a}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}K_{\psi }^{j}+\bar{K}_{\psi }^{i}T_{ij}^{a}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}\psi ^{j}\right] \nonumber \\ &&-\int \hspace{0.01in}\left[ ({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu 
}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}){\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }^{\mu a}-\frac{1}{2}gf^{abc}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{b}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{c}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{C}^{a}-g{\mkern2mu\underline{\mkern-2mu\smash{\bar{\psi}}\mkern-2mu}\mkern2mu }^{i}T_{ij}^{a}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\psi }^{j}-g{\mkern2mu\underline{\mkern-2mu\smash{\bar{K}}\mkern-2mu}\mkern2mu }_{\psi }^{i}T_{ij}^{a}{\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }^{a}{\mkern2mu\underline{\mkern-2mu\smash{\psi }\mkern-2mu}\mkern2mu }^{j}\right] . \label{sbary}\end{aligned}$$ The covariant derivative ${\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }$ is the background one; for example ${\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }\Lambda ^{a}=\partial _{\mu }\Lambda ^{a}+gf^{abc}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }^{b}\Lambda ^{c}$. The first line of (\[sbary\]) shows that all quantum fields transform as matter fields under background transformations. It is easy to check that $\hat{S}$ and $\bar{S}$ satisfy $\llbracket \hat{S},\hat{S}\rrbracket =\llbracket \hat{S},\bar{S}\rrbracket =\llbracket \bar{S},\bar{S}\rrbracket =0$. 
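The off-shell, field-independent closure of the gauge algebra invoked here can be verified symbolically in a small toy setting. The following sketch (our illustration, not part of the paper) takes $G=SU(2)$, so that $f^{abc}=\epsilon^{abc}$, and two spacetime dimensions to keep the symbols small; it checks that two successive gauge variations of $A_\mu^a$ close on a third variation with composite parameter $\Lambda_3^a=gf^{abc}\Lambda_1^b\Lambda_2^c$, without using the field equations:

```python
import sympy as sp

# Illustrative check (not from the paper) of off-shell closure of the gauge
# algebra for G = SU(2), where f^{abc} = epsilon^{abc}. Two spacetime
# dimensions are used only to keep the symbols small.
x = sp.symbols('x0 x1')
g = sp.Symbol('g')
eps = lambda a, b, c: sp.LeviCivita(a, b, c)

# Gauge field A_mu^a and two gauge parameters Lambda1^a, Lambda2^a,
# all taken as arbitrary functions of the coordinates.
A = [[sp.Function(f'A{mu}{a}')(*x) for a in range(3)] for mu in range(2)]
L1 = [sp.Function(f'L1{a}')(*x) for a in range(3)]
L2 = [sp.Function(f'L2{a}')(*x) for a in range(3)]

def var(L, mu, a):
    """delta_Lambda A_mu^a = partial_mu Lambda^a + g f^{abc} A_mu^b Lambda^c."""
    return sp.diff(L[a], x[mu]) + g*sum(eps(a, b, c)*A[mu][b]*L[c]
                                        for b in range(3) for c in range(3))

def second_var(La, Lb, mu, a):
    """delta_{La} acting on delta_{Lb} A_mu^a: only A varies."""
    return g*sum(eps(a, b, c)*var(La, mu, b)*Lb[c]
                 for b in range(3) for c in range(3))

# Composite, field-independent parameter Lambda3^a = g f^{abc} Lambda1^b Lambda2^c
L3 = [g*sum(eps(a, b, c)*L1[b]*L2[c] for b in range(3) for c in range(3))
      for a in range(3)]

def var3(mu, a):
    return sp.diff(L3[a], x[mu]) + g*sum(eps(a, b, c)*A[mu][b]*L3[c]
                                         for b in range(3) for c in range(3))

# [delta_1, delta_2] A = delta_3 A holds identically, i.e. off shell
for mu in range(2):
    for a in range(3):
        comm = second_var(L1, L2, mu, a) - second_var(L2, L1, mu, a)
        assert sp.expand(comm - var3(mu, a)) == 0
```

The cancellation of the $\mathcal{O}(g^{2})$ terms in the commutator is the Jacobi identity for $f^{abc}$; the composite parameter $\Lambda_3$ is field independent, which is the closure property required by the assumptions of the paper.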
A common background-preserving gauge fermion is $$\Psi =\int \bar{C}^{a}\left( -\frac{\lambda }{2}B^{a}+{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}\right) , \label{seeym}$$ and the gauge-fixed action $\hat{S}_{\text{gf}}=\hat{S}+\llbracket \hat{S},\Psi \rrbracket $ reads $$\hat{S}_{\text{gf}}=\hat{S}-\frac{\lambda }{2}\int (B^{a})^{2}+\int B^{a}{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}-\int \bar{C}^{a}{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }C^{a}+gf^{abc}A_{\mu }^{b}C^{c}).$$ Since the gauge fixing is linear in the quantum fields, the action $\hat{S}$ depends on the combination $K_{\mu }^{a}+{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }\bar{C}^{a}$ and not on $K_{\mu }^{a}$ and $\bar{C}^{a}$ separately. From now on we switch matter fields off, for simplicity, and set $\lambda =0$. We describe renormalization using the approach of this paper. 
First we concentrate on the standard power-counting renormalizable case, where $$S_{c}(A,g)=-\frac{1}{4}\int F_{\mu \nu }^{a}(A,g)F^{\mu \nu \hspace{0.01in}a}(A,g),\qquad \qquad F_{\mu \nu }^{a}(A,g)=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c}.$$ The key formula (\[key\]) gives $$\begin{aligned} \hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },K) &=&-\frac{Z}{4}\int F_{\mu \nu }^{a}(\hat{A}+{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },g)F^{\mu \nu \hspace{0.01in}a}(\hat{A}+{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },g)+\int \hat{B}^{a}{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }\hat{A}_{\mu }^{a}-\int \hat{B}^{a}\hat{K}_{\bar{C}}^{a} \nonumber \\ &&+\int \hspace{0.01in}(\hat{K}^{\mu a}+{\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }{\hat{\bar{C}}}^{a})({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }_{\mu }\hat{C}^{a}+gf^{abc}\hat{A}_{\mu }^{b}\hat{C}^{c})+\frac{1}{2}gf^{abc}\int \hat{C}^{b}\hat{C}^{c}\hat{K}_{C}^{a}, \label{sat}\end{aligned}$$ where $Z$ is a renormalization constant. The most general canonical transformation $\Phi ,K\rightarrow \hat{\Phi},\hat{K}$ that is compatible with power counting, global gauge invariance and ghost number conservation can be easily written down. 
Introducing unknown constants where necessary, we find that its generating functional has the form $$\begin{aligned} \hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },\hat{K}) &=&\int (Z_{A}^{1/2}A_{\mu }^{a}+{\mkern2mu\underline{\mkern-2mu\smash{Z}\mkern-2mu}\mkern2mu }_{A}^{1/2}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }^{a})\hat{K}^{\mu a}+\int Z_{C}^{1/2}C^{a}\hat{K}_{C}^{a}+\int Z_{\bar{C}}^{1/2}\bar{C}^{a}\hat{K}_{\bar{C}}^{a} \\ &&+\int (Z_{B}^{1/2}B^{a}+\alpha {\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}+\beta \partial ^{\mu }{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }^{a}+\gamma gf^{abc}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }^{\mu b}A_{\mu }^{c}+\delta gf^{abc}\bar{C}^{b}C^{c})\hat{K}_{B}^{a} \\ &&+\int \bar{C}^{a}(\zeta B^{a}+\xi {\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}+\eta \partial ^{\mu }{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }_{\mu }^{a}+\theta gf^{abc}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }^{\mu b}A_{\mu }^{c}+\chi gf^{abc}\bar{C}^{b}C^{c}) \\ &&+\int \sigma \hat{K}_{\bar{C}}^{a}\hat{K}_{B}^{a}+\int \tau gf^{abc}C^{a}\hat{K}_{B}^{b}\hat{K}_{B}^{c}.\end{aligned}$$ Inserting it in (\[sat\]) and using the nonrenormalization of the $B$ and $K_{\bar{C}}$-dependent terms, we find $\alpha =\beta =\gamma =\delta =\zeta =\theta =\chi =\sigma =\tau =0$ and $$\xi =1-Z_{\bar{C}}^{1/2}Z_{A}^{1/2},\qquad \eta =-Z_{\bar{C}}^{1/2}{\mkern2mu\underline{\mkern-2mu\smash{Z}\mkern-2mu}\mkern2mu }_{A}^{1/2},\qquad Z_{B}=Z_{\bar{C}}. \label{zeta}$$ It is easy to check that $Z_{\bar{C}}$ disappears from the right-hand side of (\[sat\]), so we can set $Z_{\bar{C}}=1$. 
Furthermore, we know that $\hat{S}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },K)$ is invariant under background transformations ($\llbracket \hat{S}_{R},\bar{S}\rrbracket =0$), which requires ${\mkern2mu\underline{\mkern-2mu\smash{Z}\mkern-2mu}\mkern2mu }_{A}=0$. Finally, the canonical transformation just reads $$\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },\hat{K})=\int Z_{A}^{1/2}A_{\mu }^{a}\hat{K}^{\mu a}+\int Z_{C}^{1/2}C^{a}\hat{K}_{C}^{a}+\int \bar{C}^{a}\hat{K}_{\bar{C}}^{a}+\int B^{a}\hat{K}_{B}^{a}+(1-Z_{A}^{1/2})\int \bar{C}^{a}({\mkern2mu\underline{\mkern-2mu\smash{D}\mkern-2mu}\mkern2mu }^{\mu }A_{\mu }^{a}),$$ which contains the right number of independent renormalization constants and is of the form (\[frhat\]). Defining $Z_{g}=Z^{-1/2}$ and $Z_{A}^{\prime }=ZZ_{A}$ we can describe renormalization in a more standard way. Writing $$\hat{S}_{R}(0,\hat{A}+{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },0)=-\frac{1}{4}\int F_{\mu \nu }^{a}(Z_{A}^{\prime \hspace{0.01in}1/2}A+Z_{g}^{-1}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },gZ_{g})F^{\mu \nu \hspace{0.01in}a}(Z_{A}^{\prime \hspace{0.01in}1/2}A+Z_{g}^{-1}{\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu },gZ_{g}),$$ we see that $Z_{g}$ is the usual gauge-coupling renormalization constant, while $Z_{A}^{\prime }$ and $Z_{g}^{-2}$ are the wave-function renormalization constants of the quantum gauge field $A$ and the background gauge field ${\mkern2mu\underline{\mkern-2mu\smash{A}\mkern-2mu}\mkern2mu }$, respectively. We remark that the local divergent canonical transformation $\Phi ,K\rightarrow \hat{\Phi},\hat{K}$ corresponds to a highly nontrivial, convergent but non-local canonical transformation at the level of the $\Gamma $ functional. If the theory is not power-counting renormalizable, then we need to consider the most general classical action, equal to the right-hand side of (\[scgen\]). 
Counterterms include vertices with arbitrary numbers of external $\Phi $ and $K$ legs. Nevertheless, the key formula (\[key\]) ensures that the renormalized action $\hat{S}_{R}$ remains exactly the same, up to parameter redefinitions and a canonical transformation. The only difference is that now even the canonical transformation $\hat{F}_{R}(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\phi }\mkern-2mu}\mkern2mu },\hat{K})$ of (\[frhat\]) becomes nonpolynomial and highly nontrivial. Quantum gravity --------------- Having written detailed formulas for Yang-Mills theory, in the case of quantum gravity we can just outline the key ingredients. In particular, we stress that the linearity assumption is satisfied both in the first-order and second-order formalisms, both using the metric $g_{\mu \nu }$ and the vielbein $e_{\mu }^{a}$. For example, using the second-order formalism and the vielbein, the symmetry transformations are encoded in the expressions $$\begin{aligned} -\int R^{\alpha }(\Phi )K_{\alpha } &=&\int (e_{\rho }^{a}\partial _{\mu }C^{\rho }+C^{\rho }\partial _{\rho }e_{\mu }^{a}+C^{ab}e_{\mu b})K_{a}^{\mu }+\int C^{\rho }(\partial _{\rho }C^{\mu })K_{\mu }^{C} \\ &&+\int (C^{ac}\eta _{cd}C^{db}+C^{\rho }\partial _{\rho }C^{ab})K_{ab}^{C}-\int B_{\mu }K_{\bar{C}}^{\mu }-\int B_{ab}K_{\bar{C}}^{ab}, \\ -\int \bar{R}^{\alpha }(\Phi )K_{\alpha } &=&-\int R^{\alpha }(\Phi )K_{\alpha }-\int (\bar{C}_{\rho }\partial _{\mu }C^{\rho }-C^{\rho }\partial _{\rho }\bar{C}_{\mu })K_{\bar{C}}^{\mu }+\int \left( B_{\rho }\partial _{\mu }C^{\rho }+C^{\rho }\partial _{\rho }B_{\mu }\right) K_{B}^{\mu },\end{aligned}$$ in the minimal and nonminimal cases, respectively, where $C^{\mu }$ are the ghosts of diffeomorphisms, $C^{ab}$ are the Lorentz ghosts and $\eta _{ab}$ is the flat-space metric. We see that both $R^{\alpha }(\Phi )$ and $\bar{R}^{\alpha }(\Phi )$ are at most quadratic in $\Phi $. 
Matter fields are also fine, since vectors $A_{\mu }$, fermions $\psi $ and scalars $\varphi $ contribute with $$\begin{aligned} &&-\int (\partial _{\mu }C^{a}+gf^{abc}A_{\mu }^{b}C^{c}-C^{\rho }\partial _{\rho }A_{\mu }^{a}-A_{\rho }^{a}\partial _{\mu }C^{\rho })K_{A}^{\mu a}+\int \left( C^{\rho }\partial _{\mu }C^{a}+\frac{1}{2}gf^{abc}C^{b}C^{c}\right) K_{C}^{a} \\ &&\qquad +\int C^{\rho }(\partial _{\rho }\varphi )K_{\varphi }+\int C^{\rho }(\partial _{\rho }\bar{\psi})K_{\psi }-\frac{i}{4}\int \bar{\psi}\sigma ^{ab}C_{ab}K_{\psi }+\int K_{\bar{\psi}}C^{\rho }(\partial _{\rho }\psi )-\frac{i}{4}\int K_{\bar{\psi}}\sigma ^{ab}C_{ab}\psi ,\end{aligned}$$ where $\sigma ^{ab}=i[\gamma ^{a},\gamma ^{b}]/2$. Expanding around flat space, common linear gauge-fixing conditions for diffeomorphisms and local Lorentz symmetry are $\eta ^{\mu \nu }\partial _{\mu }e_{\nu }^{a}=\xi \eta ^{a\mu }\partial _{\mu }e_{\nu }^{b}\delta _{b}^{\nu }$, $e_{\mu }^{a}=e_{\nu }^{b}\eta _{b\mu }\eta ^{\nu a}$, respectively. In the first-order formalism we just need to add the transformation of the spin connection $\omega _{\mu }^{ab}$, encoded in $$\int (C^{\rho }\partial _{\rho }\omega _{\mu }^{ab}+\omega _{\rho }^{ab}\partial _{\mu }C^{\rho }-\partial _{\mu }C^{ab}+C^{ac}\eta _{cd}\omega _{\mu }^{db}-\omega _{\mu }^{ac}\eta _{cd}C^{db})K_{ab}^{\mu }.$$ Moreover, in this case we can also gauge-fix local Lorentz symmetry with the linear gauge-fixing condition $\eta ^{\mu \nu }\partial _{\mu }\omega _{\nu }^{ab}=0$, instead of $e_{\mu }^{a}=e_{\nu }^{b}\eta _{b\mu }\eta ^{\nu a}$. We see that all gauge symmetries that are known to have physical interest satisfy the linearity assumption, together with irreducibility and off-shell closure. On the other hand, more speculative symmetries (such as local supersymmetry) do not satisfy those assumptions in an obvious way. 
When auxiliary fields are introduced to achieve off-shell closure, some symmetry transformations (typically, those of auxiliary fields) are nonlinear [@superg]. The relevance of this issue is already known in the literature. For example, in ref. [@superspace] it is explained that in supersymmetric theories the standard background field method cannot be applied, precisely because the symmetry transformations are nonlinear. It is argued that the linearity assumption is tied to the linear splitting $\Phi \rightarrow \Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ between quantum fields $\Phi $ and background fields ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$, and that the problem of supersymmetric theories can be avoided with a nonlinear splitting of the form $\Phi \rightarrow \Phi +{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ $+$ nonlinear corrections. Perhaps it is possible to generalize the results of this paper to supergravity following those guidelines. From our viewpoint, the crucial property is that the background transformations of the quantum fields are linear in the quantum fields themselves, because then they do not renormalize. An alternative strategy, not bound to supersymmetric theories, is that of introducing (possibly infinitely many) auxiliary fields, replacing every nonlinear term appearing in the symmetry transformations with a new field $N $, and then proceeding similarly with the $N$ transformations and the closure relations, till all functions $R^{\alpha }$ are at most quadratic. The natural framework for this kind of job is the one of refs. [@fieldcov; @masterf; @mastercan], where the fields $N$ are dual to the sources $L$ coupled to composite fields. 
Using that approach the Batalin-Vilkovisky formalism can be extended to the composite-field sector of the theory and all perturbative canonical transformations can be studied as true changes of field variables in the functional integral, instead of mere replacements of integrands. For reasons of space, though, we cannot pursue this strategy here. The quest for parametric completeness: Where we stand now ========================================================= In this section we make remarks about the problem of parametric completeness in general gauge theories and recapitulate where we stand now on this issue. To begin with, consider non-Abelian Yang-Mills theory as a deformation of its Abelian limit. The minimal solution $S(g)$ of the master equation $(S(g),S(g))=0$ reads $$S(g)=-\frac{1}{4}\int F_{\mu \nu }^{a}F^{\mu \nu \hspace{0.01in}a}+\int K^{\mu a}\partial _{\mu }C^{a}+gf^{abc}\int \left( K^{\mu a}A_{\mu }^{b}+\frac{1}{2}K_{C}^{a}C^{b}\right) C^{c}.$$ Differentiating the master equation with respect to $g$ and setting $g=0$, we find $$(S,S)=0,\qquad (S,\omega )=0,\qquad S=S(0),\qquad \omega =\left. \frac{\mathrm{d}S(g)}{\mathrm{d}g}\right| _{g=0}.$$ On the other hand, we can easily prove that there exists no local functional $\chi $ such that $\omega =(S,\chi )$. Thus, we can say that $\omega $ is a nontrivial solution of the cohomological problem associated with an Abelian Yang-Mills theory that contains a suitable number of photons [@regnocoho]. Nevertheless, renormalization cannot turn $\omega $ on as a counterterm, because $S(0)$ is a free field theory. Even if we couple the theory to gravity and assume that massive fermions are present (which allows us to construct dimensionless parameters multiplying masses with the Newton constant), radiative corrections cannot dynamically “un-Abelian-ize” the theory, namely convert an Abelian theory into a non-Abelian one. 
One way to prove this fact is to note that the dependence on gauge fields is even at $g=0$, but not at $g\neq 0$. The point is, however, that cohomology *per se* is unable to prove it. Other properties must be invoked, such as the discrete symmetry just mentioned. In general, we cannot rely on cohomology only, and the possibility that gauge symmetries may be dynamically deformed in nontrivial and observable ways remains open. In ref. [@regnocoho] the issue of parametric completeness was studied in general terms. In that approach, which applies to all theories that are manifestly free of gauge anomalies, renormalization triggers an automatic parametric extension till the classical action becomes parametrically complete. The results of ref. [@regnocoho] leave the door open to dynamically induced nontrivial deformations of the gauge symmetry. Instead, the results found here close that door in all cases where they apply, which means manifestly nonanomalous irreducible gauge symmetries that close off shell and satisfy the linearity assumption. The reason is – we stress it again – that by formulas (\[kk\]) and (\[key\]) all dynamically induced deformations can be organized into parameter redefinitions and canonical transformations. As far as we know now, gauge symmetries can still be dynamically deformed in observable ways in theories that do not satisfy the assumptions of this paper. Supergravities are natural candidates to provide explicit examples. Conclusions =========== The background field method and the Batalin-Vilkovisky formalism are convenient tools to quantize general gauge field theories. In this paper we have merged the two to rephrase and generalize known results about renormalization, and to study parametric completeness. Our approach applies when gauge anomalies are manifestly absent, the gauge algebra is irreducible and closes off shell, the gauge transformations are linear functions of the fields, and closure is field independent. 
These assumptions are sufficient to include the gauge symmetries we need for physical applications, such as Abelian and non-Abelian Yang-Mills symmetries, local Lorentz symmetry and general changes of coordinates, but exclude other potentially interesting symmetries, such as local supersymmetry. Both renormalizable and nonrenormalizable theories are covered, such as QED, non-Abelian Yang-Mills theories, quantum gravity and Lorentz-violating gauge theories, as well as effective and higher-derivative models, in arbitrary dimensions, and also extensions obtained by adding any set of composite fields. At the same time, chiral theories, and therefore the Standard Model, possibly coupled with quantum gravity, require the analysis of anomaly cancellation and the Adler-Bardeen theorem, which we postpone to a future investigation. The fact that supergravities are left out from the start, on the other hand, suggests that there should exist either a no-go theorem or a more advanced framework. At any rate, we are convinced that our formalism is helpful to understand several properties better and address unsolved problems. We have studied gauge dependence in detail, and renormalized the canonical transformation that continuously interpolates between the background field approach and the usual approach. Relating the two approaches, we have proved parametric completeness without making use of cohomological classifications. The outcome is that in all theories that satisfy our assumptions renormalization cannot hide any surprises; namely the gauge symmetry remains essentially the same throughout the quantization process. In the theories that do not satisfy our assumptions, instead, the gauge symmetry could be dynamically deformed in physically observable ways. It would be remarkable if we discovered explicit examples of theories where this sort of “dynamical creation” of gauge symmetries actually takes place. 
[**Acknowledgments**]{} The investigation of this paper was carried out as part of a program to complete the book [@webbook], which will be available at [`Renormalization.com`](http://renormalization.com) once completed. Appendix ======== In this appendix we prove several theorems and identities that are used in the paper. We use the Euclidean notation and the dimensional-regularization technique, which guarantees, in particular, that the functional integration measure is invariant under perturbatively local changes of field variables. The generating functionals $Z$ and $W$ are defined from $$Z(J,K,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })=\int [\mathrm{d}\Phi ]\hspace{0.01in}\exp (-S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })+\int \Phi ^{\alpha }J_{\alpha })=\exp W(J,K,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }), \label{defa}$$ and $\Gamma (\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ $=-W+\int \Phi ^{\alpha }J_{\alpha }$ is the $W$ Legendre transform. Averages denote the sums of connected diagrams (e.g. $\langle A(x)B(y)\rangle =\langle A(x)B(y)\rangle _{\text{nc}}-\langle A(x)\rangle \langle B(y)\rangle $, where $\langle A(x)B(y)\rangle _{\text{nc}}$ includes disconnected diagrams). 
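These conventions can be made concrete in a zero-dimensional toy model (a single Gaussian integral standing in for the functional integral; the toy model and the use of sympy are our illustrative assumptions, not part of the paper). There $W=\log Z$ visibly generates connected averages, $\langle \phi\phi\rangle = \langle \phi\phi\rangle_{\text{nc}} - \langle\phi\rangle^2$:

```python
import sympy as sp

# Zero-dimensional toy model: one bosonic variable phi, free action S = phi^2/2.
phi, J = sp.symbols('phi J', real=True)
S = phi**2 / 2

# Generating functionals Z(J) and W(J) = log Z(J)
Z = sp.integrate(sp.exp(-S + J*phi), (phi, -sp.oo, sp.oo))
W = sp.log(Z)

# Averages at fixed J
one_pt = sp.diff(W, J)                 # <phi>, from the first derivative of W
two_pt_conn = sp.diff(W, J, 2)         # connected <phi phi>, second derivative
two_pt_nc = sp.simplify(
    sp.integrate(phi**2 * sp.exp(-S + J*phi), (phi, -sp.oo, sp.oo)) / Z
)                                      # non-connected <phi phi>_nc

# <AB> = <AB>_nc - <A><B> for A = B = phi
assert sp.simplify(two_pt_conn - (two_pt_nc - one_pt**2)) == 0
```

In this Gaussian toy the connected two-point function is $J$-independent, while the non-connected one is not; the subtraction of $\langle\phi\rangle^2$ removes exactly the disconnected piece, which is the content of the convention stated above.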
Moreover, the average $\langle X\rangle $ of a local functional $X $ can be viewed as a functional of $\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu } $ (in which case it collects one-particle irreducible diagrams) or a functional of $J,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$. When we need to distinguish the two options we specify whether $\Phi $ or $J$ are kept constant in functional derivatives. First we work in the usual (non-background field) framework; then we generalize the results to the background field method. To begin with, we recall a property that is true even when the action $S(\Phi ,K)$ does not satisfy the master equation. The identity $(\Gamma ,\Gamma )=\langle (S,S)\rangle $ holds. \[th0\] *Proof*. Applying the change of field variables $$\Phi ^{\alpha }\rightarrow \Phi ^{\alpha }+\theta (S,\Phi ^{\alpha }) \label{chv}$$ to (\[defa\]), where $\theta $ is a constant anticommuting parameter, we obtain $$\theta \int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\frac{\delta _{l}S}{\delta \Phi ^{\alpha }}\right\rangle -\theta \int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\right\rangle J_{\alpha }=0,$$ whence $$\frac{1}{2}\langle (S,S)\rangle =-\int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\frac{\delta _{l}S}{\delta \Phi ^{\alpha }}\right\rangle =-\int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\right\rangle J_{\alpha }=\int \frac{\delta _{r}W}{\delta K_{\alpha }}\frac{\delta _{l}\Gamma }{\delta \Phi ^{\alpha }}=-\int \frac{\delta _{r}\Gamma }{\delta K_{\alpha }}\frac{\delta _{l}\Gamma }{\delta \Phi ^{\alpha }}=\frac{1}{2}(\Gamma ,\Gamma ).$$ Now we prove results for an action $S$ that satisfies the master equation $(S,S)=0$. If $(S,S)=0$ then $(\Gamma ,\langle X\rangle )=\langle (S,X)\rangle $ for every local functional $X$. 
\[theorem2\] *Proof*. Applying the change of field variables (\[chv\]) to $$\langle X\rangle =\frac{1}{Z(J,K)}\int [\mathrm{d}\Phi ]\hspace{0.01in}X\exp (-S+\int \Phi ^{\alpha }J_{\alpha }),$$ and using $(S,S)=0$ we obtain $$\int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\frac{\delta _{l}X}{\delta \Phi ^{\alpha }}\right\rangle =(-1)^{\varepsilon _{X}+1}\int \left\langle X\frac{\delta _{r}S}{\delta K_{\alpha }}\right\rangle \frac{\delta _{l}\Gamma }{\delta \Phi ^{\alpha }}, \label{r1}$$ where $\varepsilon _{X}$ denotes the statistics of the functional $X$ (equal to 0 if $X$ is bosonic, 1 if it is fermionic, modulo 2). Moreover, we also have $$\int \left\langle \frac{\delta _{r}S}{\delta \Phi ^{\alpha }}\frac{\delta _{l}X}{\delta K_{\alpha }}\right\rangle =\int \frac{\delta _{r}\Gamma }{\delta \Phi ^{\alpha }}\left\langle \frac{\delta _{l}X}{\delta K_{\alpha }}\right\rangle , \label{r2}$$ which can be proved starting from the expression on the left-hand side and integrating by parts. In the derivation we use the fact that since $X$ is local, $\delta _{r}\delta _{l}X/(\delta \Phi ^{\alpha }\delta K_{\alpha })$ is set to zero by the dimensional regularization, which kills the $\delta (0) $s and their derivatives. Next, straightforward differentiations give $$\begin{aligned} \left. \frac{\delta _{l}\langle X\rangle }{\delta K_{\alpha }}\right| _{J} &=&\left\langle \frac{\delta _{l}X}{\delta K_{\alpha }}\right\rangle -\left\langle \frac{\delta _{l}S}{\delta K_{\alpha }}X\right\rangle \label{r3} \\ &=&\left. \frac{\delta _{l}\langle X\rangle }{\delta K_{\alpha }}\right| _{\Phi }-\int \left. \frac{\delta _{l}J_{\beta }}{\delta K_{\alpha }}\right| _{\Phi }\left. \frac{\delta _{l}\langle X\rangle }{\delta J_{\beta }}\right| _{K}. 
\label{r4}\end{aligned}$$ At this point, using (\[r1\])-(\[r4\]) and $(J_{\alpha},\Gamma )=0$ (which can be proved by differentiating $(\Gamma ,\Gamma )=0$ with respect to $\Phi ^{\alpha}$), we derive $(\Gamma ,\langle X\rangle )=\langle (S,X)\rangle $. If $(S,S)=0$ and $$\frac{\partial S}{\partial \xi }=(S,X), \label{bbug}$$ where $X$ is a local functional and $\xi $ is a parameter, then $$\frac{\partial \Gamma }{\partial \xi }=(\Gamma ,\langle X\rangle ). \label{pprove}$$ \[bbugc\] *Proof*. Using theorem \[theorem2\] we have $$\frac{\partial \Gamma }{\partial \xi }=-\frac{\partial W}{\partial \xi }=\langle \frac{\partial S}{\partial \xi }\rangle =\langle (S,X)\rangle =(\Gamma ,\langle X\rangle ).$$ Now we derive results that hold even when $S$ does not satisfy the master equation. \[blabla\]The identity $$(\Gamma ,\langle X\rangle )=\langle (S,X)\rangle -\frac{1}{2}\langle (S,S)X\rangle _{\Gamma } \label{prove0}$$ holds, where $X$ is a generic local functional and $\langle AB\cdots Z\rangle _{\Gamma }$ denotes the set of connected, one-particle irreducible diagrams with one insertion of $A$, $B$, $\ldots Z$. This theorem is a generalization of theorem \[theorem2\]. It is proved by repeating the derivation without using $\left( S,S\right) =0$. First, observe that formula (\[r1\]) generalizes to $$\int \left\langle \frac{\delta _{r}S}{\delta K_{\alpha }}\frac{\delta _{l}X}{\delta \Phi ^{\alpha }}\right\rangle =(-1)^{\varepsilon _{X}+1}\int \left\langle X\frac{\delta _{r}S}{\delta K_{\alpha }}\right\rangle \frac{\delta _{l}\Gamma }{\delta \Phi ^{\alpha }}-\frac{1}{2}\langle (S,S)X\rangle . \label{r11}$$ On the other hand, formula (\[r2\]) remains the same, as well as (\[r3\]) and (\[r4\]). We have $$\left( \Gamma ,\langle X\rangle \right) =\langle (S,X)\rangle -\frac{1}{2}\langle (S,S)X\rangle +\int \frac{\delta _{r}\Gamma }{\delta \Phi ^{\alpha }}\left. \frac{\delta _{l}J_{\beta }}{\delta K_{\alpha }}\right| _{\Phi }\left. 
\frac{\delta _{l}\langle X\rangle }{\delta J_{\beta }}\right| _{K}-\int \frac{\delta _{r}\Gamma }{\delta K_{\alpha }}\left. \frac{\delta _{l}\langle X\rangle }{\delta \Phi ^{\alpha }}\right| _{K}.$$ Differentiating $(\Gamma ,\Gamma )$ with respect to $\Phi ^{\alpha}$ we get $$\frac{1}{2}\frac{\delta _{r}(\Gamma ,\Gamma )}{\delta \Phi ^{\alpha}}=\frac{1}{2}\frac{\delta _{l}(\Gamma ,\Gamma )}{\delta \Phi ^{\alpha}}=(J_{\alpha },\Gamma )=(-1)^{\varepsilon _{\alpha }}(\Gamma ,J_{\alpha }),$$ where $\varepsilon _{\alpha }$ is the statistics of $\Phi ^{\alpha }$. Using $(\Gamma ,\Gamma )=\langle (S,S)\rangle $ we finally obtain $$\left( \Gamma ,\langle X\rangle \right) =\langle (S,X)\rangle -\frac{1}{2}\langle (S,S)X\rangle +\frac{1}{2}\int (-1)^{\varepsilon _{\alpha }}\frac{\delta _{r}\langle (S,S)\rangle }{\delta \Phi ^{\alpha }}\left. \frac{\delta _{l}\langle X\rangle }{\delta J_{\alpha }}\right| _{K}. \label{allo}$$ The set of irreducible diagrams contained in $\langle A\hspace{0.01in}B\rangle $, where $A$ and $B$ are local functionals, is given by the formula $$\langle A\hspace{0.01in}B\rangle _{\Gamma }=\langle AB\rangle -\{\langle A\rangle ,\langle B\rangle \}, \label{oo}$$ where $\{X,Y\}$ are the “mixed brackets” [@BV2] $$\{X,Y\}\equiv \int \frac{\delta _{r}X}{\delta \Phi ^{\alpha }}\langle \Phi ^{\alpha }\Phi ^{\beta }\rangle \frac{\delta _{l}Y}{\delta \Phi ^{\beta }}=\int \frac{\delta _{r}X}{\delta \Phi ^{\alpha }}\frac{\delta _{r}\delta _{r}W}{\delta J_{\beta }\delta J_{\alpha }}\frac{\delta _{l}Y}{\delta \Phi ^{\beta }}=\int \left. \frac{\delta _{r}X}{\delta J_{\alpha }}\right| _{K}\frac{\delta _{l}Y}{\delta \Phi ^{\alpha }}, \label{mixed brackets}$$ $X$ and $Y$ being functionals of $\Phi $ and $K$. Indeed, $\{\langle A\rangle ,\langle B\rangle \}$ is precisely the set of diagrams in which the $A$ and $B$ insertions are connected in a one-particle reducible way. Thus, formula (\[allo\]) coincides with (\[prove0\]). 
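For completeness, we spell out the first two equalities used in the proof of corollary \[bbugc\]; they are standard properties of the Legendre transform (for simplicity we suppress the statistics-dependent sign factors and the spectator variables ${\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$). Differentiating $\Gamma =-W+\int \Phi ^{\alpha }J_{\alpha }$ with respect to $\xi $ at fixed $\Phi $ and $K$, where $J$ is viewed as a function of $\Phi $, $K$ and $\xi $, we find $$\left. \frac{\partial \Gamma }{\partial \xi }\right| _{\Phi ,K}=-\left. \frac{\partial W}{\partial \xi }\right| _{J,K}-\int \frac{\delta _{r}W}{\delta J_{\alpha }}\left. \frac{\partial J_{\alpha }}{\partial \xi }\right| _{\Phi }+\int \Phi ^{\alpha }\left. \frac{\partial J_{\alpha }}{\partial \xi }\right| _{\Phi }=-\left. \frac{\partial W}{\partial \xi }\right| _{J,K},$$ since $\Phi ^{\alpha }=\delta _{r}W/\delta J_{\alpha }$, while $-\partial W/\partial \xi =\langle \partial S/\partial \xi \rangle $ follows by differentiating (\[defa\]) at fixed $J$ and $K$.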
Using (\[prove0\]) we also have the identity $$\frac{\partial \Gamma }{\partial \xi }-\left( \Gamma ,\langle X\rangle \right) =\left\langle \frac{\partial S}{\partial \xi }-\left( S,X\right) \right\rangle +\frac{1}{2}\left\langle \left( S,S\right) \hspace{0.01in}X\right\rangle _{\Gamma }, \label{provee}$$ which generalizes corollary \[bbugc\]. Now we switch to the background field method. We begin by generalizing theorem \[th0\]. If the action $S(\Phi ,{\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu },K,{\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu })$ is such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent, the identity $$\llbracket \Gamma ,\Gamma \rrbracket =\langle \llbracket S,S\rrbracket \rangle$$ holds. \[thb\] *Proof*. Since $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent we have $\delta _{l}\Gamma /\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }=\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$. 
Using theorem \[th0\] we find $$\llbracket \Gamma ,\Gamma \rrbracket =(\Gamma ,\Gamma )+2\int \frac{\delta _{r}\Gamma }{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\frac{\delta _{l}\Gamma }{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}=\langle (S,S)\rangle +2\int \langle \frac{\delta _{r}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\rangle \frac{\delta _{l}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}=\langle (S,S)\rangle +2\int \langle \frac{\delta _{r}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\frac{\delta _{l}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }}\rangle =\langle \llbracket S,S\rrbracket \rangle .$$ Next, we mention the useful identity $$\left. \frac{\delta _{l}\langle X\rangle }{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\right| _{\Phi }=\left\langle \frac{\delta _{l}X}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}\right\rangle -\left\langle \frac{\delta _{l}S}{\delta {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }^{\alpha }}X\right\rangle _{\Gamma }, \label{dera}$$ which holds for every local functional $X$. It can be proved by taking (\[r3\])–(\[r4\]) with $K\rightarrow {\mkern2mu\underline{\mkern-2mu\smash{\Phi }\mkern-2mu}\mkern2mu }$ and using (\[oo\])–(\[mixed brackets\]). Mimicking the proof of theorem \[thb\] and using (\[dera\]), it is easy to prove that theorem \[blabla\] implies the identity $$\llbracket \Gamma ,\langle X\rangle \rrbracket =\langle \llbracket S,X\rrbracket \rangle -\frac{1}{2}\langle \llbracket S,S\rrbracket X\rangle _{\Gamma }, \label{bb2}$$ for every ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent local functional $X$. Thus we have the following property. 
The identity $$\frac{\partial \Gamma }{\partial \xi }-\llbracket \Gamma ,\langle X\rangle \rrbracket =\left\langle \frac{\partial S}{\partial \xi }-\llbracket S,X\rrbracket \right\rangle +\frac{1}{2}\langle \llbracket S,S\rrbracket X\rangle _{\Gamma } \label{proveg}$$ holds for every action $S$ such that $\delta _{l}S/\delta {\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }_{\alpha }$ is $\Phi $ independent, for every ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent local functional $X$ and for every parameter $\xi $. \[cora\] If the action $S$ has the form (\[assu\]) and $X$ is also ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$ independent, applying (\[backghost\]) to (\[bb2\]) we obtain $$\llbracket \hat{\Gamma},\langle X\rangle \rrbracket =\langle \llbracket \hat{S},X\rrbracket \rangle -\frac{1}{2}\langle \llbracket \hat{S},\hat{S}\rrbracket X\rangle _{\Gamma },\qquad \llbracket \bar{S},\langle X\rangle \rrbracket =\langle \llbracket \bar{S},X\rrbracket \rangle -\langle \llbracket \bar{S},\hat{S}\rrbracket X\rangle _{\Gamma },\qquad \langle \llbracket \bar{S},\bar{S}\rrbracket X\rangle _{\Gamma }=0, \label{blablaback}$$ which imply the following statement. If $S$ satisfies the assumptions of (\[assu\]), $\llbracket \bar{S},X\rrbracket =0$ and $\llbracket \bar{S},\hat{S}\rrbracket =0$, where $X$ is a ${\mkern2mu\underline{\mkern-2mu\smash{C}\mkern-2mu}\mkern2mu }$- and ${\mkern2mu\underline{\mkern-2mu\smash{K}\mkern-2mu}\mkern2mu }$-independent local functional, then $\llbracket \bar{S},\langle X\rangle \rrbracket =0$. \[corolla\] Finally, we recall a result derived in ref. [@removal]. 
If $\Phi ,K\rightarrow \Phi ^{\prime },K^{\prime }$ is a canonical transformation generated by $F(\Phi ,K^{\prime })$, and $\chi (\Phi ,K)$ is a functional behaving as a scalar (that is to say $\chi ^{\prime }(\Phi ^{\prime },K^{\prime })=\chi (\Phi ,K)$), then $$\frac{\partial \chi ^{\prime }}{\partial \varsigma }=\frac{\partial \chi }{\partial \varsigma }-(\chi ,\tilde{F}_{\varsigma }) \label{thesis}$$ for every parameter $\varsigma $, where $\tilde{F}_{\varsigma }(\Phi ,K)\equiv F_{\varsigma }(\Phi ,K^{\prime }(\Phi ,K))$ and $F_{\varsigma }(\Phi ,K^{\prime })=\partial F/\partial \varsigma $. \[theorem5\] *Proof*. When we do not specify the variables that are kept constant in partial derivatives, it is understood that they are the natural variables. Thus $F$, $\Phi ^{\prime }$ and $K$ are functions of $\Phi ,K^{\prime }$, while $\chi $ and $\tilde{F}_{\varsigma }$ are functions of $\Phi ,K$ and $\chi ^{\prime }$ is a function of $\Phi ^{\prime },K^{\prime }$. It is useful to write down the differentials of $\Phi ^{\prime }$ and $K$, which are [@vanproeyen] $$\begin{aligned} \mathrm{d}\Phi ^{\prime \hspace{0.01in}\alpha } &=&\int \frac{\delta _{l}\delta F}{\delta K_{\alpha }^{\prime }\delta \Phi ^{\beta }}\mathrm{d}\Phi ^{\beta }+\int \frac{\delta _{l}\delta F}{\delta K_{\alpha }^{\prime }\delta K_{\beta }^{\prime }}\mathrm{d}K_{\beta }^{\prime }+\frac{\partial \Phi ^{\prime \hspace{0.01in}\alpha }}{\partial \varsigma }\mathrm{d}\varsigma , \nonumber \\ \mathrm{d}K_{\alpha } &=&\int \mathrm{d}\Phi ^{\beta }\frac{\delta _{l}\delta F}{\delta \Phi ^{\beta }\delta \Phi ^{\alpha }}+\int \mathrm{d}K_{\beta }^{\prime }\frac{\delta _{l}\delta F}{\delta K_{\beta }^{\prime }\delta \Phi ^{\alpha }}+\frac{\partial K_{\alpha }}{\partial \varsigma }\mathrm{d}\varsigma . 
\label{differentials}\end{aligned}$$ Differentiating $\chi ^{\prime }(\Phi ^{\prime },K^{\prime })=\chi (\Phi ,K)$ with respect to $\varsigma $ at constant $\Phi ^{\prime }$ and $K^{\prime }$, we get $$\frac{\partial \chi ^{\prime }}{\partial \varsigma }=\frac{\partial \chi }{\partial \varsigma }+\int \frac{\delta _{r}\chi }{\delta \Phi ^{\alpha }}\left. \frac{\partial \Phi ^{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}+\int \frac{\delta _{r}\chi }{\delta K_{\alpha }}\left. \frac{\partial K_{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}. \label{sigmaprimosue2}$$ Formulas (\[differentials\]) allow us to write $$\frac{\partial \Phi ^{\prime \hspace{0.01in}\alpha }}{\partial \varsigma }=-\int \frac{\delta _{l}\delta F}{\delta K_{\alpha }^{\prime }\delta \Phi ^{\beta }}\left. \frac{\partial \Phi ^{\beta }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }},\qquad \frac{\delta _{l}\delta F}{\delta K_{\alpha }^{\prime }\delta \Phi ^{\beta }}=\left. \frac{\delta _{l}K_{\beta }}{\delta K_{\alpha }^{\prime }}\right| _{\Phi ,\varsigma },$$ and therefore we have $$\frac{\delta \tilde{F}_{\varsigma }}{\delta K_{\alpha }}=\int \left. \frac{\delta _{l}K_{\beta }^{\prime }}{\delta K_{\alpha }}\right| _{\Phi ,\varsigma }\frac{\partial \Phi ^{\prime \hspace{0.01in}\beta }}{\partial \varsigma }=-\int \left. \frac{\delta _{l}K_{\beta }^{\prime }}{\delta K_{\alpha }}\right| _{\Phi ,\varsigma }\left. \frac{\delta _{l}K_{\gamma }}{\delta K_{\beta }^{\prime }}\right| _{\Phi ,\varsigma }\left. \frac{\partial \Phi ^{\gamma }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}=-\left. \frac{\partial \Phi ^{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}. \label{div1}$$ Following analogous steps, we also find $$\frac{\delta \tilde{F}_{\varsigma }}{\delta \Phi ^{\alpha }}=\frac{\partial K_{\alpha }}{\partial \varsigma }+\int \left. 
\frac{\delta _{l}K_{\beta }^{\prime }}{\delta \Phi ^{\alpha }}\right| _{K,\varsigma }\frac{\partial \Phi ^{\prime \hspace{0.01in}\beta }}{\partial \varsigma },\qquad \frac{\partial K_{\alpha }}{\partial \varsigma }=\left. \frac{\partial K_{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}-\int \frac{\delta _{l}\delta F}{\delta \Phi ^{\alpha }\delta \Phi ^{\beta }}\left. \frac{\partial \Phi ^{\beta }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }},$$ whence $$\left. \frac{\partial K_{\alpha }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}=\frac{\delta \tilde{F}_{\varsigma }}{\delta \Phi ^{\alpha }}+\int \left( \frac{\delta _{l}K_{\gamma }}{\delta \Phi ^{\alpha }}+\left. \frac{\delta _{l}K_{\beta }^{\prime }}{\delta \Phi ^{\alpha }}\right| _{K,\varsigma }\frac{\delta _{l}K_{\gamma }}{\delta K_{\beta }^{\prime }}\right) \left. \frac{\partial \Phi ^{\gamma }}{\partial \varsigma }\right| _{\Phi ^{\prime },K^{\prime }}=\frac{\delta \tilde{F}_{\varsigma }}{\delta \Phi ^{\alpha }}. \label{div2}$$ This formula, together with (\[div1\]), allows us to rewrite (\[sigmaprimosue2\]) in the form (\[thesis\]). [99]{} B.S. De Witt, Quantum theory of gravity. II. The manifestly covariant theory, Phys. Rev. 162 (1967) 1195. L.F. Abbott, The background field method beyond one loop, Nucl. Phys. B 185 (1981) 189. I.A. Batalin and G.A. Vilkovisky, Gauge algebra and quantization, Phys. Lett. B 102 (1981) 27-31; I.A. Batalin and G.A. Vilkovisky, Quantization of gauge theories with linearly dependent generators, Phys. Rev. D 28 (1983) 2567, Erratum-ibid. D 30 (1984) 508; see also S. Weinberg, *The quantum theory of fields*, vol. II, Cambridge University Press, Cambridge 1995. D. Anselmi, Renormalization of gauge theories without cohomology, Eur. Phys. J. C73 (2013) 2508, [13A1 Renormalization.com](http://renormalization.com/13a1/), arXiv:1301.7577 \[hep-th\]. G. Barnich, F. Brandt, M. Henneaux, Local BRST cohomology in the antifield formalism. I. 
General theorems, Commun. Math. Phys. 174 (1995) 57 and arXiv:hep-th/9405109; G. Barnich, F. Brandt, M. Henneaux, Local BRST cohomology in the antifield formalism. II. Application to Yang-Mills theory, Commun. Math. Phys. 174 (1995) 116 and arXiv:hep-th/9405194; G. Barnich, F. Brandt, M. Henneaux, General solution of the Wess-Zumino consistency condition for Einstein gravity, Phys. Rev. D 51 (1995) R1435 and arXiv:hep-th/9409104; S.D. Joglekar and B.W. Lee, General theory of renormalization of gauge invariant operators, Ann. Phys. (NY) 97 (1976) 160. D. Anselmi and M. Halat, Renormalization of Lorentz violating theories, Phys. Rev. D 76 (2007) 125011 and arXiv:0707.2480 \[hep-th\]; D. Anselmi, Weighted power counting and Lorentz violating gauge theories. I. General properties, Ann. Phys. 324 (2009) 874, [08A2 Renormalization.com](http://renormalization.com/08a2/) and arXiv:0808.3470 \[hep-th\]; D. Anselmi, Weighted power counting and Lorentz violating gauge theories. II. Classification, Ann. Phys. 324 (2009) 1058, [08A3 Renormalization.com](http://renormalization.com/08a3/) and arXiv:0808.3474 \[hep-th\]. S. Weinberg, Ultraviolet divergences in quantum theories of gravitation, in *An Einstein centenary survey*, Edited by S. Hawking and W. Israel, Cambridge University Press, Cambridge 1979. K.S. Stelle, Renormalization of higher-derivative quantum gravity, Phys. Rev. D 16 (1977) 953. E.T. Tomboulis, Superrenormalizable gauge and gravitational theories, arXiv:hep-th/9702146; L. Modesto, Super-renormalizable quantum gravity, Phys.Rev. D86 (2012) 044005 and arXiv:1107.2403 \[hep-th\]; T. Biswas, E. Gerwick, T. Koivisto and A. Mazumdar, Towards singularity and ghost free theories of gravity, Phys.Rev.Lett. 108 (2012) 031101 and arXiv:1110.5249 \[gr-qc\]; L. Modesto, Finite quantum gravity, arXiv:1305.6741 \[hep-th\]. H. Kluberg-Stern and J.B. Zuber, Renormalization of nonabelian gauge theories in a background field gauge. 1. Green functions, Phys. Rev. 
D12 (1975) 482; H. Kluberg-Stern and J.B. Zuber, Renormalization of nonabelian gauge theories in a background field gauge. 2. Gauge invariant operators, Phys. Rev. D 12 (1975) 3159. D. Colladay and V.A. Kostelecký, Lorentz-violating extension of the Standard Model, Phys. Rev. D58 (1998) 116002 and arXiv:hep-ph/9809521; D. Anselmi, Weighted power counting, neutrino masses and Lorentz violating extensions of the Standard Model, Phys. Rev. D79 (2009) 025017, [08A4 Renormalization.com](http://renormalization.com/08a4/) and arXiv:0808.3475 \[hep-ph\]; D. Anselmi, Standard Model without elementary scalars and high energy Lorentz violation, Eur. Phys. J. C65 (2010) 523, [09A1 Renormalization.com](http://renormalization.com/09a1/), and arXiv:0904.1849 \[hep-ph\]. S.L. Adler and W.A. Bardeen, Absence of higher-order corrections in the anomalous axial vector divergence, Phys. Rev. 182 (1969) 1517. D. Binosi and A. Quadri, Slavnov-Taylor constraints for nontrivial backgrounds, Phys. Rev. D84 (2011) 065017 and arXiv:1106.3240 \[hep-th\]; D. Binosi and A. Quadri, Canonical transformations and renormalization group invariance in the presence of nontrivial backgrounds, Phys. Rev. D85 (2012) 085020 and arXiv:1201.1807 \[hep-th\]; D. Binosi and A. Quadri, The background field method as a canonical transformation, Phys.Rev. D85 (2012) 121702 and arXiv:1203.6637 \[hep-th\]. D. Anselmi, A general field covariant formulation of quantum field theory, Eur. Phys. J. C73 (2013) 2338, [12A1 Renormalization.com](http://renormalization.com/12a1/) and arXiv:1205.3279 \[hep-th\]. D. Anselmi, A master functional for quantum field theory, Eur. Phys. J. C73 (2013) 2385, [12A2 Renormalization.com](http://renormalization.com/12a2/) and arXiv:1205.3584 \[hep-th\]. D. Anselmi, Master functional and proper formalism for quantum gauge field theory, Eur. Phys. J. C73 (2013) 2363, [12A3 Renormalization.com](http://renormalization.com/12a3/) and arXiv:1205.3862 \[hep-th\]. B.L. Voronov, P.M. 
Lavrov and I.V. Tyutin, Canonical transformations and the gauge dependence in general gauge theories, Sov. J. Nucl. Phys. 36 (1982) 292 and Yad. Fiz. 36 (1982) 498. P. van Nieuwenhuizen, *Supergravity*, Phys. Rept. 68 (1981) 189. S.J. Gates, M.T. Grisaru, M. Rocek and W. Siegel, [*Superspace or one thousand and one lessons in supersymmetry*]{}, Front. Phys. 58 (1983) 1-548, arXiv:hep-th/0108200. D. Anselmi, *Renormalization*, to appear at [`renormalization.com`](http://renormalization.com). D. Anselmi, More on the subtraction algorithm, Class. Quant. Grav. 12 (1995) 319, [94A1 Renormalization.com](http://renormalization.com/94a1/) and arXiv:hep-th/9407023. D. Anselmi, Removal of divergences with the Batalin-Vilkovisky formalism, Class. Quant. Grav. 11 (1994) 2181-2204, [93A2 Renormalization.com](http://renormalization.com/93a2/) and arXiv:hep-th/9309085. W. Troost, P. van Nieuwenhuizen and A. Van Proeyen, Anomalies and the Batalin-Vilkovisky Lagrangian formalism, Nucl. Phys. B 333 (1990) 727.
---
abstract: 'We numerically investigate the self-diffusion coefficient and the correlation length of the rigid clusters (i.e., the typical size of the collective motions) in sheared soft athermal particles. Here we find that the rheological flow curves of the self-diffusion coefficient are collapsed by the proximity to the jamming transition density. This feature is shared with the well-established critical scaling of the flow curves for the shear stress or viscosity. We furthermore reveal that the divergence of the correlation length governs the critical behavior of the diffusion coefficient: the diffusion coefficient is proportional to the correlation length times the strain rate over a wide range of strain rates and packing fractions across the jamming transition density.'
author:
- Kuniyasu Saitoh
- Takeshi Kawasaki
bibliography:
- 'diffusion\_overdamp.bib'
title: Critical scaling of diffusion coefficients and size of rigid clusters of soft athermal particles under shear
---

Introduction
============

*Transport properties* of soft athermal particles, e.g. emulsions, foams, colloidal suspensions, and granular materials, are important in science and engineering technology [@bird]. In many manufacturing processes, these particles are forced to flow (through pipes, containers, etc.), and the transportation of “flowing particles” is of central importance for industrial applications [@larson]. Therefore, there is a need to understand how the transport properties are affected by the rheological flow properties of soft athermal particles. Recently, the rheological flow properties of soft athermal particles have been extensively studied, and it has been revealed that the rheology of such particulate systems depends not only on the strain rate but also on the packing fraction of the particles [@rheol0; @pdf1; @rheol1; @rheol2; @rheol3; @rheol4; @rheol5; @rheol6; @rheol7; @rheol8; @rheol9; @rheol10; @rheol11; @rheol12; @rheol13]. 
If the packing fraction $\phi$ is lower than the so-called jamming transition density $\phi_J$, the steady-state stress is described by either Newtonian [@rheol0; @pdf1] or Bagnoldian rheology [@rheol1; @rheol2; @rheol3; @rheol4; @rheol5] (depending on whether particle inertia is significant or not). If the packing fraction exceeds the jamming point ($\phi>\phi_J$), one observes a yield stress at vanishing strain rate [@review-rheol0]. These two trends are solely determined by the proximity to the jamming transition $|\Delta\phi|\equiv|\phi-\phi_J|$ [@rheol0], and the rheological flow curves of many types of soft athermal particles have been explained by the critical scaling near jamming [@pdf1; @rheol1; @rheol2; @rheol3; @rheol4; @rheol5; @rheol6; @rheol7; @rheol8; @rheol9; @rheol10; @rheol11]. On the other hand, the mass transport or *self-diffusion* of soft athermal particles remains controversial. Like the rheological behavior of the shear stress or viscosity, the diffusivity of the particles under shear depends on both the strain rate and the packing fraction. Its dependence on the shear rate $\dot{\gamma}$ weakens as $\dot{\gamma}$ increases, i.e. the diffusivity $D$ exhibits a crossover from a linear scaling $D\sim\dot{\gamma}$ to the sub-linear scaling $D\sim\dot{\gamma}^q$ at a characteristic shear rate $\dot{\gamma}_c$, where the exponent is smaller than unity, $q<1$ [@diff_shear_md7; @diff_shear_md6; @dh_md2; @diff_shear_exp2; @diff_shear_exp1; @diff_shear_md2; @diff_shear_md3; @diff_shear_md4]. For example, in molecular dynamics (MD) simulations of Durian’s bubble model in two dimensions [@diff_shear_md7; @diff_shear_md6] and frictionless granular particles in three dimensions [@dh_md2], the diffusivity varies from $D\sim\dot{\gamma}$ ($\dot{\gamma}<\dot{\gamma}_c$) to $D\sim\dot{\gamma}^{0.8}$ ($\dot{\gamma}>\dot{\gamma}_c$). 
These results agree with laboratory experiments of colloidal glasses under shear [@diff_shear_exp2; @diff_shear_exp1] and also suggest that the diffusivity does not depend on spatial dimensions. However, another crossover, i.e. from $D\sim\dot{\gamma}$ to $D\sim\dot{\gamma}^{1/2}$, was suggested by studies of amorphous solids (though the scaling $D\sim\dot{\gamma}^{1/2}$ is the asymptotic behavior in rapid flows $\dot{\gamma}\gg\dot{\gamma}_c$) [@diff_shear_md2; @diff_shear_md3; @diff_shear_md4]. In addition, it was found in MD simulations of soft athermal disks that in a sufficiently small flow rate range, the diffusivity changes from $D\sim\dot{\gamma}$ ($\phi<\phi_J$) to $D\sim\dot{\gamma}^{0.78}$ ($\phi\simeq\phi_J$) [@diff_shear_md0], implying that the crossover shear rate $\dot{\gamma}_c$ vanishes as the system approaches jamming from below $\phi\rightarrow\phi_J$. Note that the self-diffusion of soft athermal particles shows a clear difference from the diffusion in glass; *no plateau* is observed in (transverse) mean square displacements (MSDs) [@diff_shear_md0; @diff_shear_md2; @diff_shear_md3; @diff_shear_md4; @diff_shear_md7; @dh_md2]. The absence of sub-diffusion can also be seen in quasi-static simulations ($\dot{\gamma}\rightarrow 0$) of soft athermal disks [@dh_qs1] and MD simulations of granular materials sheared under constant pressure [@diff_shear_md1]. Because the self-diffusion can be associated with collective motions of soft athermal particles, researchers have analyzed spatial correlations of velocity fluctuations [@rheol0] or non-affine displacements [@nafsc2] of the particles under shear. Characteristic sizes of collectively moving regions, i.e. *rigid clusters*, are then extracted as functions of $\dot{\gamma}$ and $\phi$; however, there is a lack of consensus on the scaling of the sizes. 
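As an illustration of such an analysis, the following is a minimal sketch of the equal-time spatial correlation of velocity fluctuations; the function name, the binning scheme, and the free-boundary $O(N^2)$ pair loop are our own simplifying choices and are not taken from the cited works.

```python
import numpy as np

def nonaffine_velocity_correlation(pos, vel, gamma_dot, rmax, nbins):
    """Equal-time spatial correlation C(r) = <dv_i . dv_j> of the non-affine
    velocities dv_i = v_i - u(r_i), with u(r) = (gamma_dot * y, 0), binned
    over pair separations r < rmax.  Naive O(N^2) pair loop, free boundaries."""
    pos = np.asarray(pos, dtype=float)
    dv = np.array(vel, dtype=float)
    dv[:, 0] -= gamma_dot * pos[:, 1]   # subtract the affine flow field
    dv -= dv.mean(axis=0)               # remove any residual drift
    edges = np.linspace(0.0, rmax, nbins + 1)
    num, cnt = np.zeros(nbins), np.zeros(nbins)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            if r < rmax:
                b = int(r / rmax * nbins)
                num[b] += dv[i] @ dv[j]
                cnt[b] += 1
    return 0.5 * (edges[1:] + edges[:-1]), num / np.maximum(cnt, 1)
```

A correlation length can then be read off from, e.g., the first zero crossing of $C(r)$.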
For example, the size of rigid clusters $\xi$ diverges as the shear rate goes to zero, $\dot{\gamma}\rightarrow 0$, so that the power-law scaling $\xi\sim\dot{\gamma}^{-s}$ was suggested, where the exponent varies from $s=0.23$ to $0.5$ depending on numerical models and flow conditions [@dh_md2; @diff_shear_md1]. The dependence of the rigid cluster size on packing fraction is also controversial. If the system is below jamming, critical scaling of the size is given by $\xi\sim|\Delta\phi|^{-w}$, where different exponents (in the range $0.5\le w\le 1.0$) have been reported by various simulations [@rheol0; @nafsc2; @rheol16]. In contrast, if the system is above jamming, the size becomes insensitive to the packing fraction (or exceeds the system size $L$) as only $L$ is the relevant length scale, i.e. $\xi\sim L$, in a quasi-static regime [@diff_shear_md2; @diff_shear_md3; @diff_shear_md4; @pdf1]. From a scaling argument, a relation between the diffusivity and size of rigid clusters was proposed as $$D\sim d_0\xi\dot{\gamma}~, \label{eq:rigid_cluster}$$ where $d_0$ is the particle diameter [@diff_shear_md1]. Previous results above jamming, where $D/\dot{\gamma}$ changes from a constant to $\dot{\gamma}^{-1/2}$ as $\dot{\gamma}$ is increased while the corresponding $\xi$ crosses over from $L$ to $\dot{\gamma}^{-1/2}$, seem to support this argument [@diff_shear_md2; @diff_shear_md3; @diff_shear_md4]. However, the link between the diffusivity and rigid clusters *below jamming* is still not clear. In this paper, we study the self-diffusion of soft athermal particles and the size of rigid clusters. The particles are driven by simple shear flows and their fluctuating motions around a mean velocity field are numerically calculated. From numerical results, we extract the diffusivity of the particles and explain its dependence on the control parameters (i.e. $\dot{\gamma}$ and $\phi$). 
We investigate wide ranges of the control parameters in order to unify our understanding of the diffusivity in both fast and slow flows, and both below and above jamming. Our main result is the critical scaling of the diffusivity $D$, which parallels the critical scaling of the size of rigid clusters $\xi$. We find that the linear relation between the diffusivity and size \[Eq. (\[eq:rigid\_cluster\])\] holds over the whole ranges of $\dot{\gamma}$ and $\phi$ if finite-size effects are not important. In the following, we present our numerical method in Sec. \[sec:method\] and our numerical results in Sec. \[sec:result\]. In Sec. \[sec:disc\], we discuss and summarize our results and give an outlook on future work.

Methods {#sec:method}
=======

We perform MD simulations of two-dimensional disks. In order to avoid crystallization of the system, we randomly distribute an equal number of small and large disks (with diameters $d_S$ and $d_L=1.4d_S$) in an $L\times L$ square periodic box [@gn1]. The total number of disks is $N=8192$ and the packing fraction of the disks $\phi$ is controlled around the jamming transition density $\phi_J\simeq0.8433$ [@rheol0]. We introduce an elastic force between the disks, $i$ and $j$, in contact as $\bm{f}_{ij}^\mathrm{e}=k\delta_{ij}\bm{n}_{ij}$, where $k$ is the stiffness and $\bm{n}_{ij}\equiv\bm{r}_{ij}/|\bm{r}_{ij}|$, with the relative position $\bm{r}_{ij}\equiv\bm{r}_i-\bm{r}_j$, is the normal unit vector. The elastic force is linear in the overlap $\delta_{ij}\equiv R_i+R_j-|\bm{r}_{ij}|>0$, where $R_i$ ($R_j$) is the radius of the disk $i$ ($j$). We also add a damping force to every disk as $\bm{f}_i^\mathrm{d}=-\eta\left\{\bm{v}_i-\bm{u}(\bm{r}_i)\right\}$, where $\eta$, $\bm{v}_i$, and $\bm{u}(\bm{r})$ are the damping coefficient, the velocity of the disk $i$, and the external flow field, respectively. Note that the stiffness and the damping coefficient determine a time scale $t_0\equiv\eta/k$. 
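To make the model concrete, here is a minimal sketch of the force laws above, combined with one explicit Euler step of the resulting overdamped dynamics $\eta\left\{\bm{v}_i-\bm{u}(\bm{r}_i)\right\}=\sum_{j\neq i}\bm{f}_{ij}^\mathrm{e}$; the function names, the free boundaries (no periodic images), and the naive $O(N^2)$ contact search are our illustrative simplifications, not the actual simulation code.

```python
import numpy as np

def overdamped_step(pos, radii, u, k=1.0, eta=1.0, dt=0.1):
    """One explicit Euler step of the overdamped dynamics
    eta * (v_i - u(r_i)) = sum_j f_ij^e, with the linear elastic force
    f_ij^e = k * delta_ij * n_ij acting on overlaps
    delta_ij = R_i + R_j - |r_ij| > 0.  `u` maps an (N, 2) array of
    positions to the external flow field."""
    pos = np.asarray(pos, dtype=float)
    vel = u(pos)                              # affine part u(r_i)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            dist = np.linalg.norm(rij)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0:                 # disks in contact
                f = (k / eta) * overlap * rij / dist
                vel[i] += f                   # v_i = u(r_i) + (1/eta) sum_j f_ij^e
                vel[j] -= f                   # Newton's third law
    return pos + vel * dt                     # dt = 0.1 t0 when k = eta = 1

def shear_field(gamma_dot):
    """External simple-shear flow field u(r) = (gamma_dot * y, 0)."""
    def u(pos):
        v = np.zeros_like(pos)
        v[:, 0] = gamma_dot * pos[:, 1]
        return v
    return u
```

With `shear_field(gamma_dot)` this corresponds, up to the omitted Lees-Edwards bookkeeping and neighbor lists, to the update described in the text.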
To simulate simple shear flows of the system, we impose the external flow field $\bm{u}(\bm{r})=(\dot{\gamma}y,0)$ under the Lees-Edwards boundary condition [@lees], where $\dot{\gamma}$ is the shear rate. Then, we describe the motions of the disks by overdamped dynamics [@rheol0; @rheol7; @pdf1], i.e. $\sum_{j\neq i}\bm{f}_{ij}^\mathrm{e}+\bm{f}_i^\mathrm{d}=\bm{0}$, where we numerically integrate the disk velocity $\bm{v}_i=\bm{u}(\bm{r}_i)+\eta^{-1}\sum_{j\neq i}\bm{f}_{ij}^\mathrm{e}$ with a time increment $\Delta t = 0.1t_0$. In the following, we analyze the data in a steady state, where the shear strain applied to the system is larger than unity. In addition, we scale every time and length by $t_0$ and the mean disk diameter $d_0\equiv(d_S+d_L)/2$, respectively. Results {#sec:result} ======= In this section, we show our numerical results for the self-diffusion of soft athermal particles (Sec. \[sub:diff\]). We also extract rigid clusters from numerical data in order to relate their sizes to the diffusivity (Sec. \[sub:rigid\]). We present additional data on the rheology and non-affine displacements in the Appendixes. Diffusion {#sub:diff} --------- We analyze the self-diffusion of soft athermal particles by the transverse component of the *mean squared displacement* (MSD) [@diff_shear_md0; @diff_shear_md1; @diff_shear_md3; @diff_shear_md4], $$\Delta(\tau)^2 = \left\langle\frac{1}{N}\sum_{i=1}^N\Delta y_i(\tau)^2\right\rangle~. \label{eq:MSD}$$ Here, $\Delta y_i(\tau)$ is the $y$-component of the particle displacement and the ensemble average $\langle\dots\rangle$ is taken over different choices of the initial time (see Appendix \[sec:nona\] for details) [^1]. Figure \[fig:msdy\] displays the MSDs \[Eq. (\[eq:MSD\])\] for different values of (a) $\phi$ and (b) $\dot{\gamma}$. The horizontal axes are the time interval scaled by the shear rate, $\gamma\equiv\dot{\gamma}\tau$, i.e. the shear strain applied to the system for the duration $\tau$.
As can be seen, every MSD exhibits a crossover to the normal diffusive behavior, $\Delta(\tau)^2\sim\dot{\gamma}\tau$ (dashed lines), around a crossover strain $\gamma=\gamma_c\simeq 1$ regardless of $\phi$ and $\dot{\gamma}$. The MSDs below jamming ($\phi<\phi_J$) monotonically increase with increasing packing fraction, while they (almost) stop increasing once the packing fraction exceeds the jamming point ($\phi>\phi_J$) \[Fig. \[fig:msdy\](a)\]. The dependence of the MSDs on the shear rate is monotonic; their heights decrease with increasing $\dot{\gamma}$ \[Fig. \[fig:msdy\](b)\]. These trends correspond well with the fact that the non-affine displacements are amplified in slow flows of dense systems, i.e. $\dot{\gamma}t_0\ll 1$ and $\phi>\phi_J$ [@saitoh11]. In addition, unlike in thermal systems under shear [@rheol10; @nafsc5; @th-dh_md1], no plateaus are observed in the MSDs. Therefore, neither “caging" nor “sub-diffusion" of the particles exists in our sheared athermal systems [@dh_md2; @dh_qs1; @dh_md1]. ![ The transverse MSDs $\Delta^2$ \[Eq. (\[eq:MSD\])\] as functions of the shear strain $\gamma\equiv\dot{\gamma}\tau$. (a) The packing fraction $\phi$ increases as indicated by the arrow and listed in the legend, where the shear rate is $\dot{\gamma}=10^{-6}t_0^{-1}$. (b) The shear rate $\dot{\gamma}$ increases as indicated by the arrow and listed in the legend, where the packing fraction is $\phi=0.84$. \[fig:msdy\]](msdy.png){width="\columnwidth"} To quantify the normal diffusion of the disks, we introduce the diffusivity (or diffusion coefficient) as [^2] $$D=\lim_{\tau\rightarrow\infty}\frac{\Delta(\tau)^2}{2\tau}~. \label{eq:D}$$ Figure \[fig:diff\](a) shows double logarithmic plots of the diffusivity \[Eq. (\[eq:D\])\] over the shear rate, $D/\dot{\gamma}$, where symbols represent the packing fraction $\phi$ (as listed in the legend). The diffusivity over the shear rate increases with $\phi$.
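The MSD of Eq. (\[eq:MSD\]) and the extraction of $D/\dot{\gamma}$ in the diffusive window $1<\gamma<10^2$ (cf. the footnote definition) can be sketched as follows. The array layout is our own assumption, made only for illustration:

```python
import numpy as np

def transverse_msd(dy):
    """Transverse MSD, Eq. (eq:MSD): dy[a, t, i] is the y-displacement of
    particle i after lag index t, starting from initial time a; average
    over particles (i) and initial times (a)."""
    return np.mean(dy**2, axis=(0, 2))

def diffusivity_over_rate(msd, gamma):
    """D / gamma_dot, estimated as the sample average of MSD / (2*strain)
    in the normal diffusive window 1 < gamma < 10^2."""
    mask = (gamma > 1.0) & (gamma < 1.0e2)
    return float(np.mean(msd[mask] / (2.0 * gamma[mask])))
```

For perfectly diffusive synthetic data, $\Delta(\gamma)^2 = 2(D/\dot{\gamma})\gamma$, the estimator returns $D/\dot{\gamma}$ exactly.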
If the system is above jamming $\phi>\phi_J$, it is a monotonically decreasing function of $\dot{\gamma}$. On the other hand, if the system is below jamming $\phi<\phi_J$, it exhibits a crossover from a plateau to a monotonic decrease around a characteristic shear rate, e.g. $\dot{\gamma}_0t_0\simeq 10^{-3}$ for $\phi=0.80$ [@diff_shear_md3; @diff_shear_md4]. In Appendix \[sec:rheo\], we demonstrate *scaling collapses* of rheological flow curves [@rheol0]. Here, we also demonstrate scaling collapses of the diffusivity. As shown in Fig. \[fig:diff\](b), all the data are nicely collapsed [^3] by the scaling exponents $\lambda=1.0$ and $\nu=4.0$. If the shear rate is smaller than a characteristic value, $\dot{\gamma}/|\Delta\phi|^\nu \lesssim 10^4$, i.e. $\dot{\gamma}<\dot{\gamma}_c\simeq 10^4|\Delta\phi|^\nu$, the data below jamming ($\phi<\phi_J$) are constant, whereas the data above jamming ($\phi>\phi_J$) show a power-law decay, where the slope is approximately given by $-0.3$ (solid line). Therefore, we describe the diffusivity in a *quasi-static regime* ($\dot{\gamma}<\dot{\gamma}_c$) as $|\Delta\phi|^\lambda D/\dot{\gamma}\sim\mathcal{G}_\pm(\dot{\gamma}/|\Delta\phi|^\nu)$, where the scaling functions are given by $\mathcal{G}_-(x)\sim\mathrm{const.}$ for $\phi<\phi_J$ and $\mathcal{G}_+(x)\sim x^{-0.3}$ otherwise. On the other hand, if $\dot{\gamma}>\dot{\gamma}_c$, all the data follow a single power law (dotted line). This means that the scaling functions are given by $\mathcal{G}_\pm(x) \sim x^{-z}$ in a *plastic flow regime* ($\dot{\gamma}>\dot{\gamma}_c$), where the diffusivity scales as $D\sim\dot{\gamma}|\Delta\phi|^{-\lambda}\mathcal{G}_\pm(\dot{\gamma}/|\Delta\phi|^\nu)\sim\dot{\gamma}^{1-z}|\Delta\phi|^{\nu z-\lambda}$. Because this scaling should be independent of whether the system is below or above jamming, i.e. independent of $|\Delta\phi|$, the power-law exponent is given by $z=\lambda/\nu=1/4$ as confirmed in Fig. \[fig:diff\](b).
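The rescaling behind the collapse of Fig. \[fig:diff\](b) amounts to plotting $|\Delta\phi|^\lambda D/\dot{\gamma}$ against $\dot{\gamma}/|\Delta\phi|^\nu$. A minimal sketch (the master function used in the check below is made up purely for the self-test, not taken from the data):

```python
import numpy as np

PHI_J = 0.8433        # jamming density quoted in the text
LAM, NU = 1.0, 4.0    # critical exponents from the collapse

def collapse(gdot, D_over_gdot, phi):
    """Rescale one flow curve onto the master form
    |dphi|^lam * (D/gdot) = G_pm(gdot / |dphi|^nu)."""
    dphi = abs(phi - PHI_J)
    return gdot / dphi**NU, dphi**LAM * np.asarray(D_over_gdot)
```

Data generated from any single master curve at two different packing fractions land on top of each other after this rescaling, which is exactly the collapse criterion used to fix $\lambda$ and $\nu$.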
![ (a) The diffusivity over the shear rate, $D/\dot{\gamma}$, as a function of $\dot{\gamma}$, where $\phi$ increases as indicated by the arrow and listed in the legend. (b) *Scaling collapses* of the diffusivity, where $\Delta\phi\equiv\phi-\phi_J$. The critical exponents are given by $\lambda=1.0$ and $\nu=4.0$, where the slopes of the dotted and solid lines are $-\lambda/\nu$ and $-0.3$, respectively. \[fig:diff\]](diff_coeff.png){width="\columnwidth"} In summary, the diffusivity of the disks scales as $$D \sim \begin{cases} |\Delta\phi|^{-\lambda}\dot{\gamma} & (\phi<\phi_J) \\ |\Delta\phi|^{0.3\nu-\lambda}\dot{\gamma}^{0.7} & (\phi>\phi_J) \end{cases} \label{eq:D1}$$ in the quasi-static regime ($\dot{\gamma}<\dot{\gamma}_c$) and $$D \sim \dot{\gamma}^{1-\lambda/\nu} \label{eq:D2}$$ in the plastic flow regime ($\dot{\gamma}>\dot{\gamma}_c$), where the critical exponents are estimated as $\lambda=1.0$ and $\nu=4.0$. From Eqs. (\[eq:D1\]) and (\[eq:D2\]), we find that the diffusivity below jamming ($\phi<\phi_J$) is linear in the shear rate, $D\sim\dot{\gamma}$, in slow flows, whereas its dependence on the shear rate is algebraic, $D\sim\dot{\gamma}^{3/4}$, in fast flows. A similar trend has been found in molecular dynamics studies of simple shear flows below jamming [@diff_shear_md0; @dh_md2; @dh_md1] and experiments on colloidal glasses under shear [@diff_shear_exp1]. In addition, the prefactor of the diffusivity below jamming diverges at the transition as $|\Delta\phi|^{-1}$ \[Eq. (\[eq:D1\])\], which we will relate to a length scale diverging as the system approaches jamming from below (Sec. \[sub:rigid\]). The diffusivity above jamming ($\phi>\phi_J$) implies a crossover from $D\sim|\Delta\phi|^{0.2}\dot{\gamma}^{0.7}$ to $\dot{\gamma}^{3/4}=\dot{\gamma}^{0.75}$, which reasonably agrees with prior work on soft athermal disks under shear [@diff_shear_md0].
Interestingly, the crossover shear rate vanishes at the transition as $\dot{\gamma}_c\sim|\Delta\phi|^{4.0}$, which is reminiscent of the fact that the crossover from the Newtonian or yield stress to the plastic flow vanishes at the onset of jamming (see Appendix \[sec:rheo\]). Rigid clusters {#sub:rigid} -------------- We now relate the diffusivity to rigid clusters of soft athermal particles under shear. The rigid clusters represent collective motions of the particles which tend to move in the same direction [@saitoh11]. According to the literature of jamming [@rheol0; @pdf1; @corl3], we quantify the collective motions by a spatial correlation function $C(x)=\langle v_y(x_i,y_i)v_y(x_i+x,y_i)\rangle$, where $v_y(x,y)$ is the transverse velocity field and the ensemble average $\langle\dots\rangle$ is taken over disk positions and time (in a steady state). Figure \[fig:corl\] shows the normalized correlation function $C(x)/C(0)$, where the horizontal axis ($x$-axis) is scaled by the mean disk diameter $d_0$. As can be seen, the correlation function exhibits a well-defined minimum at a characteristic length scale $x=\xi$ (as indicated by the vertical arrow for the case of $\phi=0.84$ in Fig. \[fig:corl\](a)). Because the minimum is negative $C(\xi)<0$, the transverse velocities are most “anti-correlated" at $x=\xi$. Therefore, if we assume that the rigid clusters are circular, their mean diameter is comparable in size with $\xi$ [@diff_shear_md1]. The length scale $\xi$ increases with the increase of $\phi$ \[Fig. \[fig:corl\](a)\] but decreases with the increase of $\dot{\gamma}$ \[Fig. \[fig:corl\](b)\]. These results are consistent with the fact that the collective behavior is most enhanced in slow flows of dense systems [@saitoh11]. ![ Normalized spatial correlation functions of the transverse velocities $C(x)/C(0)$, where symbols are as in Fig. \[fig:msdy\]. 
(a) The packing fraction $\phi$ increases as indicated by the arrow and listed in the legend, where $\dot{\gamma}=10^{-6}t_0^{-1}$. The minimum of the data for $\phi=0.84$ is indicated by the vertical (gray) arrow. (b) The shear rate $\dot{\gamma}$ increases as indicated by the arrow and listed in the legend, where $\phi=0.84$. \[fig:corl\]](corl.png){width="\columnwidth"} As reported in Ref. [@rheol0], we examine critical scaling of the length scale. Figure \[fig:xi\](a) displays scaling collapses of the data of $\xi$, where the critical exponents, $\lambda=1.0$ and $\nu=4.0$, are the same as those in Fig. \[fig:diff\](b). If the shear rate is smaller than the characteristic value, i.e. $\dot{\gamma}<\dot{\gamma}_c\simeq 10^4|\Delta\phi|^\nu$, the data below jamming ($\phi<\phi_J$) exhibit a plateau, whereas those above jamming ($\phi>\phi_J$) diverge with the *decrease* of shear rate. Therefore, if we assume that the data above jamming follow a power law with slope $-0.4$ (solid line), the length scale in the quasi-static regime ($\dot{\gamma}<\dot{\gamma}_c$) can be described as $|\Delta\phi|^\lambda\xi\sim\mathcal{J}_\pm(\dot{\gamma}/|\Delta\phi|^\nu)$ with the scaling functions $\mathcal{J}_-(x)\sim\mathrm{const.}$ for $\phi<\phi_J$ and $\mathcal{J}_+(x)\sim x^{-0.4}$ otherwise. Note, however, that the length scale is limited by the system size $L$ \[shaded region in Fig. \[fig:xi\](a)\] and should be scaled as $\xi\sim L$ above jamming in the quasi-static limit $\dot{\gamma}\rightarrow 0$ [@pdf1; @diff_shear_md3; @nafsc2]. This means that the system size is the only relevant length scale [@nafsc0] and thus we conclude $\xi\sim L$ in slow flows of jammed systems. On the other hand, if $\dot{\gamma}>\dot{\gamma}_c$, all the data are collapsed onto a single power law \[dotted line in Fig. \[fig:xi\](a)\].
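The correlation function $C(x)$ and the extraction of $\xi$ from its minimum can be sketched as follows, for a transverse velocity field sampled on a periodic grid. This is our own illustration with assumed array conventions, not the authors' analysis code:

```python
import numpy as np

def transverse_correlation(vy):
    """C(m) = <v_y(x, y) v_y(x + m, y)> for a transverse velocity field
    vy[ix, iy] on a periodic grid, averaged over grid points."""
    nx = vy.shape[0]
    return np.array([np.mean(vy * np.roll(vy, -m, axis=0))
                     for m in range(nx)])

def cluster_length(vy, dx=1.0):
    """xi: position of the minimum (strongest anti-correlation) of C,
    used in the text as a proxy for the mean rigid-cluster diameter."""
    return dx * np.argmin(transverse_correlation(vy))
```

As a sanity check, a sinusoidal field $v_y\propto\cos(2\pi x/\Lambda)$ gives a minimum of $C$ at half the wavelength, i.e. $\xi=\Lambda/2$.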
Therefore, the scaling functions are given by $\mathcal{J}_\pm(x)\sim x^{-z}$, such that the length scale scales as $\xi\sim\dot{\gamma}^{-z}|\Delta\phi|^{\nu z-\lambda}$. Because this relation is independent of $|\Delta\phi|$, the exponent should be $z=\lambda/\nu$ as confirmed in Fig. \[fig:xi\](a). ![ (a) Scaling collapses of the length scale $\xi$, where $\Delta\phi\equiv\phi-\phi_J$ and $\phi$ increases as listed in the legend. The critical exponents are $\lambda=1.0$ and $\nu=4.0$ as in Fig. \[fig:diff\](b), where the slopes of the dotted and solid lines are given by $-\lambda/\nu$ and $-0.4$, respectively. The shaded region exceeds the system size $|\Delta\phi|^\lambda L/2$ for the case of $\phi=0.90$. (b) Scatter plots of the diffusivity over the shear rate $D/\dot{\gamma}$ and the length scale $\xi$, where $\phi$ increases as listed in the legend. The dotted line represents a linear relation $D/\dot{\gamma}\sim\xi$ and the shaded region exceeds the system size $L/2\simeq 44d_0$. \[fig:xi\]](xi.png){width="\columnwidth"} In summary, the length scale, or the mean size of rigid clusters, scales as $$\xi \sim \begin{cases} |\Delta\phi|^{-\lambda} & (\phi<\phi_J) \\ L & (\phi>\phi_J) \end{cases} \label{eq:xi1}$$ in the quasi-static regime ($\dot{\gamma}<\dot{\gamma}_c$) and $$\xi \sim \dot{\gamma}^{-\lambda/\nu} \label{eq:xi2}$$ in the plastic flow regime ($\dot{\gamma}>\dot{\gamma}_c$), where the critical exponents, $\lambda$ and $\nu$, are the same as those for the diffusivity \[Eqs. (\[eq:D1\]) and (\[eq:D2\])\]. The critical divergence below jamming in the quasi-static regime, i.e. $\xi\sim|\Delta\phi|^{-1}$ \[Eq. (\[eq:xi1\])\], is consistent with the result of quasi-static simulations ($\dot{\gamma}\rightarrow 0$) of sheared athermal disks [@nafsc2]. In addition, the scaling $\xi\sim\dot{\gamma}^{-1/4}$ in the plastic flow regime \[Eq. (\[eq:xi2\])\] is very close to prior work on athermal particles under shear [@dh_md2].
From the results for the diffusivity \[Eqs. (\[eq:D1\]) and (\[eq:D2\])\] and the length scale \[Eqs. (\[eq:xi1\]) and (\[eq:xi2\])\], we discuss how the rigid clusters contribute to the diffusion of the particles. The linear relation $D\sim d_0\xi\dot{\gamma}$ \[Eq. (\[eq:rigid\_cluster\])\] holds below jamming (regardless of $\dot{\gamma}$) and in the plastic flow regime (regardless of $\phi$). We stress that the divergence of the diffusivity over the shear rate in the quasi-static regime, i.e. $D/\dot{\gamma}\sim|\Delta\phi|^{-1}$ \[Eq. (\[eq:D1\])\], is caused by the diverging length scale below jamming, i.e. $\xi\sim|\Delta\phi|^{-1}$ \[Eq. (\[eq:xi1\])\]. As shown in Fig. \[fig:xi\](b), the linear relation (dotted line) explains our results well if the length scale $\xi$ is smaller than $10d_0$. If the system is above jamming, the length scale increases (beyond $10d_0$) with the increase of $\phi$. However, the diffusivity over the shear rate $D/\dot{\gamma}$ starts to deviate from the linear relation (dotted line) and the length scale reaches the system size $L/2\simeq 44d_0$ (shaded region). We conclude that this deviation is caused by finite-size effects, and further studies with different system sizes are necessary (as in Refs. [@diff_shear_md3; @diff_shear_md4]) to figure out the relation between $D/\dot{\gamma}$ and $\xi$ in this regime, which we postpone to future work. Discussions {#sec:disc} =========== In this study, we have numerically investigated rheological and transport properties of soft athermal particles under shear. Employing MD simulations of two-dimensional disks, we have clarified how the rheology, self-diffusion, and size of rigid clusters vary with the control parameters, i.e. the externally imposed shear rate $\dot{\gamma}$ and the packing fraction of the disks $\phi$. Our main result is the critical scaling of the diffusivity (Sec. \[sub:diff\]) and size of rigid clusters (Sec.
\[sub:rigid\]), where their dependence on both $\dot{\gamma}$ and $\phi$ is reported \[Eqs. (\[eq:D1\]), (\[eq:D2\]), (\[eq:xi1\]), and (\[eq:xi2\])\]. The diffusivity has been calculated on both sides of jamming (by a single numerical protocol) to unify the understanding of self-diffusion in soft particulate systems: We found that (i) the diffusivity below jamming exhibits a crossover from the linear scaling $D\sim\dot{\gamma}$ to the power law $D\sim\dot{\gamma}^{3/4}$. Such a crossover can also be seen in previous simulations [@diff_shear_md7; @diff_shear_md6; @dh_md2] and experiments [@diff_shear_exp2; @diff_shear_exp1]. In addition, (ii) the diffusivity below jamming diverges as $D\sim|\Delta\phi|^{-1}$ if the system is in the quasi-static regime ($\dot{\gamma}<\dot{\gamma}_c$), whereas (iii) the diffusivity (both below and above jamming) is insensitive to $\phi$ if the system is in the plastic flow regime ($\dot{\gamma}>\dot{\gamma}_c$). Note that (iv) the crossover shear rate vanishes at the onset of jamming as $\dot{\gamma}_c\sim|\Delta\phi|^{4.0}$. Results (ii)-(iv) are new findings of this study. On the other hand, we found that (v) the diffusivity above jamming is weakly dependent on $\phi$ (as $D\sim|\Delta\phi|^{0.2}$) in the quasi-static regime and (vi) shows a crossover from $D\sim\dot{\gamma}^{0.7}$ to $\dot{\gamma}^{3/4}$. While result (v) is also a new finding, result (vi) contrasts with prior studies of sheared amorphous solids and granular materials under constant pressure, where the diffusivity exhibits a crossover from $D\sim\dot{\gamma}$ to $\dot{\gamma}^{1/2}$ [@diff_shear_md1; @diff_shear_md3; @diff_shear_md4]. Because our scaling $D\sim\dot{\gamma}^{0.7}$ in the quasi-static regime is consistent with Ref. [@diff_shear_md0], where the same overdamped dynamics are used, we suppose that the discrepancy is caused by differences in numerical models or flow conditions.
We have also examined the relation between the diffusivity and the typical size of rigid clusters $\xi$ (Sec. \[sub:rigid\]). Below jamming, we found the critical divergence $\xi\sim|\Delta\phi|^{-1}$ in the quasi-static regime, as previously observed in quasi-static simulations ($\dot{\gamma}\rightarrow 0$) of sheared athermal disks [@nafsc2]. In the plastic flow regime, the size becomes independent of $\phi$ and scales as $\xi\sim\dot{\gamma}^{-1/4}$. This is consistent with the previous result for sheared athermal particles [@dh_md2] (and is also close to the result for thermal glasses under shear [@th-dh_md1]). Above jamming, however, the size exhibits a crossover from $\xi\sim L$ to $\dot{\gamma}^{-1/4}$, which contrasts with the crossover from $\xi\sim\mathrm{const.}$ to $\dot{\gamma}^{-1/2}$ previously reported in simulations of amorphous solids [@diff_shear_md1; @diff_shear_md3; @diff_shear_md4]. From our scaling analyses, we found that the linear relation $D\sim d_0\xi\dot{\gamma}$ \[Eq. (\[eq:rigid\_cluster\])\] holds below jamming (for all $\dot{\gamma}$) and in the plastic flow regime (for all $\phi$), indicating that the self-diffusion is enhanced by the rotation of rigid clusters [@rheol0; @diff_shear_md1]. In our MD simulations, we fixed the system size to $L\simeq 88d_0$. However, systematic studies with different system sizes are needed to clarify the relation between $D$ and $\xi\sim L$ above jamming, especially in the quasi-static limit $\dot{\gamma}\rightarrow 0$ [@diff_shear_md3; @diff_shear_md4]. In addition, our analyses are limited to two dimensions. Though previous studies suggest that the diffusivity is independent of the dimensionality [@diff_shear_md7; @diff_shear_md6; @dh_md2], a recent study of soft athermal particles reported that the critical scaling of shear viscosity depends on dimensions [@rheol15]. Therefore, it is important to check whether the critical scaling \[Eqs.
(\[eq:D1\]) and (\[eq:D2\])\] is different (or not) in three-dimensional systems. Because we observed qualitative differences from the results of sheared amorphous solids and granular materials under constant pressure [@diff_shear_md1; @diff_shear_md3; @diff_shear_md4], further studies of different numerical models and flow conditions are necessary to complete our understanding of self-diffusion of soft athermal particles. Moreover, the relation between the diffusivity and shear viscosity may be interesting because it gives a Stokes-Einstein-like relation for the non-equilibrium systems studied here. We thank H. Hayakawa, M. Otsuki, and S. Takada for fruitful discussions. K.S. thanks F. Radjai and W. Kob for fruitful discussions and warm hospitality in Montpellier. This work was supported by KAKENHI Grant No. 16H04025, No. 18K13464 and No. 19K03767 from JSPS. Some computations were performed at the Yukawa Institute Computer Facility, Kyoto, Japan. Rheology {#sec:rheo} ======== The rheology of soft athermal particles is dependent on both the shear rate $\dot{\gamma}$ and area fraction $\phi$ [@rheol0; @pdf1; @rheol7]. Figure \[fig:rheo\] displays our numerical results of *flow curves*, i.e. (a) the pressure $p$ and (b) shear stress $\sigma$ as functions of the shear rate $\dot{\gamma}$. Here, different symbols represent different values of $\phi$ (as listed in the legend of (a)). The pressure and shear stress are defined as $p=(\tau_{xx}+\tau_{yy})/2$ and $\sigma=-\tau_{xy}$, respectively, where the stress tensor is given by the virial expression $$\tau_{\alpha\beta}=\frac{1}{L^2}\sum_i\sum_{j~(>i)}f_{ij\alpha}^\mathrm{e}r_{ij\beta} \label{eq:stress}$$ ($\alpha,\beta=x,y$) with the $\alpha$-component of elastic force $f_{ij\alpha}^\mathrm{e}$ and the $\beta$-component of relative position $r_{ij\beta}$. As shown in Fig. \[fig:rheo\], both the pressure and shear stress exhibit the Newtonian behavior, i.e.
they are proportional to the shear rate, $p\sim\dot{\gamma}$ and $\sigma\sim\dot{\gamma}$ (dotted lines), only if the area fraction is lower than the jamming transition density ($\phi<\phi_J$) and the shear rate is small enough ($\dot{\gamma}t_0\lesssim 10^{-4}$). However, a finite yield stress, $p_Y>0$ and $\sigma_Y>0$, emerges in the zero shear limit $\dot{\gamma}\rightarrow 0$ if the system is above jamming ($\phi>\phi_J$). ![ *Flow curves*, i.e. (a) the pressure $p$ and (b) shear stress $\sigma$ as functions of the shear rate $\dot{\gamma}$. The area fraction $\phi$ increases as indicated by the arrow (listed in the legend) in (a). The dotted lines represent the Newtonian behavior, i.e. (a) $p\sim\dot{\gamma}$ and (b) $\sigma\sim\dot{\gamma}$, for low area fractions, $\phi<\phi_J$, where $\phi_J\simeq 0.8433$ is the jamming transition density. \[fig:rheo\]](flow_curves.png){width="\columnwidth"} In the literature of jamming [@rheol0; @pdf1; @rheol7], rheological flow curves are collapsed by critical scaling. This means that the crossover from the Newtonian behavior ($p\sim\dot{\gamma}$ and $\sigma\sim\dot{\gamma}$) or the yield stress ($p\sim p_Y$ and $\sigma\sim\sigma_Y$) to plastic flow regime vanishes as the system approaches jamming $\phi\rightarrow\phi_J$. To confirm this trend, we collapse the data in Fig. \[fig:rheo\] by the proximity to jamming $|\Delta\phi|\equiv|\phi-\phi_J|$ as in Fig. \[fig:rheo-clp\]. Though the critical exponents are slightly different, i.e. $\kappa_p=1.1$ and $\mu_p=3.5$ for the pressure \[Fig. \[fig:rheo-clp\](a)\] and $\kappa_\sigma=1.2$ and $\mu_\sigma=3.3$ for the shear stress \[Fig. \[fig:rheo-clp\](b)\], all the data are nicely collapsed on top of each other. If the shear rate is small enough, the data below jamming ($\phi<\phi_J$) follow the lower branch, whereas the data above jamming ($\phi>\phi_J$) are almost constant. 
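The virial expression \[Eq. (\[eq:stress\])\] and the derived $p$ and $\sigma$ can be sketched as follows; this is a minimal illustration with our own function names, assuming the contact forces and relative positions have already been collected:

```python
import numpy as np

def virial_stress(forces, rels, L):
    """Virial stress tensor, Eq. (eq:stress): tau_ab = (1/L^2) * sum over
    contacts i < j of f_ij^e (a-component) times r_ij (b-component).
    Returns (tau, p, sigma) with p = (tau_xx + tau_yy)/2, sigma = -tau_xy."""
    tau = np.zeros((2, 2))
    for f, r in zip(forces, rels):
        tau += np.outer(f, r)
    tau /= L**2
    p = 0.5 * (tau[0, 0] + tau[1, 1])
    sigma = -tau[0, 1]
    return tau, p, sigma
```

A single contact with force $(1,0)$ and separation $(2,0)$ in a box of side $L=2$ contributes only to $\tau_{xx}$, giving $p=0.25$ and $\sigma=0$.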
Therefore, the pressure and shear stress can be described as $p/|\Delta\phi|^{\kappa_p}\sim\mathcal{F}_\pm(\dot{\gamma}/|\Delta\phi|^{\mu_p})$ and $\sigma/|\Delta\phi|^{\kappa_\sigma}\sim\mathcal{F}_\pm(\dot{\gamma}/|\Delta\phi|^{\mu_\sigma})$ with the scaling functions, $\mathcal{F}_-(x)\sim x$ for $\phi<\phi_J$ and $\mathcal{F}_+(x)\sim\mathrm{const.}$ for $\phi>\phi_J$. On the other hand, if the shear rate is large enough, the system is in plastic flow regime, where all the data (both below and above jamming) follow a single power law (dotted lines in Fig. \[fig:rheo-clp\]). This implies that the scaling functions are given by $\mathcal{F}_\pm(x)\sim x^z$ (for both $\phi<\phi_J$ and $\phi>\phi_J$) with a power-law exponent $z$. Then, the pressure and shear stress scale as $p\sim|\Delta\phi|^{\kappa_p}\mathcal{F}_\pm(\dot{\gamma}/|\Delta\phi|^{\mu_p})\sim\dot{\gamma}^z|\Delta\phi|^{\kappa_p-\mu_p z}$ and $\sigma\sim|\Delta\phi|^{\kappa_\sigma}\mathcal{F}_\pm(\dot{\gamma}/|\Delta\phi|^{\mu_\sigma})\sim\dot{\gamma}^z|\Delta\phi|^{\kappa_\sigma-\mu_\sigma z}$, respectively. These scaling relations should be independent of whether the system is below or above jamming, i.e. independent of $|\Delta\phi|$. Thus, the power-law exponent is $z=\kappa_p/\mu_p\simeq 0.31$ for the pressure and $z=\kappa_\sigma/\mu_\sigma\simeq 0.36$ for the shear stress as confirmed in Fig. \[fig:rheo-clp\] (dotted lines). Note that the scaling collapses in Fig. \[fig:rheo-clp\] also confirm that the jamming transition density $\phi_J\simeq 0.8433$ is correct in our sheared systems [@rheol0]. ![ Scaling collapses of (a) the pressure and (b) shear stress, where $\Delta\phi\equiv\phi-\phi_J$ is the proximity to jamming. See the text for critical exponents, $\kappa_p$, $\mu_p$, $\kappa_\sigma$, and $\mu_\sigma$, where the dotted lines have the slopes (a) $\kappa_p/\mu_p$ and (b) $\kappa_\sigma/\mu_\sigma$. 
\[fig:rheo-clp\]](flow_curves_clp.png){width="\columnwidth"} In summary, the rheological flow properties of the disks are described as $$\begin{aligned} p &\sim& \begin{cases} |\Delta\phi|^{\kappa_p-\mu_p}\dot{\gamma} & (\phi<\phi_J) \\ |\Delta\phi|^{\kappa_p} & (\phi>\phi_J) \end{cases}~, \label{eq:pressure1} \\ \sigma &\sim& \begin{cases} |\Delta\phi|^{\kappa_\sigma-\mu_\sigma}\dot{\gamma} & (\phi<\phi_J) \\ |\Delta\phi|^{\kappa_\sigma} & (\phi>\phi_J) \end{cases}~, \label{eq:shear_stress1}\end{aligned}$$ in the quasi-static regime and $$\begin{aligned} p &\sim& \dot{\gamma}^{\kappa_p/\mu_p}~, \label{eq:pressure2} \\ \sigma &\sim& \dot{\gamma}^{\kappa_\sigma/\mu_\sigma}~, \label{eq:shear_stress2}\end{aligned}$$ in the plastic flow regime. The critical exponents are estimated as $\kappa_p=1.1$, $\mu_p=3.5$, $\kappa_\sigma=1.2$, and $\mu_\sigma=3.3$. In Eqs. (\[eq:pressure1\]) and (\[eq:shear\_stress1\]), the Newtonian behavior is given by $p\sim|\Delta\phi|^{-2.4}\dot{\gamma}$ and $\sigma\sim|\Delta\phi|^{-2.1}\dot{\gamma}$, where the exponents are comparable to those for viscosity divergence below jamming [@rheol7]. The yield stress vanishes as $p_Y\sim|\Delta\phi|^{1.1}$ and $\sigma_Y\sim|\Delta\phi|^{1.2}$ when the system approaches jamming from above \[Eqs. (\[eq:pressure1\]) and (\[eq:shear\_stress1\])\], which is consistent with the previous study of two-dimensional bubbles under shear [@pdf1]. The scaling in the plastic flow regime, $p\sim\dot{\gamma}^{0.31}$ and $\sigma\sim\dot{\gamma}^{0.36}$ \[Eqs. (\[eq:pressure2\]) and (\[eq:shear\_stress2\])\], is close to the prior work on sheared athermal disks [@rheol14], indicating *shear thinning* as typical for particulate systems under shear [@larson]. Non-affine displacements {#sec:nona} ======================== The self-diffusion of soft athermal particles is also sensitive to both $\dot{\gamma}$ and $\phi$. 
Because our system is homogeneously sheared (along the $x$-direction), the self-diffusion is represented by fluctuating motions of the disks around a mean flow. In our MD simulations, the mean velocity field is determined by the affine deformation as $\dot{\gamma}y\bm{e}_x$, where $\bm{e}_x$ is a unit vector parallel to the $x$-axis. Therefore, subtracting the mean velocity field from each disk velocity $\bm{u}_i(t)$, we introduce non-affine velocities as $\Delta\bm{u}_i(t)=\bm{u}_i(t)-\dot{\gamma}y_i\bm{e}_x$ ($i=1,\dots,N$). *Non-affine displacements* are then defined as the time integrals $$\Delta\bm{r}_i(\tau) = \int_{t_a}^{t_a+\tau}\Delta\bm{u}_i(t)dt~, \label{eq:non-affine}$$ where $\tau$ is the time interval. Note that the initial time $t_a$ can be arbitrarily chosen during a steady state. It is known that the non-affine displacements \[Eq. (\[eq:non-affine\])\] are sensitive to the rheological flow properties (Sec. \[sec:rheo\]) [@saitoh11]. Their magnitude significantly increases if the packing fraction exceeds the jamming point. In addition, their spatial distributions become more “collective" (they tend to align in the same direction as their neighbors) with the decrease of the shear rate. This means that the self-diffusion is also strongly dependent on both the shear rate and density. Especially, the collective behavior of the non-affine displacements implies the growth of rigid clusters in slow flows $\dot{\gamma}t_0\ll 1$ of jammed systems $\phi>\phi_J$, where the yield stress $\sigma\sim\sigma_Y$ is observed in the flow curves (Fig. \[fig:rheo\]). [^1]: The MSDs defined by the *total* non-affine displacements show quantitatively the same results (data are not shown). [^2]: We define the diffusivity \[Eq. (\[eq:D\])\] as the slope of the MSD \[Eq.
(\[eq:MSD\])\] in the normal diffusive regime $\gamma=\dot{\gamma}\tau>1$, where we take sample averages of $\Delta(\tau)^2/2\tau$ as $D/\dot{\gamma}\equiv \langle\Delta(\gamma)^2/2\gamma\rangle$ in the range $1<\gamma<10^2$. [^3]: The data for the highest shear rate, $\dot{\gamma}=10^{-1}t_0^{-1}$, are removed from the scaling collapses in Figs. \[fig:diff\](b) and \[fig:xi\](a).
--- abstract: 'A general method is proposed for predicting the asymptotic percolation threshold of networks with bottlenecks, in the limit that the sub-net mesh size goes to zero. The validity of this method is tested for bond percolation on filled checkerboard and “stack-of-triangle" lattices. Thresholds for the checkerboard lattices of different mesh sizes are estimated using the gradient percolation method, while for the triangular system they are found exactly using the triangle-triangle transformation. The values of the thresholds approach the asymptotic values of $0.64222$ and $0.53993$ respectively as the mesh is made finer, consistent with a direct determination based upon the predicted critical corner-connection probability.' author: - 'Amir Haji-Akbari' - 'Robert M. Ziff' bibliography: - 'HajiAkbariZiffv3.bib' title: 'Percolation in Networks with Voids and Bottlenecks\' --- \[sec:Introduction\]Introduction\ ================================= Percolation concerns the formation of long-range connectivity in random systems [@Stauffer]. It has a wide range of application in problems in physics and engineering, including such topics as conductivity and magnetism in random systems, fluid flow in porous media [@Sukop2002], epidemics and clusters in complex networks [@GoltsevDorogovtsevMendes08], analysis of water structure [@BernabeiEtAl08], and gelation in polymer systems [@YilmazGelirAlverogluUysal08]. To study this phenomenon, one typically models the network by a regular lattice made random by independently making sites or bonds occupied with probability $p$. At a critical threshold $p_c$, for a given lattice and percolation type (site, bond), percolation takes place. Finding that threshold exactly or numerically to high precision is essential to studying the percolation problem on a particular lattice, and has been the subject of numerous works over the years (recent works include Refs. 
[@Lee08; @RiordanWalters07; @Scullard06; @ScullardZiff06; @ZiffScullard06; @ScullardZiff08; @Parviainen07; @QuintanillaZiff07; @NeherMeckeWagner08; @WiermanNaorCheng05; @JohnerGrimaldiBalbergRyser08; @KhamforoushShamsThovertAdler08; @Ambrozic08; @Kownacki08; @FengDengBlote08; @Wu06; @MajewskiMalarz07; @WagnerBalbergKlein06; @TarasevichCherkasova07; @HakobyanPapouliaGrigoriu07; @BerhanSastry07]). In this paper we investigate the percolation characteristics of networks with bottlenecks. That is, we consider models in which we increase the number of internal bonds within a sub-net while keeping the number of contact points between sub-nets constant. We want to find how $p_c$ depends upon the mesh size in the sub-nets and in particular how it behaves as the mesh size goes to zero. Studying such systems should give insight on the behavior of real systems with bottlenecks, like traffic networks, electric power transmission networks, and ecological systems. It is also interesting from a theoretical point of view because it interrelates the percolation characteristics of the sub-net and the entire network. ![image](squareFig1.eps) ![image](triFig2.eps) An interesting class of such systems includes lattices with an ordered series of vacated areas within them. Examples include the filled checkerboard lattices (Fig. \[fig:Checkerboard\_finite\]) and the “stack-of-triangles" (Fig. \[fig:strg\_finite\]). The latter can be built by partitioning the triangular lattice into triangular blocks of dimension $L$, and alternately vacating those blocks. These internal blocks of length $L$ correspond to the sub-nets, which contact other sub-nets through the three contact points at their corners. The checkerboard lattice is the square-lattice analog of the stack-of-triangles lattice, where sub-nets are $L \times L$ square lattices which contact the other sub-nets via four contact points. 
Note, for the stack-of-triangles sub-nets, we also use the $L \times L$ designation, here to indicate $L$ bonds on the base and the sides. The problem of finding the bond percolation threshold can be solved exactly for the stack-of-triangles lattice because it fits into a class of self-dual arrangements of triangles, and the triangle-triangle transformation (a generalization of the star-triangle transformation) can be used to write down equations for its percolation threshold [@Ziff_CellDualCell; @ChayesLei06]. This approach leads to an algebraic equation which can be solved using numerical root-finding methods. However, due to the lack of self-duality in the filled checkerboard lattices, no exact solution can be obtained for their thresholds. It is of interest and of practical importance to investigate the limiting behavior of systems with sub-nets of an infinite number of bonds, i.e., systems where the size of sub-nets is orders of magnitude larger than the size of a single bond in the system, or equivalently, where the mesh size of the lattice compared to the sub-net size becomes small. Due to reduced connectivity, these systems will percolate at a *higher* occupation probability than a similar regular lattice. The limiting percolation threshold for infinite sub-nets is, counterintuitively, less than unity, and is argued to be governed by the connectedness of contact points to the infinite percolating clusters within sub-nets. This argument leads to a simple criterion linking the threshold to the probability that the corners connect to the giant cluster in the center of the sub-net. In this work, the limiting threshold value is computed for bond percolation on the stack-of-triangles and filled checkerboard lattices using this new criterion. Percolation thresholds are also found for a series of lattices of finite sub-net sizes.
For the stack-of-triangles lattices, most percolation thresholds are evaluated analytically using the triangle-triangle transformation method, while for filled checkerboard lattices, the gradient percolation method [@Ziff_Sapoval] is used. The limiting values of $0.53993$ and $0.64222$ are found for percolation thresholds of stack-of-triangles and checkerboard lattices respectively, which are both in good agreement with the values extrapolated for the corresponding lattices of finite sub-net sizes. We note that there are some similarities between this work and studies done on the fractal Sierpiński gaskets (triangular) [@YuYao1988] and carpets (square), but in the case of the Sierpiński models, the sub-nets are repeated in a hierarchical fashion while here they are not. For the Sierpiński gasket, which is effectively all corners, the percolation threshold is known to be 1 [@GefenAharonyShapirMandelbrot84]. For Sierpiński gaskets of a finite number of generations, the formulae for the corner connectivities can be found exactly through recursion [@TaitelbaumHavlinGrassbergerMoenig90], while here they cannot. Recently another hierarchical model with bottlenecks, the so-called Apollonian networks, which are related to duals of Sierpinski networks, has also been introduced [@AutoMoreiraHerrmannAndrade08]. In this model, the percolation threshold goes to zero as the system size goes to infinity. \[sec:theory\]Theory\ ===================== Let $p$ be the probability that a bond in the system is occupied. Consider a network with sub-nets of infinitely fine mesh, each individually percolating (in the sense of forming “infinite" clusters but not necessarily connecting the corners) at $p_{c,s}$, and denote the overall bond percolation threshold of the entire network to be $p_{c,n}$. It is obvious that $p_{c,s}<p_{c,n}$, due to reduced connectivity in the entire network compared to connectivity in individual sub-nets. 
For $p_{c,s} < p < p_{c,n}$, an infinite cluster will form within each sub-net with probability $1$. However, the entire network will not percolate, because a sufficient number of connections has not yet been established between the contact points at the corners and the central infinite clusters. Now we construct an auxiliary lattice by connecting the contact points to the center of each subnet, which represents the central infinite cluster contracted into a single site. The occupation probability of a bond on this auxiliary lattice is the probability that the contact point is connected to the central infinite cluster of the sub-net. Percolation of this auxiliary lattice is equivalent to the percolation of the entire network. That is, if this auxiliary lattice percolates at a threshold $p_{c,a}$, the percolation threshold of the entire network will be determined by: $$\begin{aligned} \label{eq:bottleneck_general} P_{\infty,{\rm corner}}(p_{c,n})=p_{c,a}\end{aligned}$$ where $P_{\infty,{\rm corner}}(p)$ gives the probability that the corner of the sub-net is connected to the central infinite cluster given that the single-bond occupation probability is $p$. In general, no analytical expression exists for $P_{\infty,{\rm corner}}(p)$, even for simple lattices such as the triangular and square lattices, and $P_{\infty,{\rm corner}}(p)$ must be evaluated by simulation. ![\[fig:stack\_of\_triangles\] (Color online.) Stack-of-triangles lattice and its auxiliary lattice. The filled blue (dark) triangles represent the sub-net, and the yellow honeycomb lattice represents the effective auxiliary lattice.](Str_LatticeFig3.eps) \[sec:theory:stack\_of\_triangles\]Stack-of-Triangles Lattice ------------------------------------------------------------- Fig. \[fig:stack\_of\_triangles\] shows a limiting stack-of-triangles lattice where each shaded triangle represents a sub-net of infinitely many bonds. The contact points are the corners of the triangular sub-nets. As shown in Fig.
\[fig:stack\_of\_triangles\], the auxiliary lattice of the stack-of-triangles lattice is the honeycomb lattice, which percolates at ${p_{c,a}=1-2\sin\left(\pi/18\right)\approx0.652704}$ [@SykesEssam1964]. Thus the asymptotic percolation threshold $p_{c,n}$ of the stack-of-triangles will be determined by: $$\begin{aligned} \label{eq:bottleneck_str} P_{\infty,{\rm corner}}(p_{c,n})=1-2\sin{\frac{\pi}{18}} \ .\end{aligned}$$ Because the stack-of-triangles lattice is made up of triangular cells in a self-dual arrangement, its percolation threshold can be found exactly using the triangle-triangle transformation [@Ziff_CellDualCell; @ChayesLei06]. Denoting the corners of a single triangular sub-net with $A$, $B$ and $C$, the percolation threshold of the entire lattice is determined by the solution of the following equation: $$\begin{aligned} \label{eq:dual} P(ABC)=P(\overline{ABC})\end{aligned}$$ where ${P(ABC)}$ is the probability that $A$, $B$ and $C$ are all connected, and ${P(\overline{ABC})}$ is the probability that none of them are connected. Eq. (\[eq:dual\]) gives rise to an algebraic equation which can be solved for the exact percolation threshold of the lattices of different sub-net sizes. \[sec:theory:checkerboard\]Filled Checkerboard Lattice ------------------------------------------------------ Unlike the stack-of-triangles lattice, there is no exact solution for the percolation threshold of the checkerboard lattice for finite sub-nets because no duality argument can be made for such lattices. However, once again an auxiliary lattice approach can be used to find a criterion for the asymptotic value of the percolation threshold. Fig. \[fig:checkerboad\_aux\] depicts the corresponding auxiliary lattice for a checkerboard lattice, which is simply the square lattice with double bonds in series. This lattice percolates at ${p_{c,a}= 1/\sqrt{2} \approx{0.707107}}$, since two bonds in series behave as a single bond occupied with probability $p^2$, and the square lattice percolates when $p^2 = 1/2$.
Thus for the infinite sub-net ${p_{c,n}}$ will be determined by: $$\begin{aligned} \label{eq:bottleneck_checkerboard} P_{\infty,{\rm corner}}(p_{c,n})= \frac{1}{\sqrt{2}}\end{aligned}$$ It is interesting to note that there exists another regular lattice — the “martini" lattice — for which the bond threshold is also exactly $1/\sqrt{2}$ [@ZiffScullard06]. However, that lattice does not appear to relate to a network construction as the double-square lattice does. ![\[fig:checkerboad\_aux\](Color online.) Auxiliary lattice of the checkerboard lattice. The blue (dark) colored areas represent the subnets, and the double-bond square lattice (diagonals) represents the auxiliary lattice.](Checkerboard_auxFig4.eps) \[sec:methods\]Methods ====================== \[sec:methods:Pc\_finite\]Percolation Threshold of Systems of finite-sized sub-nets ----------------------------------------------------------------------------------- For the checkerboard lattice, we estimate the bond percolation thresholds using the gradient percolation method  [@Ziff_Sapoval]. In this method, a gradient of occupation probability is applied to the lattice, such that bonds are occupied according to the local probability determined by this gradient. A self-avoiding hull-generating walk is then made on the lattice according to the rule that an occupied bond will reflect the walk while a vacant bond will be traversed by the walk. For a finite gradient, this walk can be continued infinitely by replicating the original lattice in the direction perpendicular to the gradient using periodic boundary conditions. Such a walk will map out the boundary between the percolating and non-percolating regions, and the average value of occupation probability during the walk will be a measure of the percolation threshold. 
Because all bonds are occupied or vacated independently of each other, this average probability can be estimated as [@RossoGouyetSapoval1986]: $$\begin{aligned} \label{eq:Pc_gradient} p_c=\frac{N_{occ}}{N_{occ}+N_{vac}}\end{aligned}$$ where $N_{occ}$ and $N_{vac}$ are the numbers of occupied and vacant bonds encountered by the walk. It is particularly straightforward to apply this algorithm to bond percolation on a square lattice, and the checkerboard lattice can be simulated by making some of the square-lattice bonds permanently vacant. Walks are carried out in a horizontal-vertical direction and the original lattice is rotated $45^{\circ}$. We applied this approach to checkerboard lattices of different block sizes. Fig. \[fig:chkrbrd\_2b2\] and Fig. \[fig:chkrbrd\_4b4\] show the corresponding setups for lattices with $2\times2$ and $4\times4$ vacancies, where the lattice bonds are represented as dashed diagonal lines and solid horizontal and vertical lines show where the walk goes. Circles indicate the centers of permanently vacant bonds. It should be emphasized that permanently vacated bonds are not counted in Eq. (\[eq:Pc\_gradient\]) even if they are visited by the walk. The percolation thresholds of stack-of-triangles lattices of finite sub-net size were calculated using Eq. (\[eq:dual\]). If the occupation probability is $p$ and $q = 1 - p$, one can express $P(ABC)$ and $P(\overline{ABC})$ as: $$\begin{aligned} P(ABC)&=&\sum_{i=0}^{3n(n+1)/2} \phi(n,i)p^iq^{3n(n+1)/2-i} \label{eq:phi_n_i}\\ P(\overline{ABC})&=&\sum_{i=0}^{3n(n+1)/2}\psi(n,i)p^i q^{3n(n+1)/2-i} \label{eq:psi_n_i}\end{aligned}$$ where $n$ denotes the number of bonds per side of the sub-net, $\phi(n,i)$ denotes the number of configurations of an $n\times n$ triangular block with precisely $i$ occupied bonds where $A$, $B$ and $C$ are connected to each other, and $\psi(n,i)$ denotes the number of configurations where none of these points are connected.
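For the smallest sub-net this counting can be done by brute force. The following minimal sketch (the helper names are ours, not from the paper) enumerates all $2^3$ bond configurations of a $1\times1$ triangle and recovers $\phi(1,i)$ and $\psi(1,i)$, i.e. $P(ABC)=p^3+3p^2q$ and $P(\overline{ABC})=q^3$:

```python
# Exhaustive enumeration of phi(n,i) and psi(n,i) for n = 1: a single
# triangle whose three bonds join corners A=0, B=1, C=2.
from itertools import combinations

BONDS = [(0, 1), (1, 2), (0, 2)]   # the three bonds of a 1x1 sub-net

def components(occupied):
    """Union-find over the three corners, given the occupied bonds."""
    parent = list(range(3))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in occupied:
        parent[find(a)] = find(b)
    return {find(v) for v in range(3)}

phi = {}   # phi[i]: configs with i occupied bonds and A, B, C all connected
psi = {}   # psi[i]: configs with i occupied bonds and no pair connected
for i in range(len(BONDS) + 1):
    for occ in combinations(BONDS, i):
        comps = components(list(occ))
        if len(comps) == 1:        # all three corners in one cluster
            phi[i] = phi.get(i, 0) + 1
        elif len(comps) == 3:      # no two corners connected
            psi[i] = psi.get(i, 0) + 1

# phi == {2: 3, 3: 1}  ->  P(ABC) = 3 p^2 q + p^3
# psi == {0: 1}        ->  P(ABC-bar) = q^3
```

For larger $n$ the same loop runs over all $2^{3n(n+1)/2}$ configurations, which is exactly why the exhaustive search becomes infeasible beyond $n=4$.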
There appears to be no closed-form combinatorial expression for $\phi(n,i)$ and $\psi(n,i)$, and we determined them by exhaustive search of all possible configurations. ![\[fig:chkrbrd\_2b2\]Representation of checkerboard lattices for simulation with gradient method. The original bond lattice is represented by dashed diagonal lines, while the lattice on which the walk goes is vertical and horizontal. Open circles mark bonds that are permanently vacant.](checkerboard2x2Fig5.eps){width="2.5"} ![\[fig:chkrbrd\_4b4\]Checkerboard lattice with $4\times4$ vacancies, with description the same as in Fig. \[fig:chkrbrd\_2b2\].](checkerboard4x4Fig6.eps){width="2.5"} \[sec:methods:estimate\_Pinf\]Estimation of ${P_{\infty,{\rm corner}}}$ ----------------------------------------------------------------------- As mentioned in Section \[sec:theory\], the asymptotic value of the percolation threshold ${p_{c,n}}$ can be calculated using Eq. (\[eq:bottleneck\_checkerboard\]). However, there is no analytical expression for ${P_{\infty,{\rm corner}}(p)}$, hence it must be characterized by simulation. In order to do that, the size distribution of clusters connected to the corner must be found for different values of $p>p_{c,s}$. Cluster sizes are defined in terms of the number of sites in the cluster. In order to isolate the cluster connected to the corner, a first-in-first-out (FIFO) Leath growth algorithm is used, starting from the corner. In the FIFO algorithm, the neighbors of every unvisited site are investigated before going to neighbors of the neighbors, so that clusters grow in a circular front. Compared to the last-in-first-out algorithm used in recursive programming, this algorithm performs better for ${p{\ge}p_{c,s}}$ because it explores the space in a more compact way. At each run, the size of the cluster connected to the corner is evaluated using the FIFO growth algorithm.
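A minimal sketch of such a FIFO (Leath) growth from the corner of an $L\times L$ square lattice with bond probability $p$ follows; the lattice size, cut-off and trial count here are illustrative choices, not the values used in the study:

```python
# FIFO (Leath) cluster growth from the corner of an L x L square lattice,
# bond percolation with occupation probability p.  Each bond's state is
# decided the first time the growth front reaches it; a deque gives the
# breadth-first (circular-front) order described in the text.
import random
from collections import deque

def corner_cluster_size(L, p, rng):
    start = (0, 0)                      # the corner site
    seen = {start}
    queue = deque([start])
    size = 0
    while queue:
        x, y = queue.popleft()
        size += 1
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < L and 0 <= ny < L and (nx, ny) not in seen:
                if rng.random() < p:    # bond to this neighbor is occupied
                    seen.add((nx, ny))
                    queue.append((nx, ny))
    return size

def p_inf_corner(L=64, p=0.9, trials=200, cutoff=None, seed=1):
    """Fraction of runs whose corner cluster exceeds a size cut-off."""
    rng = random.Random(seed)
    if cutoff is None:
        cutoff = L      # illustrative cut-off for an 'infinite' cluster
    hits = sum(corner_cluster_size(L, p, rng) >= cutoff
               for _ in range(trials))
    return hits / trials
```

Well above the sub-net threshold, almost every run either dies at a tiny cluster or reaches a cluster comparable to the whole lattice, which is the gap exploited by the binning procedure described next.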
In order to get better statistics, clusters with sizes between ${2^i}$ and ${2^{i+1}-1}$ are counted in the $i$-th bin. Because simulations are always run on a finite system, there is an ambiguity in how to define the infinite cluster. However, when ${p{\ge}p_{c,s}}$, the infinite cluster occupies almost the entire lattice, and the finite-size clusters are quite small on average. This effect becomes more and more pronounced as $p$ increases, and the expected number of clusters in a specific bin becomes smaller and smaller. Consequently, larger bins will effectively contain no clusters, except the bin corresponding to cluster sizes comparable to the size of the entire system. Thus there is no need to set a cutoff value for defining an infinite cluster. Fig. \[fig:cluster\_size\] depicts the size distribution of clusters connected to the corner obtained for a $1024 \times 1024$ triangular lattice at (a): $p=0.40$ and (b): $p=0.55$ after ${10^4}$ independent runs. As observed, there is a clear gap between bins corresponding to small clusters and the bin corresponding to the spanning infinite cluster even for small values of $p$, which clearly demonstrates that the largest nonempty bin corresponds to infinite percolating clusters connected to the corner. The fraction of such clusters connected to the corner is an estimate of $P_{\infty,{\rm corner}}(p)$. In the simulations, we used the four-offset shift-register random-number generator R(471,1586,6988,9689) described in Ref. [@Ziff98]. ![image](Cluster_size_0_40Fig7a.eps) ![image](Cluster_size_0_55Fig7b.eps) \[sec:results\]Results and Discussion ===================================== \[sec:results:gradient\]Gradient Percolation Data ------------------------------------------------- The gradient percolation method was used to estimate the bond percolation threshold of checkerboard lattices of five different sub-net sizes, i.e., $2\times2$, $4\times4$, $8\times8$, $16\times16$ and $32\times32$.
For each lattice, six values of the gradient were used, and simulations were run for $10^{10}$ to $10^{12}$ steps for each gradient value in order to assure that the estimated percolation thresholds are accurate to at least five significant digits. The gradient was applied at an angle of $45^{\circ}$ relative to the original lattice. Figures \[fig:pc\_2b2\]-\[fig:pc\_8b8\] depict typical simulation results. Measured percolation thresholds for finite gradients were extrapolated to estimate the percolation threshold as $L\rightarrow\infty$. Our simulations show that $p_c$ fits fairly linearly when plotted against $1/L$. Table \[table:pc\_checkerboard\] gives these estimated percolation thresholds. ![image](pc_2b2Fig8a.eps) ![image](pc_4b4Fig8b.eps) ![image](pc_8b8Fig8c.eps)

  Sub-net size   Estimated $p_{c,n}$
  -------------- ---------------------------
  $1\times1$     $0.5^{a}$
  $2\times2$     $0.596303\pm0.000001^{b}$
  $4\times4$     $0.633685\pm0.000009^{b}$
  $8\times8$     $0.642318\pm0.000005^{b}$
  $16\times16$   $0.64237\pm0.00001^{b}$
  $32\times32$   $0.64219\pm0.00002^{b}$
  $\vdots$       $\vdots$
  $\infty$       $0.642216\pm0.00001^{c}$

  : \[table:pc\_checkerboard\] Percolation threshold for checkerboard lattices of different sub-net sizes: $^a$Exact result, $^{b}$from gradient percolation simulations, $^c$from corner simulations using Eq. (\[eq:bottleneck\_checkerboard\]).

\[section:results:dual\]Percolation Threshold of the Stack-of-triangles Lattice ------------------------------------------------------------------------------- As mentioned in Sections \[sec:theory:stack\_of\_triangles\] and \[sec:methods:Pc\_finite\], the percolation threshold of the stack-of-triangles lattice can be determined by Eq. (\[eq:dual\]). Table \[table:polynomial\] summarizes the corresponding polynomial expressions and their relevant roots for lattices having $1$, $2$, $3$ and $4$ triangles per edge. These polynomials give the $\phi(n,i)$ and $\psi(n,i)$ in Eqs.
(\[eq:phi\_n\_i\]) and (\[eq:psi\_n\_i\]) for $n=1,2,3,4$ and $i=0,1,\ldots,3n(n+1)/2$. We show $p_0 = P(\overline{ABC})$, $p_2 = P(AB\overline{C})$ (the probability that a given pair of vertices are connected together and not connected to the third vertex), and $p_3 = P(ABC)$. These quantities satisfy $p_0 + 3 p_2 + p_3 = 1$. Then we use Eq. (\[eq:dual\]) to solve for $p_{c,n}$ numerically. We also show in Table \[table:polynomial\] the values of $p_0$, $p_2$ and $p_3$ evaluated at $p_{c,n}$. Interestingly, as $n$ increases, $p_0$ at first increases somewhat but then tends back to its original value at $n=1$, reflecting the fact that the connectivity of the infinitely fine mesh triangle is identical to that of the critical honeycomb lattice, which is identical to the connectivity of the simple triangular lattice according to the usual star-triangle arguments. It is not possible to perform this exact enumeration for larger sub-nets, so we used the gradient percolation method to evaluate $p_c$ for $5\times5$. (To create the triangular bond system on a square bond lattice, alternating horizontal bonds are made permanently occupied.) The final threshold results are summarized in Table \[table:pc\_stack\_of\_triangles\].
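As a cross-check of these roots: since $p_0+3p_2+p_3=1$, the condition $p_3=p_0$ of Eq. (\[eq:dual\]) reduces to $2p_3+3p_2=1$, whose left side increases monotonically in $p$, so simple bisection suffices. A sketch for the two smallest sub-nets, using the polynomials of Table \[table:polynomial\]:

```python
# Solve Eq. (eq:dual), P(ABC) = P(ABC-bar), for the 1x1 and 2x2
# stack-of-triangles sub-nets.  With p0 + 3*p2 + p3 = 1, the condition
# p3 = p0 becomes f(p) = 2*p3(p) + 3*p2(p) - 1 = 0.

def p3_1x1(p):
    q = 1.0 - p
    return p**3 + 3*p**2*q

def p2_1x1(p):
    q = 1.0 - p
    return p*q**2

def p3_2x2(p):
    q = 1.0 - p
    return (9*p**4*q**5 + 57*p**5*q**4 + 63*p**6*q**3
            + 33*p**7*q**2 + 9*p**8*q + p**9)

def p2_2x2(p):
    q = 1.0 - p
    return (p**2*q**7 + 10*p**3*q**6 + 32*p**4*q**5
            + 22*p**5*q**4 + 7*p**6*q**3 + p**7*q**2)

def solve_dual(p3, p2, lo=0.0, hi=1.0, iters=100):
    """Bisection on f(p) = 2*p3(p) + 3*p2(p) - 1, monotone in p."""
    f = lambda p: 2*p3(p) + 3*p2(p) - 1.0
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

pc_1x1 = solve_dual(p3_1x1, p2_1x1)   # 0.34729635533 = 2*sin(pi/18)
pc_2x2 = solve_dual(p3_2x2, p2_2x2)   # 0.47162878827
```

The $1\times1$ root reproduces the classic triangular-lattice bond threshold $2\sin(\pi/18)$, and the $2\times2$ root matches the tabulated value.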
Sub-net $1\times1$ (simple triangular lattice):
$p_3=P(ABC)=p^3+3p^2q$
$p_2=P\left(AB\overline{C}\right)=pq^2$
$p_0=P(\overline{ABC})=(p+q)^3-p_3-3p_2$
$p_c=0.34729635533$
$p_0(p_c)=p_3(p_c)=0.27806614328$
$p_2(p_c)=0.14795590448$

Sub-net $2\times2$ (3 “up" triangles or 9 bonds per sub-net):
$p_3=P(ABC)=9p^4q^5+57p^5q^4+63p^6q^3+33p^7q^2+9p^8q+p^9$
$p_2=P\left(AB\overline{C}\right)=p^2q^7+10p^3q^6+32p^4q^5+22p^5q^4+7p^6q^3+p^7q^2$
$p_0=P(\overline{ABC})=(p+q)^9-p_3-3p_2$
$p_c=0.47162878827$
$p_0(p_c)=p_3(p_c)=0.28488908000$
$p_2(p_c)=0.14340728000$

Sub-net $3\times3$ (6 “up" triangles or $18$ bonds per sub-net):
$p_3=P(ABC)=29p^6q^{12}+468p^7q^{11}+3015p^8q^{10}+9648p^9q^9+16119p^{10}q^8+17076p^{11}q^7+12638p^{12}q^6+6810p^{13}q^5+2694p^{14}q^4+768p^{15}q^3+150p^{16}q^2+18p^{17}q+p^{18}$
$p_2=P\left(AB\overline{C}\right)=p^3q^{15}+21p^4q^{14}+202p^5q^{13}+1125p^6q^{12}+3840p^7q^{11}+7956p^8q^{10}+9697p^9q^9+7821p^{10}q^8+4484p^{11}q^7+1879p^{12}q^6+572p^{13}q^5+121p^{14}q^4+16p^{15}q^3+p^{16}q^2$
$p_0=P(\overline{ABC})=(p+q)^{18}-p_3-3p_2$
$p_c=0.50907779266$
$p_0(p_c)=p_3(p_c)=0.28322276251$
$p_2(p_c)=0.14451815833$

Sub-net $4\times4$ ($10$ “up" triangles or $30$ bonds per sub-net):
$p_3=P(ABC)=99p^8q^{22}+2900p^9q^{21}+38535p^{10}q^{20}+305436p^{11}q^{19}+1598501p^{12}q^{18}+5790150p^{13}q^{17}+14901222p^{14}q^{16}+27985060p^{15}q^{15}+39969432p^{16}q^{14}+45060150p^{17}q^{13}+41218818p^{18}q^{12}+31162896p^{19}q^{11}+19685874p^{20}q^{10}+10440740p^{21}q^9+4647369p^{22}q^8+1727208p^{23}q^7+530552p^{24}q^6+132528p^{25}q^5+26265p^{26}q^4+3976p^{27}q^3+432p^{28}q^2+30p^{29}q+p^{30}$
$p_2=P\left(AB\overline{C}\right)=p^4q^{26}+36p^5q^{25}+613p^6q^{24}+6533p^7q^{23}+48643p^8q^{22}+267261p^9q^{21}+1114020p^{10}q^{20}+3563824p^{11}q^{19}+8766414p^{12}q^{18}+16564475p^{13}q^{17}+24187447p^{14}q^{16}+27879685p^{15}q^{15}+25987202p^{16}q^{14}+19980934p^{17}q^{13}+12843832p^{18}q^{12}+6950714p^{19}q^{11}+3170022p^{20}q^{10}+1212944p^{21}q^9+385509p^{22}q^8+100140p^{23}q^7+20744p^{24}q^6+3300p^{25}q^5+379p^{26}q^4+28p^{27}q^3+p^{28}q^2$
$p_0=P(\overline{ABC})=(p+q)^{30}-p_3-3p_2$
$p_c=0.52436482243$
$p_0(p_c)=p_3(p_c)=0.28153957013$
$p_2(p_c)=0.14564028658$

: \[table:polynomial\] Expressions for $p_3$, $p_2$ and $p_0$ (with $q=1-p$) and the resulting threshold values for stack-of-triangles sub-nets.

  Sub-net size   Estimated $p_c$
  -------------- ----------------------------
  $1\times1$     $0.347296355^{a}$
  $2\times2$     $0.471628788^{a}$
  $3\times3$     $0.509077793^{a}$
  $4\times4$     $0.524364822^{a}$
  $5\times5$     $0.5315976\pm0.000001^{b}$
  $\vdots$       $\vdots$
  $\infty$       $0.53993\pm0.00001^c$

  : \[table:pc\_stack\_of\_triangles\] Percolation threshold for stack-of-triangles lattices of different sub-net sizes: $^a$From Eq. (\[eq:dual\]) using exact expressions for $p_0$ and $p_3$ from Table \[table:polynomial\], $^b$from gradient simulation, $^c$corner simulation using Eq. (\[eq:bottleneck\_str\]).
\[sec:results:estimate\_Pinf\]Estimation of ${P_{\infty,{\rm corner}}(p)}$ -------------------------------------------------------------------------- ### \[sec:results:estimate\_Pinf:checkerb\]Square Lattice The cluster growth algorithm was used to estimate ${P_{\infty,{\rm corner}}(p)}$ for different values of $p$. Simulations were run on a $2048\times2048$ square lattice. For each value of $p>1/2$, $10^5$ independent runs were performed and $P_{\infty,{\rm corner}}$ was estimated by considering the fraction of clusters falling into the largest nonempty bin as described in Section \[sec:methods:estimate\_Pinf\]. Fig. \[fig:p\_inf\_chkrbrd\] demonstrates the resulting curve for the square lattice. In order to solve Eq. (\[eq:bottleneck\_checkerboard\]), a cubic spline with natural boundary conditions was used for interpolation, and an initial estimate of ${p_{c,n}}=0.6432$ was obtained. The standard deviation of $P_{\infty,{\rm corner}}(p)$ scales as $O(1/\sqrt{N})$ where $N$ is the number of independent simulations used for its estimation, so that $N=10^5$ will give us an accuracy in $P_{\infty,{\rm corner}}(p)$ of about two significant figures. In order to increase the accuracy in our estimate, further simulations were performed in the vicinity of $p=0.6432$ for $N=10^{10}$ trials with a lower cut-off size, and $p_{c,n}$ was found to be $0.642216\pm0.00001$. This number is in good agreement with percolation thresholds given in Table \[table:pc\_checkerboard\]. Note that $p_{c,n}$ for the $16\times16$ sub-net checkerboard lattice actually overshoots the value 0.642216 for the infinite sub-net and then drops to the final value. This non-monotonic behavior is surprising at first and presumably is due to some interplay between the various corner connection probabilities that occur for finite systems.
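The inversion step above can be sketched as follows; for simplicity this illustration replaces the natural cubic spline with piecewise-linear interpolation, and the sampled curve is synthetic, standing in for the measured $P_{\infty,{\rm corner}}(p)$ data:

```python
# Invert a sampled monotone curve P(p) to find the p with P(p) = target,
# as in Eq. (eq:bottleneck_checkerboard) with target = 1/sqrt(2).
# Piecewise-linear interpolation stands in for the paper's cubic spline.
import math

def invert_monotone(ps, Ps, target):
    """Linear-interpolation root of P(p) = target on a monotone sample."""
    for (p1, P1), (p2, P2) in zip(zip(ps, Ps), zip(ps[1:], Ps[1:])):
        if min(P1, P2) <= target <= max(P1, P2):
            t = (target - P1) / (P2 - P1)
            return p1 + t * (p2 - p1)
    raise ValueError("target outside sampled range")

# Synthetic monotone curve (illustrative only, not measured data);
# its exact crossing of 1/sqrt(2) is at p = 0.75.
ps = [0.55 + 0.01 * k for k in range(26)]          # 0.55 .. 0.80
Ps = [math.sin(math.pi * (p - 0.5)) for p in ps]
p_est = invert_monotone(ps, Ps, 1.0 / math.sqrt(2.0))   # ~0.75
```

In practice one would first invert a coarse scan to bracket the root, then rerun the simulation with more trials near that bracket, exactly as done in the text.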
At the threshold $p_{c,n} = 0.642216$, we found that the number of corner clusters containing $s$ sites for large $s$ behaves in the expected way for supercritical clusters [@Stauffer] $n_s \sim a \exp(-b s^{1/2})$ with $\ln a = -7.0429$ and $b = 0.8177$. ![\[fig:p\_inf\_chkrbrd\]${P_{\infty,{\rm corner}}(p)}$ for the square lattice.](P_inf_checkerboardFig9.eps) ### \[sec:results:estimate\_Pinf:stotr\]Triangular lattice The cluster growth algorithm was applied to find the size distribution of clusters connected to the corner of a $1024\times1024$ triangular lattice. For each value of $p$, $10^4$ independent runs were performed and $P_{\infty,{\rm corner}}(p)$ was evaluated. Fig. \[fig:p\_inf\_strg\] depicts the results. The root of Eq. (\[eq:bottleneck\_str\]) was determined by cubic spline to be around $0.539$. Further simulations were performed around this value with $N=10^{10}$ runs for each $p$, yielding $p_{c,n}=0.539933\pm0.00001$. This value is also in good agreement with values given in Table \[table:polynomial\] and shows fast convergence as sub-net size increases. ![\[fig:p\_inf\_strg\]${P_{\infty,{\rm corner}}(p)}$ for the triangular lattice.](P_inf_strgFig10.eps) Discussion {#sec:conclusion} ========== We have shown that the percolation thresholds of checkerboard and stack-of-triangles systems approach values less than 1 as the mesh spacing in the sub-nets goes to zero. In that limit, the threshold can be found by finding the value of $p$ such that the probability a corner vertex is connected to the infinite cluster $P_{\infty,{\rm corner}}$ equals $1/\sqrt{2}$ and $1 - 2 \sin (\pi/18)$, respectively, based upon the equivalence with the double-bond square and bond honeycomb lattices. The main results of our analysis and simulations are summarized in Tables \[table:pc\_checkerboard\] and \[table:pc\_stack\_of\_triangles\].
For the case of the checkerboard, we notice a rather interesting and unexpected situation in which the threshold $p_{c,n}$ slightly overshoots the infinite-sub-net value and then decreases as the sub-net size increases. The threshold here is governed by a complicated interplay of connection probabilities for each square, and evidently for intermediate-sized systems it is somewhat harder to connect the corners than for larger ones, and this leads to a larger threshold. In the case of the triangular lattice, where there are fewer connection configurations between the three vertices of one triangle (namely, just $p_0$, $p_2$ and $p_3$), the value of $p_{c,n}$ appears to grow monotonically. To illustrate the general behavior of the systems, we show a typical critical cluster for the $8\times8$ checkerboard system in Fig. \[Pict8x8squaresBW\]. It can be seen that the checkerboard squares the cluster touches are mostly filled, since the threshold $p_{c,n} = 0.642318$ is so much larger than the square lattice’s threshold $p_{c,s} = 0.5$. In Fig. \[hajisquaredensity\] we show the average density of “infinite" (large) clusters in a single $64\times64$ square at the checkerboard criticality of $p_{c,n} = 0.642216$, in which case the density drops to $1/\sqrt{2}$ at the corners. In Fig. \[haji4density\] we show the corresponding densities conditional on the requirement that the cluster simultaneously touches all four corners, so that the density now goes to 1 at the corners and drops to a somewhat lower value in the center because not every site in the system belongs to the spanning cluster. Similar plots can be made of clusters touching 1, 2, or 3 corners. At the sub-net critical point $p_{c,s}$, the first two cases can be solved exactly and satisfy a factorization condition [@SimmonsKlebanZiff07; @SimmonsZiffKleban08], but this result does not apply at the higher $p_{c,n}$. The ideas discussed in this paper apply to any system with regular bottlenecks.
Another example is the kagomé lattice with the triangles filled with a finer-mesh triangular lattice; this system is studied in Ref. [@ZiffGu]. Acknowledgments =============== This work was supported in part by the National Science Foundation Grant No. DMS-0553487 (RMZ). The authors also acknowledge the contribution of UROP (Undergraduate Research Opportunity Program) student Hang Gu for his numerical determination of $p_{c,n}$ for the triangular lattice of sub-net size $5\times5$, and thank Christian R. Scullard for helpful discussions concerning this work.
--- abstract: 'We present here, for the first time, a 2D study of the overshoot convective mechanism in nova outbursts for a wide range of possible compositions of the layer underlying the accreted envelope. Previous surveys studied this mechanism only for solar composition matter accreted on top of carbon oxygen (C-O) white dwarfs. Since, during the runaway, mixing with carbon enhances the hydrogen burning rates dramatically, one should question whether significant enrichment of the ejecta is possible also for other underlying compositions (He, O, Ne, Mg), predicted by stellar evolution models. We simulated several non-carbon cases and found significant amounts of those underlying materials in the ejected hydrogen layer. Despite large differences in rates, time scales and energetics between the cases, our results show that the convective dredge up mechanism predicts significant enrichment in all our non-carbon cases, including helium enrichment in recurrent novae. The results are consistent with observations.' date: Released 2012 June 28 title: 'Convective overshoot mixing in Nova outbursts - The dependence on the composition of the underlying white dwarf' --- \[firstpage\] convection-hydrodynamics,binaries:close-novae,stars:abundances Introduction {#intro} ============ Almost all classical and recurrent novae for which reliable abundance determinations exist show enrichment (relative to solar composition) in heavy elements and/or helium. It is now widely accepted that the source for such enrichment is dredge up of matter from the underlying white dwarf to the accreted envelope. 
A few mechanisms for such mixing were proposed to explain the observations: mixing by a diffusion layer, for which diffusion during the accretion phase builds a layer of mixed abundances ([@pk84; @kp85; @IfMc91; @IfMc92; @FuI92]); mixing by shear instability induced by differential rotation during the accretion phase [@Du77; @kt78; @Macd83; @lt87; @ks87; @ks89]; mixing by shear gravity waves breaking on the white dwarf surface, in which a resonant interaction between large-scale shear flows in the accreted envelope and gravity waves in the white dwarf’s core can induce mixing of heavy elements into the envelope [@Rosner01; @alex02; @Alex2D]; and finally, mixing by overshoot of the convective flow during the runaway itself [@woo86; @saf92; @sha94; @gl95; @glt97; @Ker2D; @Ker3D; @glt2007; @Cas2010; @Cas2011a; @Cas2011b]. In this work we focus on the last of these mechanisms, which has proved efficient for C-O white dwarfs. Mixing of carbon from the underlying layer significantly enhances the hydrogen burning rate. The enhanced burning rate drives higher convective fluxes, inducing more mixing [@gl95; @glt97; @glt2007; @Cas2010; @Cas2011a; @Cas2011b]. Therefore, the fact that the underlying layer is rich in C is a critical feature of all the overshoot convective models that have been analyzed up to this work. According to the theory of stellar evolution for single stars, we expect the composition of the underlying white dwarf to be C-O for masses in the range $0.5-1.1 M\sun$ and ONe(Mg) for more massive white dwarfs [@GpGb01; @dom02; @Gb02]. Observations show enrichment in helium, CNO, Ne, Mg and heavier elements [@sst86; @Gehrz98; @Gehrz08; @Ili02]. For recurrent novae, helium enrichment can achieve levels of $40-50\%$ [@web1987; @anu2000; @diaz2010].
High helium abundances can simply be explained as the ashes of hydrogen burning during the runaway [@Hernanz2008], but one cannot exclude the possibility that the source of He enrichment is dredge-up from an underlying helium layer. We therefore found it essential to study nova outbursts for which the composition of the underlying layer is different from C-O. The models studied here extend our previous work. As a first step we study here the runaway of the accreted hydrogen layer on top of a single white dwarf, changing only the composition of the underlying layer. Having a fixed mass and radius, we can compare the timescales, convective flow, energetics and dredge-up in the different cases. A more comprehensive study, which varies the white dwarf's mass along with its composition (CO, ONe(Mg) or He rich), is left to future work. The present study is limited to 2D axially symmetric configurations. The well-known differences between 2D and 3D unstable flows can yield uncertainties of a few percent in our results, but cannot change the general trends, as previous studies showed reasonable agreement between 2D and 3D simulations with regard to integral quantities, although larger differences persist in the local structure ([@Cas2011b]). We therefore regard our present results as a good starting point for more elaborate 3D simulations. In the next section we describe the tools and the setup of the simulations. In subsequent sections we describe the results for each initial composition and then summarize our conclusions. Tools and initial configurations {#tools} ================================ All the 2D simulations presented in this work start from a 1D hydrostatic configuration, consisting of a $1.147 M\sun$ CO core with an outer layer of $~10^{-4}\,{\ensuremath{M_{\odot}}}$ composed of CO, ONe(Mg) or helium, according to the studied case as we explained in the introduction.
The original core was built as a hydrostatic CO polytrope that cooled by evolution to the desired central temperature ($2\times10^7 {\rm K}$). The 1D model evolves using Lagrangian coordinates and does not include any prescription for mixing at the bottom of the envelope. The core accretes matter with solar abundance, and the accreted matter is compressed and heated. Once the maximal temperature at the base of the accreted envelope reaches $9\times10^7 {\rm K}$, the whole accreted envelope ($3.4\times10^{-5}\,{\ensuremath{M_{\odot}}}$) and most of the underlying zone ($4.7\times10^{-5}\,{\ensuremath{M_{\odot}}}$) are mapped onto a 2D grid, and the simulations continue through the runaway and beyond using the 2D hydro code VULCAN-2D [@liv93]. This total mass of $8.1\times10^{-5}\,{\ensuremath{M_{\odot}}}$ is referred to as the total computed envelope mass. Using the same radial zoning in the 1D grid and in its 2D counterpart, the models preserve hydrostatic equilibrium to better than one part in ten thousand. Since the configurations are unstable, non-radial velocity components develop very quickly from the small round-off errors of the code, without introducing any artificial initial perturbation. Further computational details of the 2D simulations are as follows. The inner boundary is fixed, with assumed zero inward luminosity. The outer boundary follows the expansion of the envelope, taking advantage of the arbitrary Lagrangian-Eulerian (ALE) semi-Lagrangian option of the VULCAN code, whereas the burning regions of the grid at the base of the hydrogen-rich envelope are purely Eulerian. More details are presented in Glasner et al. (2005, 2007). The flexibility of the ALE grid enables us to model the burning zones at the bottom of the hydrogen-rich envelope with very fine zones in spite of the post-runaway radial expansion of the outer layers.
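The mass bookkeeping of the 1D-to-2D mapping described above can be checked directly. A minimal sketch (plain arithmetic on the figures quoted in the text, not code from VULCAN-2D):

```python
# Masses quoted in the text for the 1D -> 2D mapping [solar masses].
M_ACCRETED = 3.4e-5    # accreted solar-abundance envelope
M_UNDERLYING = 4.7e-5  # mapped part of the underlying zone

# "Total computed envelope mass": everything placed on the 2D grid.
m_total = M_ACCRETED + M_UNDERLYING
print(f"total computed envelope mass = {m_total:.1e} Msun")  # ~8.1e-05
```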
The typical computational cell at the base of the envelope, where most of the burning takes place, is a rectangular cell with dimensions of about $1.4 km \times 1.4 km$. Reflecting boundary conditions are imposed at the lateral boundaries of the grid. Gravity is taken into account as a point source with the mass of the core, and the self-gravity of the envelope is ignored. The reaction network includes 15 isotopes essential for hydrogen burning in the CNO cycle: H, He, He, Be, B, C, C, N, N, N, O, O, O, O and F.

Results
=======

In order to compare with 1D models and with previous studies we present here five basic configurations:

1\) The outburst of the 1D original model without any overshoot mixing.

2\) An up-to-date model with an underlying C-O layer.

3\) A model with an underlying helium layer.

4\) A model with an underlying O(Ne) layer.

5\) A toy model with underlying Mg. This model demonstrates the effects of possible mixing of hydrogen with $^{24}$Mg on the runaway. (In a realistic model the amount of Mg in the core is expected to be a few percent ([@GpGb01; @siess06]), but higher amounts can be found in the very outer layers of the core ([@berro94]).)

The computed models are listed in Table \[tab:models\]. In the next sections we present the energetics and mixing results for each of the models in the present survey.
  Model   Underlying   $T_{max}$   $Q_{max}$   remarks
  ------- ------------ ----------- ----------- ------------------------------------------
  m12     -            $2.05$      $4.0$       1D
  m12ad   CO           $2.45$      $1000.0$    -
  m12ag   He           $-$         $-$         -
  m12dg   He           $-$         $-$         $T_{base}=1.5\times10^8 K$
  m12al   O            $-$         $-$         -
  m12bl   O            $-$         $-$         $T_{base}=1.05\times10^8 K$
  m12cl   O            $-$         $-$         $T_{base}=1.125\times10^8 K$
  m12dl   O            $2.15$      $82.4$      $T_{base}=1.22\times10^8 K$
  m12kk   O            $-$         $-$         Mg rates
  m12jj   O            $2.46$      $1000.0$    Mg rates + $T_{base}=1.125\times10^8 K$

  : Parameters of the Simulated Initial Configurations[]{data-label="tab:models"}

$T_{max}$: maximal achieved temperature \[$10^8$ Kelvin\]. $Q_{max}$: maximal achieved energy generation rate \[$10^{42}$ erg/sec\].

The 1D original model
---------------------

The initial (original) model, composed of a $1.147 M\sun$ degenerate core, accretes a hydrogen-rich envelope at the rate of $1.0 \times 10^{-10} {\ensuremath{M_{\odot}}}/year$. When the temperature at the base of the accreted envelope reaches $9\times10^7 {\rm K}$ the accreted mass is $3.4\times10^{-5}\,{\ensuremath{M_{\odot}}}$. We define this time as t=0; exceptional test cases are noted in Table \[tab:models\]. Convection sets in for the original model a few days earlier, at $t=-10^6 sec$, when the base temperature is $3\times10^7 {\rm K}$. The 1D convective model assumes no overshoot mixing; therefore, convection affects only the heat transport and the abundances within the convective zone. Without overshoot mixing, burning rates are not enhanced by convective dredge-up of CO-rich matter. As a reference for the 2D simulations, we evolved the 1D model all through the runaway phase.
The time to reach the maximal energy production rate is about $2400 sec$, the maximal achieved temperature is $2.05\times10^8 K$, the maximal achieved burning rate is $4.0\times10^{42} erg/sec$ and the total nuclear energy generated up to the maximum production rate (integrated over time) is $0.77\times10^{46} erg$.

The C-O underlying layer
------------------------

We summarize here the main results of the underlying C-O case (computed already in [@glt2007] and repeated here), which is the most energetic case. A comparison of the history of the burning rate (Fig. \[co\]) with Figure 3 in [@glt2007] confirms that our current numerical results agree with those of our earlier publication. In this figure (Fig. \[co\]) we also present the amount of mixing at various stages of the runaway. The main effects of the convective underlying dredge-up are:

- The convective cells are small at early stages, with moderate velocities of a few times $10^{6}$ cm/sec. As the energy generation rate increases during the runaway, the convective cells merge and become almost circular. The size of the cells is comparable to the height of the entire envelope, i.e. a few pressure scale heights. The velocity magnitude within these cells, when the burning approaches the peak of the runaway, is a few times $10^{7}$ cm/sec.

- The shear convective flow is accompanied by efficient mixing of C-O matter from the core into the accreted solar-abundance envelope. The amount of C-O enrichment increases as the burning becomes more violent, and the total amount of mixing is above $30\%$ (Fig. \[co\]).

- Mixing enhances the burning rate, relative to the non-mixing 1D case, by more than two orders of magnitude: the maximum rate grows from $4.0\times10^{42} erg/sec$ to $1000.0\times 10^{42} erg/sec$. The enhanced rates raise the burning temperature and shorten the time required to reach the maximal burning rate.
The maximal achieved temperature increases from $2.05\times 10^8 K$ to $2.45\times 10^8 K$ and the rise time to maximum burning decreases from $2440 sec$ to $140 sec$. The total energy production rates of the 1D and the 2D simulations are given in Fig. \[co\]. The enhanced burning rate in the 2D case will give rise, at later stages of the outburst, to an increase in the kinetic energy of the ejecta. Unfortunately, since the hydro solver time step is restricted by the Courant condition, we cannot run the 2D models through to the coasting phase. A consideration of the integrated released energy at the moment of maximal burning rate reveals that the burning energy grows from $0.77\times 10^{46} erg$ in the 1D model to $1.45\times 10^{46} erg$ in the 2D model, a factor of about 2. Another interesting feature of the 2D C-O simulations is the appearance of fluctuations during the initial stages of the runaway (Fig. \[co\]). Such fluctuations are not observed in the 1D model. The fluctuations are a consequence of the mixing of the hot burning envelope matter with the cold underlying white dwarf matter. The mixing has two effects: cooling, as hot matter mixes with cold matter, and heating, through the enhancement of the reaction rate. In this case, after a small transient, heating by the enhanced reaction rates becomes dominant and the runaway takes place on a short timescale. For other underlying compositions the effect is more complicated; we discuss this issue in the next subsections.

The Helium underlying layer
---------------------------

![Log of the total energy production rate for all the helium models compared to the 1D model. The model with initial temperature higher than the default temperature of $ 9\times10^7 K $ was shifted in time by 1760.0 seconds.
](v-q-helium.eps){width="84mm"} \[Qhelium\] Helium enrichment, observed in recurrent novae (without enrichment by heavier isotopes) and in other classical novae, was mentioned in the past as an obstacle to the underlying convective overshoot mechanism ([@lt87]). Helium is the most abundant end product of hydrogen burning in nova outbursts, and it does not enhance the hydrogen burning in any way. In recurrent novae, helium may accumulate upon the surface of the white dwarf. If so, can the dredge-up mechanism lead to the observed helium enrichment? We examine this question using case m12ag in Table \[tab:models\]. Energetically, as expected, the model follows the 1D model exactly (Fig. \[Qhelium\]). The slow rise of the burning rate in this case makes the 2D simulation too expensive. To overcome this problem we artificially 'jump in time' to another helium model at a later stage of the runaway, in which the 1D temperature is $1.22\times10^8 {\rm K}$. The rise time of this model is much shorter, and by adjusting its time axis to that of the 1D model the two curves in Fig. \[Qhelium\] may be seen to coincide. The fluctuations of the 2D curve are absent in its 1D counterpart, both because the latter has no convection and because the 1D simulation is performed using an implicit algorithm with much larger time steps. The close agreement between the 2D and the 1D evolution increases our confidence in the validity of the 2D simulations. The convective flow is indeed moderate, but overshoot mixing is observed at a certain level (Fig. \[Xhelium\]). In the first (slower) phase (m12ag in Table \[tab:models\]) the level of mixing is small, converging to about $10\%$. In the later (faster) phase (m12dg in Table \[tab:models\]) the mixing rate increases in step with the increasing burning rate and the higher velocities in the convective cells.
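The helium budget implied by these mixing levels can be estimated with simple mixture bookkeeping. A sketch; the solar helium mass fraction used here (0.28) is an assumed round value, not a number from the paper:

```python
Y_SOLAR = 0.28   # assumed He mass fraction of the accreted solar-composition gas
Y_CORE = 1.0     # the underlying layer is pure helium in this model
f_dredge = 0.20  # ~20% of the envelope ends up as dredged-up core matter (text)

# Mass-weighted helium fraction of the mixed envelope.
y_final = (1.0 - f_dredge) * Y_SOLAR + f_dredge * Y_CORE
print(f"final He mass fraction ~ {y_final:.2f}")  # ~0.42, i.e. above 40%
```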
However, as the rates are still low relative to the C-O case, the added amount of mixing is again about $10\%$, summing up to a total of about $20\%$. Since we begin with matter of solar composition, the total He mass fraction at the end of the second phase exceeds $40\%$. Color maps of the absolute value of the velocity in the 2D models at different times along the development of the runaway are presented in Fig. \[fig:flow-he\]. In the first, slower phase (model m12ag in Table \[tab:models\]), the burning rate is low and grows mildly with time. The convective velocities converge to a value of a few $10^{6}$ cm/sec and the cell size is only a bit larger than a scale height. In the second, faster phase (model m12dg in Table \[tab:models\]), the burning rate is somewhat higher and grows with time. The convective velocities increase with time up to a value of about $1.5\times10^{7}$ cm/sec. The convective cells in the radial direction converge with time to an extended structure of a few scale heights.

The ONe(Mg) underlying layer
----------------------------

The rate of proton capture by oxygen is much slower and less energetic than capture by carbon, but it still has an enhancing effect relative to the 1D model without mixing (Fig. \[Qoxygen\]). This is well understood once we notice that at the initial temperature of the 2D model, i.e. $9.0\times 10^7 K$, the energy generation rate by proton capture on oxygen is more than three orders of magnitude lower than that by capture on carbon (Fig. \[fig:Qcapture\]). We can also observe that the energy generation rate for capture by oxygen stays much smaller than that for capture by carbon over the entire range of temperatures relevant to nova outbursts. For this range, we also notice that the capture rate by Ne is lower by an order of magnitude than the capture rate by O.
Being interested only in the energetics, convective flow and mixing by dredge-up, we choose to separate variables and study the case of a pure O underlying layer as a test case (ignoring any possible Mg that is predicted by evolutionary codes). We computed the model for an integrated time of 300 seconds (about one million time steps). The trend is very clear, and we can extrapolate and predict a runaway somewhat earlier and more energetic than in the 1D case. Again, we could not continue that 2D simulation further due to the very low burning rates and a small hydrodynamical time step. As before, we computed three different phases, where each of them starts from a different 1D model along its evolution. The maximal 1D temperatures at the base of the burning shell, when mapped to the 2D grid, are $1.05\times 10^8 K$, $1.125\times 10^8 K$ and $1.22\times 10^8 K$, respectively (Table \[tab:models\]). All three phases start with a transient related to the buildup of the convective flow. In Fig. \[Qoxygen\] we shifted the curves of the burning rates in time in a way that permits a continuous line to be drawn. Along this continuous line, evolution proceeds faster towards a runaway than the 1D burning rate, also shown in Fig. \[Qoxygen\]. ![Log of the total energy production rate for all the oxygen models compared to the 1D model and to the CO model. The models with initial temperature higher than the default temperature of $ 9\times10^7 K $ were shifted in time in order to produce a smooth continuous line.[]{data-label="Qoxygen"}](v-q-oxygen-new.eps){width="84mm"} ![Log of the energy generation rate for proton capture on C, O, Ne and Mg; the rate is calculated for $\rho=1000.0$ gr/cc.[]{data-label="fig:Qcapture"}](Qcapture.eps){width="84mm"} We computed the last phase, with an initial base temperature of $1.22\times 10^8 K$, for 350 sec until it reached a maximum.
The maximal achieved temperature is $2.15\times 10^8 K$ and the maximal achieved burning rate is $82.4\times 10^{42} erg/sec$. As expected, this case lies between the 1D case and the C-O model. The convective flow resembles the C-O case, a significant feature being the strong correlation between the burning rate and the convective velocities. Most importantly for the case of an underlying oxygen layer, dredge-up of substantial amounts of matter from the core into the envelope occurs in all our simulations. The trends are about the same as for the C-O models (Fig. \[Xoxygen\]). The correlation of the amount of mixing with the intensity of the burning is easily observable.

Approximate models for Mg underlying layer {#mgreac}
------------------------------------------

Nova outbursts on massive ONe(Mg) white dwarfs are expected to be energetic fast novae [@sst86; @Gehrz98; @Gehrz08; @Ili02]. The question we face is whether, in the absence of the enhancement by C, the overshoot mixing mechanism can generate such energetic outbursts by mixing the solar-abundance accreted matter with the underlying ONe(Mg) core. Examination of the energy generation rates of proton capture reactions $(p,\gamma)$ on C, O, Ne and Mg (Fig. \[fig:Qcapture\]) makes it evident that only Mg can compete with C in the range of temperatures relevant to nova runaways. Therefore, in spite of the fact that the abundance of Mg in the core sums up to only a few percent ([@GpGb01; @siess06]), the high capture rate might compensate for the low abundance and play an important role in the runaway. Furthermore, previous studies show that in the outer parts of the core, the parts important for our study, Mg is more abundant and can represent up to about $25\%$ ([@berro94]). Restricting ourselves to the reaction network that includes only 15 elements, we assume, as a demonstration, an artificial case of a homogeneous underlying layer with only one isotope (Mg).
For this homogeneous layer model we replaced the energy generation rate of proton capture by O with the values of proton capture by Mg (Fig. \[fig:Qcapture\]). To check our simplified network, we present in Fig. \[fig:Mg-rates\] the rates computed by a 216-element network and our modified 15-element rates, both for a mixture of $90\%$ solar matter with $10\%$ Mg. The difference is much smaller than the difference, within the big network, between this mixture and a mixture of $90\%$ solar matter with $10\%$ C-O core matter. Therefore our simplified network is a good approximation with regard to energy production rates. ![Log of the total energy production rate for $\rho=1000.0$ gr/cc of a mixture that contains $90\%$ solar matter mixed with $10\%$ of Mg. Red: 15-element net used for the 2D model; Blue: the rates given by a full net of 216 elements. The black line gives the rates of $90\%$ solar matter mixed with $10\%$ of CO core matter (see text). []{data-label="fig:Mg-rates"}](Mg-rates.eps){width="84mm"} The crossing of the curves in Fig. \[fig:Mg-rates\] reveals a striking and very important feature: at temperatures below $1.3\times10^{8}$K a mixture of $10\%$ carbon and $90\%$ solar composition burns roughly ten times faster than a mixture of $10\%$ magnesium with the same solar-composition gas. Above that temperature the rates exchange places and magnesium enhancement dominates C-O enhancement. This emphasizes the importance of a proper treatment of the effects of the $^{24}$Mg abundance in explosive burning on ONe(Mg) white dwarfs. In a future work, we intend to study more realistic models with the inclusion of a detailed reaction network. In Fig. \[Qmagnesium\] we present the total burning rates of our toy model, together with the rates of the previous models. As expected, the enhancement of the burning in this toy model, relative to the 1D model, is indeed observed.
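The crossing of the C and Mg enhancement curves can be encoded schematically. A toy sketch (not the reaction network itself), with the crossing temperature taken from the text:

```python
T_CROSS = 1.3e8  # K: below this, a 10% C admixture burns ~10x faster than 10% Mg

def dominant_enhancer(temperature_k):
    """Schematic: which admixture gives the stronger proton-capture enhancement."""
    return "C" if temperature_k < T_CROSS else "Mg"

print(dominant_enhancer(9.0e7))  # C  -> slow early phase of the Mg toy model
print(dominant_enhancer(2.0e8))  # Mg -> fast late runaway, similar to the C-O case
```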
However, the development of the runaway, although faster than in the underlying O model, is still much slower than the rise of the C-O model. This result is easily understood from the discussion above, as the initial burning temperature in the model is only $9\times10^7 {\rm K}$. At that temperature, the energy generation rate for proton capture on Mg is lower by almost three orders of magnitude than that for proton capture on C. The rates are about the same when the temperature is $1.3\times10^8 {\rm K}$, and from there on the magnesium capture rate is much higher. In order to demonstrate that this is indeed the case, we calculated another 2D magnesium model in which the initial maximal 1D temperature was $1.125\times10^8 {\rm K}$. The rise time of this model is very short, even shorter than the rise time of the C-O model. One should regard these two simulations as two phases of one process: slow and fast. The maximal achieved temperature is $2.45\times 10^8$K and the maximal achieved burning rate is $1000.0\times 10^{42} erg/sec$, similar to the C-O case. ![Log of the total energy production rate for all the magnesium models compared to the CO model, the 1D model and the oxygen model. The model with initial temperature higher than the default temperature of $ 9\times10^7 K $ is not shifted in time (see text). []{data-label="Qmagnesium"}](v-q-magnesium.eps){width="84mm"} In order to better understand the convective flow for the case with underlying Mg, we generated color maps of the absolute value of the velocity (speed) in the 2D models at different times along the development of the runaway (Fig. \[fig:flow-mg\]). The two Mg cases show extremely different behavior. In the first case (model m12kk in Table \[tab:models\]), the burning rate is low and grows mildly with time. The convective velocities converge to a value of a few $10^{6}$ cm/sec and the cell size is only a bit larger than a scale height.
In the second case (model m12jj in Table \[tab:models\]), the burning rate is high and grows rapidly with time. The convective velocities increase with time up to a value of a few $10^{7}$ cm/sec. The convective cells in the radial direction converge to a structure of a few scale heights. In accordance with our previous cases, the magnesium toy model dredges up substantial amounts of matter from the core to the envelope. There is a one-to-one correlation with the convective velocities. The amount of mixing at the slow initial stages (model m12kk) is small and tends to converge to a few percent. The amount of mixing at the late fast stages grows rapidly with time (Fig. \[Xmagnesium\]). We present here only the general trend; detailed results will be presented in a forthcoming study.

Conclusions {#conclu}
===========

We present here, for the first time, detailed 2D modeling of nova eruptions for a range of possible compositions beneath the accreted hydrogen layer. The main conclusion to be drawn from this study is that **significant enrichment (around $30\%$) of the ejected layer, by the convective dredge-up mechanism, is a common feature of the entire set of models, regardless of the composition of the accreting white dwarf**. On the other hand, the burning rates, and therefore the time scales of the runaway, depend strongly on the composition of the underlying layers. There is also a one-to-one correlation between the burning rate, the velocities in the convective flow, and the amount of temporal mixing. Therefore, second-order differences in the final enrichment are expected to depend on the underlying composition. Specific results for each case are as follows:

a\) Since the energy generation rate for the capture of protons by C is high over the entire temperature range prevailing both at the ignition of the runaway and during the runaway itself, the underlying carbon layer accelerates the ignition and gives rise to C-O enrichment in the range of the observed amounts.

b\) For the densities and temperatures prevailing in nova outbursts, helium is an inert isotope. Therefore, it does not play any role in the enhancement of the runaway. Nevertheless, we demonstrate that once the bottom of the envelope is convective, the shear flow induces substantial amounts of mixing with the underlying helium. The eruption in these cases is milder, with a lower burning rate. For recurrent novae, where the timescales are too short for the diffusion process to play a significant role, the observed helium enrichment favors the underlying convection mechanism as the dominant mixing mechanism. Future work dealing with more realistic core masses (1.35-1.4 solar masses) for recurrent novae will give better quantitative predictions that will enable us to confront our results with observational data.

c\) The energy generation rate for the capture of a proton by O is much lower than that for capture by C over the entire temperature range prevailing at the ignition of the runaway and during the runaway itself. Underlying oxygen, whenever it is present, is thus expected to make only a minor contribution to the enhancement of the runaway. As a result, the time scale of the runaway in this case is much longer than that of the C-O case. Still, the final enrichment of the ejecta is above $40\%$ (Fig. \[Xoxygen\]). The energy generation rate for the capture of a proton by Ne is even lower than the capture rate by O. We therefore expect Ne to make, again, only a minor contribution to the enhancement of the runaway, but with substantial mixing.
d\) Nova outbursts on massive ONe(Mg) white dwarfs are expected to be energetic fast novae. In this survey we show that, for the range of temperatures relevant for the nova runaway, the only isotope that can compete with C as a source of burning enhancement by overshoot mixing is Mg. From our demonstrative toy model, we can speculate that even small amounts of Mg present at the late stages of the runaway can substantially enhance the burning rate, leading to a faster runaway with a significant amount of mixing. The relationship between the amount of Mg in the ONe(Mg) core, the steepness of the runaway, and the amount of mixing in this case is left to future studies.

Acknowledgments
===============

We thank the referee for his comments, which helped us clarify our arguments in the revised version of the paper. Ami Glasner wants to thank the Department of Astronomy and Astrophysics at the University of Chicago for the kind hospitality during his visit to Chicago, where part of this work was done. This work is supported in part at the University of Chicago by the National Science Foundation under Grant PHY 02-16783 for the Frontier Center "Joint Institute for Nuclear Astrophysics" (JINA), and in part at the Argonne National Laboratory by the U.S. Department of Energy, Office of Nuclear Physics, under contract DE-AC02-06CH11357.

Alexakis, A., Young, Y.-N. and Rosner, R. 2002, [*Phys.Rev.E*]{}, 65, 026313.

Alexakis, A., Calder, A.C., Heger, A., Brown, E.F., Dursi, L.J., Truran, J.W., Rosner, R., Lamb, D.Q., Timmes, F.X., Fryxell, B., Zingale, M., Ricker, P.M., & Olson, K. 2004, ApJ, 602, 931

Anders, E. & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197

Anupama, G.C. & Dewagan, G.C. 2000, ApJ, 119, 1359

Calder, A.C., Alexakis, A., Dursi, L.J., Rosner, R., Truran, J.W., Fryxell, B., Ricker, P., Zingale, M., Olson, K., Timmes, F.X. & MacNeice 2002, in [*Classical Nova Explosions*]{}, ed. M. Hernanz & J. Jose (AIP conference proceedings; Melville, New York), p.134.
Casanova, J., Jose, J., Garcia-Berro, E., Calder, A. & Shore, S.N. 2010, A&A, 513, L5

Casanova, J., Jose, J., Garcia-Berro, E., Calder, A. & Shore, S.N. 2011a, A&A, 527, A5

Casanova, J., Jose, J., Garcia-Berro, E., Shore, S.N. & Calder, A. 2011b, Nature10520.

Diaz, M.P., Williams, R.E., Luna, G.J., Moraes, M. & Takeda, L. 2010, ApJ, 140, 1860

Domingues, I., Staniero, O., Isern, J., & Tornambe, A. & MacNeice 2002, in [*Classical Nova Explosions*]{}, ed. M. Hernanz & J. Jose (AIP conference proceedings; Melville, New York), p.57.

Durisen, R.H. 1977, ApJ, 213, 145

Eggleton, P.P. 1971, , 151, 351

Fujimoto, M.Y. 1988, A&A, 198, 163.

Fujimoto, M.Y. & Iben, I.Jr. 1992, ApJ, 399, 646

Garcia-Berro, E. & Iben, I. 1994, ApJ, 434, 306

Garcia-Berro, E., Gil-Pons, P., & Truran, J.W. & MacNeice 2002, in [*Classical Nova Explosions*]{}, ed. M. Hernanz & J. Jose (AIP conference proceedings; Melville, New York), p.62.

Gehrz, R.D., Truran, J.W., Williams, R.E. & Starrfield, S. 1998, PASP, 110, 3

Gehrz, R.D., Woodward, C.E., Helton, L.A., Polomski, E.F., Hayward, T.L., Houck, J.R., Evans, A., Krauter, J., Shore, S.N., Starrfield, S., Truran, J.W., Schwarz, G.J. & Wagner, R.M. 2008, ApJ, 672, 1167

Gil-Pons, P. & Garcia-Berro, E. 2001, A&A, 375, 87.

Glasner, S.A. & Livne, E. 1995, , 445, L149

Glasner, S.A., Livne, E., & Truran, J.W. 1997, ApJ, 475, 754

Glasner, S.A., Livne, E., & Truran, J.W. 2005, , 625, 347

Glasner, S.A., Livne, E., & Truran, J.W. 2007, , 665, 1321

Hernanz, M. & Jose, J. 2008, New Astronomy Reviews, 52, 386.

Iben, I.Jr., Fujimoto, M.Y., & MacDonald, J. 1991, ApJ, 375, L27

Iben, I.Jr., Fujimoto, M.Y., & MacDonald, J. 1992, ApJ, 388, 521

Iliadis, C., Champagne, A., Jose, J., Starrfield, S., & Tupper, P. 2002, ApJS, 142, 105

Kercek, Hillebrandt & Truran 1998, , 337, 379

Kercek, Hillebrandt & Truran 1999, , 345, 831

Kippenhahn, R., & Thomas, H.-C. 1978, A&A, 63, 625

Kovetz, A., & Prialnik, D. 1985, ApJ, 291, 812.

Kutter, G.S. & Sparks, W.M. 1987, ApJ, 321, 386

Kutter, G.S. & Sparks, W.M. 1989, ApJ, 340, 985

Livio, M. & Truran, J.W. 1987, ApJ, 318, 316

Livne, E. 1993, ApJ, 412, 634

MacDonald, J. 1983, ApJ, 273, 289

Meakin, C.A., & Arnett, D. 2007, ApJ, 667, 448

Prialnik, D. & Kovetz, A. 1984, ApJ, 281, 367

Rosner, R., Alexakis, A., Young, Y.-N., Truran, J.W., and Hillebrandt, W. 2001, ApJ, 562, L177.

Shankar, A., Arnett, D., & Fryxell, B.A. 1992, ApJ, 394, L13

Shankar, A. & Arnett, D. 1994, ApJ, 433, 216

Shore, S.N. 1992, [*Astrophysical Hydrodynamics*]{}, (Wiley: Darmstadt).

Siess, L. 2006, , 448, 231

Sparks, W.M. & Kutter, G.S. 1987, ApJ, 321, 394

Starrfield, S., Sparks, W.M. & Truran, J.W. 1986, , 303, L5

Truran, J.W. & Livio, M. 1986, ApJ, 308, 721.

Truran, J.W. 1990, in [*Physics of Classical Novae*]{}, ed. A. Cassatella & R. Viotti (Berlin: Springer), 373

Webbink, R.F., Livio, M., Truran, J.W. & Orio, M. 1987, ApJ, 314, 653

Woosley, S.E. 1986, [*Nucleosynthesis and Chemical Evolution*]{}, ed. B. Hauck, A. Maeder & G. Magnet (Geneva Observatory, Sauverny, Switzerland)

\[lastpage\]
--- abstract: 'We analyze heat and charge transport through a single-level quantum dot coupled to two BCS superconductors at different temperatures to first order in the tunnel coupling. In order to describe the system theoretically, we extend a real-time diagrammatic technique that allows us to capture the interplay between superconducting correlations, strong Coulomb interactions and nonequilibrium physics. We find that a thermoelectric effect can arise due to the superconducting proximity effect on the dot. In the nonlinear regime, the thermoelectric current can also flow at the particle-hole symmetric point due to a level renormalization caused by virtual tunneling between the dot and the leads. The heat current through the quantum dot is sensitive to the superconducting phase difference. In the nonlinear regime, the system can act as a thermal diode.' author: - Mathias Kamp - Björn Sothmann title: 'Phase-dependent heat and charge transport through superconductor-quantum dot hybrids' --- \[sec:intro\]Introduction ========================= Understanding, manipulating and managing heat flows at the nanoscale is of crucial importance for modern electronics where Joule heating constitutes a major nuisance in the operation of computer chips. Heat transport can occur via electrons [@giazotto_opportunities_2006], phonons [@li_colloquium:_2012] and photons [@meschke_single-mode_2006; @ronzani_tunable_2018]. A promising direction to achieve control over thermal transport by electrons is phase-coherent caloritronics [@martinez-perez_coherent_2014; @fornieri_towards_2017] in superconducting circuits. 
Phase-coherent caloritronics is based on the observation that not only does the charge current depend on the phase difference across the junction via the Josephson effect [@josephson_possible_1962] but the heat current is also sensitive to the phase difference [@maki_entropy_1965; @maki_entropy_1966; @guttman_phase-dependent_1997; @guttman_thermoelectric_1997; @guttman_interference_1998; @zhao_phase_2003; @zhao_heat_2004]. The phase-dependent contribution to the heat current arises from Andreev-like processes where an incident electronlike quasiparticle above the superconducting gap is reflected as a holelike quasiparticle and vice versa. Recently, phase-coherent heat transport in superconducting circuits has been observed experimentally [@giazotto_josephson_2012]. The possibility to control heat currents via magnetic fields has led to a number of proposals for phase-coherent caloritronic devices such as heat interferometers [@giazotto_phase-controlled_2012; @martinez-perez_fully_2013] and diffractors [@giazotto_coherent_2013; @guarcello_coherent_2016], thermal rectifiers [@giazotto_thermal_2013; @martinez-perez_efficient_2013; @fornieri_normal_2014; @fornieri_electronic_2015], transistors [@giazotto_proposal_2014; @fornieri_negative_2016], switches [@sothmann_high-efficiency_2017] and circulators [@hwang_phase-coherent_2018], thermometers [@giazotto_ferromagnetic-insulator-based_2015; @guarcello_non-linear_2018] as well as heat engines [@marchegiani_self-oscillating_2016; @hofer_autonomous_2016; @vischi_coherent_2018] and refrigerators [@solinas_microwave_2016; @marchegiani_-chip_2017]. Experimentally, heat interferometers [@giazotto_josephson_2012; @fornieri_nanoscale_2016; @fornieri_0_2017], the quantum diffraction of heat [@martinez-perez_quantum_2014], thermal diodes [@martinez-perez_rectification_2015] and a thermal router [@timossi_phase-tunable_2018] have been realized so far.
Apart from potential applications in caloritronics and thermal logic [@paolucci_phase-tunable_2018], phase-coherent heat transport can also serve as a diagnostic tool that allows one, e.g., to probe the existence of topological Andreev bound states [@sothmann_fingerprint_2016]. ![\[fig:model\]Schematic sketch of our setup. A single-level quantum dot is tunnel coupled to two superconducting electrodes at temperatures $T_\text{L}$ and $T_\text{R}$.](Systemsetup.pdf){width="\columnwidth"} So far, the theoretical and experimental investigation of phase-coherent heat transport has been restricted to systems such as tunnel barriers and point contacts where the effects of electron-electron interactions can be neglected. While such setups already offer a lot of interesting physics, this raises the question of how Coulomb interactions can affect phase-dependent heat currents. In this paper, we address this important question by analyzing phase-coherent heat and charge transport through a thermally biased hybrid structure consisting of a strongly interacting single-level quantum dot tunnel coupled to superconducting electrodes, cf. Fig. \[fig:model\]. Superconductor-quantum dot hybrids have received a lot of attention, see Refs. [@de_franceschi_hybrid_2010] and [@martin-rodero_josephson_2011] for recent reviews on experiments and theory, respectively.
In particular, there are investigations of the Josephson effect through quantum dots [@van_dam_supercurrent_2006; @jarillo-herrero_quantum_2006; @jorgensen_critical_2007; @baba_superconducting_2015; @szombati_josephson_2016; @probst_signatures_2016], multiple Andreev reflections [@levy_yeyati_resonant_1997; @buitelaar_multiple_2003; @cuevas_full_2003; @nilsson_supercurrent_2011; @rentrop_nonequilibrium_2014; @hwang_hybrid_2016], the interplay between superconducting correlations and the Kondo effect [@clerk_loss_2000; @buitelaar_quantum_2002; @avishai_superconductor-quantum_2003; @eichler_even-odd_2007; @lopez_josephson_2007; @karrasch_josephson_2008], the generation of unconventional superconducting correlations in quantum dots [@sothmann_unconventional_2014; @kashuba_majorana_2017; @weiss_odd-triplet_2017; @hwang_odd-frequency_2017], Cooper pair splitting [@recher_andreev_2001; @hofstetter_cooper_2009; @herrmann_carbon_2010; @hofstetter_finite-bias_2011; @das_high-efficiency_2012; @schindele_near-unity_2012] and the generation of Majorana fermions [@leijnse_parity_2012; @sothmann_fractional_2013; @fulga_adaptive_2013; @deng_majorana_2016]. Thermoelectric effects in superconductor-quantum dot hybrids have been studied in the absence of Coulomb interactions [@kleeorin_large_2016]. Here, we use a superconductor-quantum dot hybrid as a playground to investigate the interplay between superconductivity, strong Coulomb interactions and thermal nonequilibrium. Compared to tunnel junctions, quantum dots offer additional tunability of their level position by gate voltages. We extend a real-time diagrammatic approach [@konig_zero-bias_1996; @konig_resonant_1996; @schoeller_transport_1997; @konig_quantum_1999; @governale_real-time_2008; @governale_erratum:_2008] to describe thermally-driven transport which allows us to treat Coulomb interactions exactly and to perform a systematic expansion in the tunnel coupling between the dot and the superconducting leads. 
It allows for a treatment of superconducting correlations induced on the dot via the proximity effect and captures renormalization effects due to virtual tunneling which affect transport already in lowest order of perturbation theory. We evaluate charge and heat currents both in linear and nonlinear response. In particular, we find a thermoelectric effect in the vicinity of the particle-hole symmetric point which arises from the proximity effect. Furthermore, our device can act as an efficient thermal diode in nonlinear response. The paper is organized as follows. In Sec. \[sec:model\], we introduce the model of our setup. The real-time diagrammatic transport theory used to investigate transport is introduced in Sec. \[sec:method\]. We present the results of our analysis in Sec. \[ssec:linear\] for the linear and in Sec. \[ssec:nonlinear\] for the nonlinear transport regime. Conclusions are drawn in Sec. \[sec:conclusion\]. \[sec:model\]Model ================== We consider a single-level quantum dot weakly tunnel coupled to two conventional superconducting electrodes. Both superconductors are kept at the same chemical potential $\mu=0$ but at different temperatures $T_\text{L}$ and $T_\text{R}$ resulting in a nonequilibrium situation. The system is described by the total Hamiltonian $$H=\sum_{\eta=\text{L,R}}\left(H_\eta+H_{\text{tun},\eta}\right)+H_\text{dot},$$ where $\eta$ denotes the left (L) and right (R) superconductor. 
The superconducting leads are characterized by the mean-field BCS Hamiltonian $$\label{eq:BCS} H_\eta=\sum_{{\mathbf{k}}\sigma} \varepsilon_{\eta{\mathbf{k}}} a^\dagger_{\eta{\mathbf{k}}\sigma} a_{\eta{\mathbf{k}}\sigma}+\Delta_\eta e^{i\phi_\eta }\sum_{{\mathbf{k}}}a_{\eta -{\mathbf{k}}{\uparrow}}a_{\eta {\mathbf{k}}{\downarrow}}+\text{H.c.},$$ where $a_{\eta{\mathbf{k}}\sigma}^\dagger$ ($a_{\eta{\mathbf{k}}\sigma}$) denotes the creation (annihilation) operator of an electron with momentum ${\mathbf{k}}$, spin $\sigma$ and kinetic energy $\varepsilon_{\eta {\mathbf{k}}}$ in lead $\eta$. The second term on the right-hand side of Eq. describes the BCS pair interaction on a mean-field level. The two superconducting order parameters are characterized by their absolute value $\Delta_\eta$ and their phase $\phi_\eta$. The temperature dependence of $\Delta_\eta$ is determined by the solution of the self-consistency equation for the order parameter which can be found only numerically. However, it can be approximated with an accuracy of better than 2% by $$\Delta_\eta(T_\eta)=\Delta_{0} \tanh \left(1.74 \sqrt{\frac{T_{c}}{T_\eta}-1}\right),$$ in the whole temperature range from 0 to the critical temperature $T_{c}$. The latter is connected to the superconducting order parameter at zero temperature via ${k_\text{B}}T_{c}\approx 0.568 \Delta_{0}$. The single-level quantum dot is described by the Hamiltonian $$H_\text{dot}=\sum_\sigma \varepsilon c_\sigma^\dagger c_\sigma+U c_{\uparrow}^\dagger c_{\uparrow}c_{\downarrow}^\dagger c_{\downarrow}.$$ While the first term describes the energy of the dot level $\varepsilon$ that can be tuned by applying a gate voltage, the second term denotes the Coulomb interaction that has to be supplied in order to occupy the dot with two electrons at the same time. We remark that the dot spectrum is particle-hole symmetric at $\varepsilon=-U/2$. 
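The interpolation formula for $\Delta_\eta(T_\eta)$ given above is straightforward to evaluate. The following numerical sketch is our own illustration (not part of the original work); it works in units where $\Delta_0=1$, so that ${k_\text{B}}T_c\approx0.568$:

```python
import math

def gap(T, Delta0=1.0, kTc=0.568):
    """BCS gap interpolation Δ(T) = Δ0 tanh(1.74 sqrt(Tc/T - 1)),
    accurate to better than 2%; kB*Tc ≈ 0.568*Δ0."""
    if T >= kTc:
        return 0.0  # normal state at and above the critical temperature
    return Delta0 * math.tanh(1.74 * math.sqrt(kTc / T - 1.0))

print(gap(0.01))            # ≈ 1.0: gap fully open far below Tc
print(gap(0.568))           # 0.0: gap closes at Tc
print(gap(0.3) > gap(0.5))  # True: gap decreases monotonically with T
```

The two limits reproduce the expected behavior: the full zero-temperature gap for $T\ll T_c$ and a vanishing gap at $T_c$.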
For later convenience, we introduce the detuning $\delta=2\varepsilon+U$ from the particle-hole symmetric point. The tunneling Hamiltonian which couples the dot to the superconducting leads is given by $$H_\text{tun}=\sum_{\eta {\mathbf{k}}\sigma}t_\eta a_{\eta{\mathbf{k}}\sigma}^\dagger c_\sigma+\text{H.c.}$$ Here, $t_\eta$ denotes a tunnel matrix element which we assume to be energy and momentum independent. It is connected to the tunnel coupling strength $\Gamma_\eta=2\pi|t_\eta|^2\rho_\eta$ where $\rho_\eta$ denotes the density of states of lead $\eta$ in the normal state. \[sec:method\]Real-time diagrammatic transport theory ===================================================== In order to describe transport through the quantum-dot setup, we make use of a real-time diagrammatic technique [@konig_zero-bias_1996; @konig_resonant_1996; @schoeller_transport_1997; @konig_quantum_1999] for systems with superconducting leads with a finite gap [@governale_real-time_2008; @governale_erratum:_2008]. It allows us to treat nonequilibrium physics, superconducting correlations and strong Coulomb interactions exactly while performing a systematic expansion in the dot-lead couplings. In the following, we are going to extend this diagrammatic framework to allow for the calculation of thermally-driven charge and heat currents through quantum dot-superconductor hybrids on equal footing. The central idea of the diagrammatic approach is to integrate out the noninteracting leads and to describe the remaining quantum dot system by its reduced density matrix. The reduced density matrix $\rho_\text{red}$ has matrix elements $P^{\chi_1}_{\chi_2}={\langle \chi_1|}\rho_\text{red}{|\chi_2\rangle}$. For the system under investigation, the nonvanishing density matrix elements are given by the probability to find the quantum dot empty, $P_0$, occupied with a single electron with spin $\sigma$, $P_\sigma$, or doubly occupied, $P_d$. 
Furthermore, the coupling to the superconductors gives rise to finite off-diagonal density matrix elements $P^d_0$ and $P^0_d$ that describe the coherent superposition of the dot being empty and occupied with two electrons. The generation of these coherent superpositions is a hallmark of the superconducting proximity effect on the quantum dot. The time evolution of the reduced density matrix is given by the generalized master equation which in the stationary limit reads $$0=-i(E_{\chi_1}-E_{\chi_2})P^{\chi_1}_{\chi_2}+\sum_{\chi_1'\chi_2'}W^{\chi_1\chi_1'}_{\chi_2\chi_2'}P^{\chi_1'}_{\chi_2'},$$ where $E_\chi$ is the energy of the many-body dot state $\chi$. The first term describes the coherent evolution of the dot states. The second term arises due to the dissipative coupling to the superconductors. The generalized transition rates $W^{\chi_1\chi_1'}_{\chi_2\chi_2'}$ are obtained from irreducible self-energy diagrams of the dot propagator on the Keldysh contour [@governale_real-time_2008; @governale_erratum:_2008], cf. also Appendix \[app:RTD\] for a detailed explanation of the connection between diagrams and physical processes. By expanding both the density matrix elements as well as the generalized transition rates up to first order in the tunnel couplings, we find that the coherent superpositions $P^0_d$ and $P^d_0$ are finite to lowest order in $\Gamma_\eta$ only if the empty and doubly occupied dot states are nearly degenerate, $\delta\lesssim\Gamma_\eta$ [@sothmann_influence_2010]. For this reason, we are going to restrict ourselves to the analysis of transport in the vicinity of the particle-hole symmetric point to first order in the tunnel coupling in the following. 
The generalized master equation can be brought into a physically intuitive form by introducing the probabilities to find the dot occupied with an even and odd number of electrons, $${\mathbf{P}}=\left(\begin{array}{c} P_\text{e}\\P_\text{o} \end{array}\right)=\left(\begin{array}{c} P_0+P_d \\ P_{\uparrow}+P_{\downarrow}\end{array}\right),$$ as well as a pseudospin degree of freedom that characterizes the coherences between empty and doubly occupied dot and, thus, the superconducting proximity effect on the quantum dot $$\begin{aligned} I_x&=\frac{P^0_d+P^d_0}{2},\\ I_y&=i\frac{P^0_d-P^d_0}{2},\\ I_z&=\frac{P_0-P_d}{2}.\end{aligned}$$ The generalized master equation can be decomposed into one set of equations that arises from the time evolution of the dot occupations and another set due to the pseudospin. The former is given by $$\label{eq:MEP} 0 =\sum_\eta \left[\left(\begin{array}{cc} -Z^-_\eta & Z^+_\eta \\ Z^-_\eta & -Z^+_\eta \end{array}\right){\mathbf{P}} + \left(\begin{array}{c} 4X^-_\eta \\ -4X^-_\eta \end{array}\right){\mathbf{I}}\cdot {\mathbf{n}}_\eta\right],$$ where $$X^\pm_\eta=\pm\frac{\Gamma_\eta}{\hbar}\frac{\Delta_\eta \Theta(U/2-\Delta_\eta)}{\sqrt{(U/2)^2-\Delta_\eta^2}}f_\eta(\pm U/2),$$ $$Z^\pm_\eta=\frac{\Gamma_\eta}{\hbar}\frac{U\Theta(U/2-\Delta_\eta)}{\sqrt{(U/2)^2-\Delta_\eta^2}}f_\eta(\pm U/2),$$ with the Fermi function $f_\eta(\omega)=[\exp(\omega/({k_\text{B}T}_\eta))+1]^{-1}$. ${\mathbf{n}}_\eta=(\cos\phi_\eta,\sin\phi_\eta,0)$ denotes a unit vector whose direction is determined by the phase of the superconducting order parameters. Interestingly, in Eq.  the dot occupations are coupled to the pseudospin degree of freedom. This is in direct analogy to the case of a quantum dot weakly coupled to ferromagnetic electrodes where the dot occupations are linked to the spin accumulation in the dot [@konig_interaction-driven_2003; @braun_theory_2004]. 
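To make the structure of the rates $X^\pm_\eta$ and $Z^\pm_\eta$ concrete, here is a small numerical sketch (our own illustration; the function names are ours, energies are in units of ${k_\text{B}}T$ and rates in units of $\Gamma/\hbar$). Note the role of the threshold factor $\Theta(U/2-\Delta_\eta)$: first-order quasiparticle transport is blocked unless the addition energy $U/2$ exceeds the gap.

```python
import math

def fermi(w, kT):
    return 1.0 / (math.exp(w / kT) + 1.0)

def rates(Gamma, U, Delta, kT, hbar=1.0):
    """First-order rates X^± and Z^± at the particle-hole symmetric point."""
    if U / 2 <= Delta:  # Θ(U/2 - Δ): the gap blocks first-order tunneling
        return {"Xm": 0.0, "Xp": 0.0, "Zm": 0.0, "Zp": 0.0}
    dos = 1.0 / math.sqrt((U / 2) ** 2 - Delta ** 2)  # BCS density-of-states factor
    pref = Gamma / hbar
    return {
        "Xm": -pref * Delta * dos * fermi(-U / 2, kT),
        "Xp": +pref * Delta * dos * fermi(+U / 2, kT),
        "Zm": pref * U * dos * fermi(-U / 2, kT),
        "Zp": pref * U * dos * fermi(+U / 2, kT),
    }

r = rates(Gamma=1.0, U=4.0, Delta=1.75, kT=1.0)
print(r["Zm"] > r["Zp"] > 0)      # True, since f(-U/2) > f(U/2)
print(abs(r["Xm"]) / r["Zm"])     # ≈ Δ/U = 0.4375: X and Z share all other factors
print(rates(1.0, 4.0, 2.5, 1.0))  # all zero: Δ > U/2 blocks transport
```

The ratio $|X^\pm_\eta/Z^\pm_\eta|=\Delta_\eta/U$ makes explicit that the anomalous ($X$) and normal ($Z$) rates differ only in the weight $\Delta_\eta$ versus $U$.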
The second set of equations is given by a Bloch-type equation for the pseudospin, $$ 0 =\left(\frac{d{\mathbf{I}}}{dt}\right)_\text{acc}-\frac{{\mathbf{I}}}{\tau_\text{rel}}+{\mathbf{I}}\times{\mathbf{B}}.$$ The first term, $$\left(\frac{d{\mathbf{I}}}{dt}\right)_\text{acc}=\sum_\eta \left(X^-_\eta P_\text{e}+X^+_\eta P_\text{o}\right){\mathbf{n}}_\eta,$$ describes the accumulation of pseudospin on the dot due to tunneling in and out of electrons. The second term characterizes the relaxation of the pseudospin due to electron tunneling on a time scale given by $\tau_\text{rel}^{-1}=\sum_\eta Z^-_\eta$. Finally, the last term gives rise to a precession of the pseudospin in an effective exchange field, $${\mathbf{B}}=B_\text{L}{\mathbf{n}}_\text{L}+B_\text{R}{\mathbf{n}}_\text{R}+\delta {\mathbf{e}}_z,$$ which arises from virtual charge fluctuations on the dot as well as from a detuning away from the particle-hole symmetric point. The exchange field contribution from the two leads is given by $$\label{eq:Bex} B_\eta=\frac{2\Gamma_\eta}{\pi\hbar}\int'd\omega\frac{\Delta_\eta\Theta(|\omega|-\Delta_\eta)}{\sqrt{\omega^2-\Delta_\eta^2}}\frac{f_\eta(\omega)}{\omega+U/2}\operatorname{sign}\omega,$$ where the prime indicates the principal value. The integral can be solved analytically as an infinite sum over Matsubara frequencies, see Appendix \[app:Bex\] for details. The interplay of pseudospin accumulation, pseudospin relaxation and pseudospin precession in the exchange field leads to a nontrivial pseudospin dynamics on the dot which acts back on the dot occupations via Eq. . It is this nontrivial pseudospin behavior that gives rise to interesting transport properties of the system under investigation. The charge on the quantum dot is related to the $z$ component of the pseudospin via $Q_\text{dot}=e(1-2I_z)$. 
This allows us to connect the time evolution of $I_z$ directly to the charge current flowing between the dot and lead $\eta$ via $$\label{eq:Ic} I^e_\eta=-2e(Z_\eta^-I_z-I_xB_{\eta,y}+I_yB_{\eta,x}).$$ We remark that the real-time diagrammatic approach conserves charge currents automatically. Therefore, we define $I^e=I^e_\text{L}=-I^e_\text{R}$ in the following. In analogy to the charge, we can relate the average dot energy to the probability of finding the dot with an odd occupation, $E_\text{dot}=-UP_\text{o}/2$, and derive the heat current between the dot and lead $\eta$, $$I^h_\eta=-\frac{U}{2}\left(Z^+_\eta P_\text{o}-Z^-_\eta P_\text{e}+4X^-_\eta {\mathbf{I}}\cdot{\mathbf{n}}_\eta\right).$$ We remark that in the absence of any bias voltage there is no Joule heating and, hence, heat and energy currents are equal to each other. This implies that heat currents are conserved such that we can define $I^h=I^h_\text{L}=-I^h_\text{R}$. \[sec:results\]Results ====================== In this section, we are going to analyze the charge and heat currents flowing through the system in response to an applied temperature bias. We will first focus on the linear-response regime and then turn to a discussion of nonlinear transport. \[ssec:linear\]Linear response ------------------------------ For the sake of concreteness, we consider a symmetric quantum-dot setup. To this end, we define the temperatures of the superconducting leads as $T_\eta=T+\Delta T_\eta$ with the reference temperature $T$ and the temperature bias $\Delta T_\text{L}=-\Delta T_\text{R}\equiv \Delta T/2$. The tunnel couplings are chosen equal, $\Gamma_\text{L}=\Gamma_\text{R} \equiv\Gamma/2$. Furthermore, we assume that the two superconducting order parameters have the same absolute value, $\Delta_\text{L}(T)=\Delta_\text{R}(T)=\Delta$, and set their phases as $\phi_\text{L}=-\phi_\text{R}\equiv\phi/2$.
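The stationary master equation together with the current formulas above can be solved numerically as a small linear system. The sketch below is our own illustration, not the authors' code: it uses units $\hbar={k_\text{B}}=e=1$, treats $\Delta$ as a fixed parameter (ignoring its temperature dependence), and takes the exchange-field amplitudes $B_\text{L}$, $B_\text{R}$ as inputs instead of evaluating the principal-value integral; all parameter values in the example are arbitrary.

```python
import math
import numpy as np

def fermi(w, kT):
    return 1.0 / (math.exp(w / kT) + 1.0)

def lead(Gamma, U, Delta, kT, B, phi, hbar=1.0):
    """Collect first-order rates and exchange-field data of one lead
    (requires U/2 > Delta; B is supplied by hand in this sketch)."""
    dos = 1.0 / math.sqrt((U / 2) ** 2 - Delta ** 2)
    g = Gamma / hbar
    return {"Xm": -g * Delta * dos * fermi(-U / 2, kT),
            "Xp": +g * Delta * dos * fermi(+U / 2, kT),
            "Zm": g * U * dos * fermi(-U / 2, kT),
            "Zp": g * U * dos * fermi(+U / 2, kT),
            "B": B,
            "n": np.array([math.cos(phi), math.sin(phi), 0.0])}

def steady_state(leads, delta, hbar=1.0):
    """Stationary x = (P_e, P_o, I_x, I_y, I_z) of the generalized master equation."""
    A = np.zeros((5, 5))
    b = np.zeros(5)
    B = sum(l["B"] * l["n"] for l in leads) + np.array([0.0, 0.0, delta / hbar])
    for l in leads:
        # occupations: 0 = sum_eta(-Z^- P_e + Z^+ P_o + 4 X^- I.n)
        A[0, 0] += -l["Zm"]
        A[0, 1] += l["Zp"]
        A[0, 2:] += 4 * l["Xm"] * l["n"]
        # pseudospin accumulation and relaxation
        A[2:, 0] += l["Xm"] * l["n"]
        A[2:, 1] += l["Xp"] * l["n"]
        A[2:, 2:] -= l["Zm"] * np.eye(3)
    # pseudospin precession I x B
    A[2:, 2:] += np.array([[0, B[2], -B[1]], [-B[2], 0, B[0]], [B[1], -B[0], 0]])
    A[1, :] = [1.0, 1.0, 0.0, 0.0, 0.0]  # normalization replaces the redundant row
    b[1] = 1.0
    return np.linalg.solve(A, b)

def currents(l, x, U, e=1.0):
    """Charge and heat current between the dot and one lead."""
    Pe, Po, Ix, Iy, Iz = x
    I = np.array([Ix, Iy, Iz])
    Ie = -2 * e * (l["Zm"] * Iz - Ix * l["B"] * l["n"][1] + Iy * l["B"] * l["n"][0])
    Ih = -(U / 2) * (l["Zp"] * Po - l["Zm"] * Pe + 4 * l["Xm"] * I.dot(l["n"]))
    return Ie, Ih

# symmetric junction in equilibrium: no charge or heat current flows
eq = [lead(0.5, 4.0, 1.0, 1.0, 0.3, +math.pi / 4),
      lead(0.5, 4.0, 1.0, 1.0, 0.3, -math.pi / 4)]
x = steady_state(eq, delta=0.0)
print(currents(eq[0], x, U=4.0))  # both ≈ 0 in equilibrium
```

The sketch reproduces two statements of the text: in thermal equilibrium both currents vanish, and in nonequilibrium the solver conserves charge and heat currents automatically, $I^e_\text{L}=-I^e_\text{R}$ and $I^h_\text{L}=-I^h_\text{R}$.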
To zeroth order in $\Delta T$, i.e., in thermal equilibrium, the occupation probabilities of the dot are given by Boltzmann factors $P_\chi^{(0)}\propto e^{-E_\chi/{k_\text{B}T}}$. At the same time, the pseudospin accumulation on the dot vanishes exactly. In consequence, no charge or heat current flows through the system. Since we consider only tunnel events that are first order in the tunnel coupling, there is no supercurrent through the quantum dot [@governale_real-time_2008]. The latter would manifest itself as a phase-dependent equilibrium contribution to the charge current. It requires, however, the coherent transfer of Cooper pairs through the dot and, hence, higher-order tunnel processes. ![\[fig:Iclin\]Linear-response charge current $I^e$ as a function of (a) phase difference $\phi$ and (b) detuning $\delta$. Parameters are $U=4{k_\text{B}T}$ and $\Delta=1.75{k_\text{B}T}$.](Icharge_lin_phi.pdf "fig:"){width="\columnwidth"} ![](Icharge_lin_delta.pdf "fig:"){width="\columnwidth"} A finite temperature bias $\Delta T$ generates a finite pseudospin accumulation on the dot. To first order in $\Delta T$, the accumulation is along the direction ${\mathbf{n}}_\text{L}-{\mathbf{n}}_\text{R}$, i.e., a finite pseudospin component $I^{(1)}_y$ is generated due to nonequilibrium tunneling of electrons. The magnitude of the pseudospin accumulation is limited by the pseudospin relaxation term $-{\mathbf{I}}/\tau_\text{rel}$. In addition, the effective exchange field ${\mathbf{B}}$ gives rise to a precession of the accumulated pseudospin and leads to finite pseudospin components $I^{(1)}_x$ and $I^{(1)}_z$. According to Eq.
, the pseudospin accumulation leads to a finite charge current given by $$\label{eq:Ielin} I^e=-e\frac{2B_0 X_1^-Z_0^-\sin^2\frac{\phi}{2}}{Z_0^-\frac{\delta}{\hbar}+2[(Z_0^-)^2+B_0^2\cos^2\frac{\phi}{2}]\tan\beta}\frac{\Delta T}{T}.$$ Here, we introduced the expansions $$\begin{aligned} X^\pm_\eta&=X^\pm_0+X^\pm_1\frac{\Delta T_\eta}{T}+\mathcal O(\Delta T_\eta^2),\\ Z^\pm_\eta&=Z^\pm_0+Z^\pm_1\frac{\Delta T_\eta}{T}+\mathcal O(\Delta T_\eta^2),\\ B_\eta&=B_0+B_1\frac{\Delta T_\eta}{T}+\mathcal O(\Delta T_\eta^2),\end{aligned}$$ as well as the angle $\beta=\arctan (I^{(1)}_y/I^{(1)}_x)$ which can be written as $$\tan\beta=\frac{2\hbar}{\delta Z^-_0}\left[(Z_0^-)^2-4(X_0^-)^2\cos^2\frac{\phi}{2}\right].$$ The thermoelectric charge current Eq.  arises in the vicinity of the particle-hole symmetric point. It relies crucially on the superconducting proximity effect and the resulting pseudospin accumulation on the dot because the Fermi functions in the generalized transition rates $W^{\chi_1\chi_1'}_{\chi_2\chi_2'}$ are evaluated at the particle-hole symmetric point $\delta=0$ and, therefore, do not lead to any thermoelectric effect. It is, thus, the pseudospin accumulation that introduces a nontrivial $\delta$ dependence into the master equation via the effective exchange field ${\mathbf{B}}$. In consequence, the thermoelectric current vanishes for $\Delta\to0$, i.e., in the absence of superconductivity in the leads. In Fig. \[fig:Iclin\] (a), the charge current is shown as a function of the phase difference $\phi$. At zero phase difference, the charge current vanishes independently of the detuning $\delta$ because there is no pseudospin accumulation on the quantum dot. In contrast, at $\phi=\pi$ the charge current becomes maximal due to the strong pseudospin accumulation on the dot. Figure \[fig:Iclin\] (b) shows the charge current as a function of the detuning $\delta$. For $\delta=0$ the charge current vanishes due to particle-hole symmetry. 
For positive (negative) values of the detuning, the charge current takes positive (negative) values, indicating electron (hole) transport. The maximal current occurs for a phase difference of $\phi=\pi$ and detuning $\delta=\pm2\hbar Z^-_0$ and takes the value $I^e=-(e B_0 X_1^-\Delta T)/(2Z_0^- T)$. The maximum current is exponentially suppressed in $U/({k_\text{B}T})$ due to the requirement of thermally excited quasiparticles. At the same time, it is *not* enhanced by the divergence of the superconducting density of states close to the gap. For large detunings, the strong exchange field along the $z$ direction averages out the pseudospin accumulation along the $x$ and $y$ direction. As a consequence, the charge current tends to zero. ![\[fig:Ihline\]Linear-response heat current $I^h$ as a function of (a) phase difference $\phi$ and (b) detuning $\delta$. Parameters as in Fig. \[fig:Iclin\].](Iheat_lin_phi.pdf "fig:"){width="\columnwidth"} ![](Iheat_lin_delta.pdf "fig:"){width="\columnwidth"} The heat current driven by a finite temperature bias $\Delta T$ is given by $$I^h=-\frac{U}{2}\left(Z^+_1+4I^{(1)}_yX_0^-\sin\frac{\phi}{2}\right)\frac{\Delta T}{T}.$$ It consists of two contributions. The first one is independent of the phase difference $\phi$ and depends only on the tunnel coupling $\Gamma$, the Coulomb interaction $U$ and the superconducting order parameter $\Delta$. In contrast, the second contribution is sensitive to the phase difference $\phi$ and, thus, gives rise to a phase-coherent flow of heat which arises from the superconducting proximity effect on the dot. In consequence, it vanishes in the limit $\Delta\to0$.
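The stated location and height of the charge-current maximum can be checked directly from the closed-form expression for $I^e$. The sketch below is our own illustration with arbitrarily chosen rate values (in units of $\Gamma/\hbar$, with $\hbar=1$); it scans the detuning at $\phi=\pi$:

```python
import math

def Ie_closed_form(delta, phi, Z0, X0, B0, X1, dT_over_T, e=1.0, hbar=1.0):
    """Linear-response charge current I^e as a function of detuning and phase."""
    c2 = math.cos(phi / 2) ** 2
    tanb = (2 * hbar / (delta * Z0)) * (Z0 ** 2 - 4 * X0 ** 2 * c2)
    denom = Z0 * delta / hbar + 2 * (Z0 ** 2 + B0 ** 2 * c2) * tanb
    return -e * 2 * B0 * X1 * Z0 * math.sin(phi / 2) ** 2 / denom * dT_over_T

Z0, X0, B0, X1 = 1.0, 0.3, 0.5, 0.2  # arbitrary illustrative rate values

# scan the detuning at phi = pi; the maximum should sit at delta = 2*hbar*Z0
deltas = [0.01 * k for k in range(1, 1000)]
best = max(deltas, key=lambda d: abs(Ie_closed_form(d, math.pi, Z0, X0, B0, X1, 0.1)))
print(best)                                              # ≈ 2.0 = 2*hbar*Z0
print(Ie_closed_form(best, math.pi, Z0, X0, B0, X1, 0.1))
print(-B0 * X1 * 0.1 / (2 * Z0))                         # peak value -e*B0*X1*dT/(2*Z0*T)
```

At $\phi=\pi$ the denominator reduces to $Z_0^-\delta/\hbar+4\hbar (Z_0^-)^3/\delta$, which is minimal at $\delta=2\hbar Z_0^-$ and there equals $4(Z_0^-)^2$, reproducing the peak value quoted in the text.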
Interestingly, the phase-dependent part of the heat current is proportional to $I^{(1)}_y$, i.e., it provides in principle direct information about the pseudospin accumulation on the dot. We remark that, just like the charge current, the heat current is also exponentially suppressed in $U/{k_\text{B}T}$. At the same time, however, it is enhanced by the increased superconducting density of states close to the gap. Hence, for this system, heat currents in units of $\Gamma U/\hbar$ tend to be much larger than charge currents in units of $e\Gamma/\hbar$. The phase dependence of the heat current is shown in Fig. \[fig:Ihline\](a). At $\phi=0$, the heat current is maximal and takes the value $I^h=-UZ_1^+ \Delta T/(2T)$. The minimal heat current occurs at $\phi=\pi$ since $X^-_0$ is negative while the pseudospin accumulation $I^{(1)}_y$ is positive. This $\phi$ dependence of the thermal conductance differs from that of a tunneling Josephson junction which exhibits a maximum of the thermal conductance at $\phi=\pi$ [@maki_entropy_1965; @maki_entropy_1966]. It rather resembles the phase-dependent thermal conductance of a transparent or topological Josephson junction which also has a minimum at $\phi=\pi$ [@zhao_phase_2003; @zhao_heat_2004; @sothmann_fingerprint_2016]. The ratio between the minimal and maximal heat current is given by $1-4\Delta^2/U^2$, i.e., it can be maximized by tuning the superconducting gap via the average temperature to be close to the Coulomb energy $U$. At the same time, this is also the regime where the relative modulation of the heat current becomes largest. The $\delta$ dependence of the heat current is depicted in Fig. \[fig:Ihline\](b). The largest modulation of the heat current occurs for $\delta=0$. In this case, the exchange field component along the $z$ axis vanishes which would otherwise reduce $I^{(1)}_y$ and thus the modulation amplitude.
For the same reason, the modulation of the thermal conductance is strongly suppressed for large detunings $\delta\gg\Gamma$. \[ssec:nonlinear\]Nonlinear response ------------------------------------ ![\[fig:Icnonlinear\]Charge current $I^e$ in units of $10^{-3}e\Gamma/\hbar$ as a function of phase difference $\phi$ and detuning $\delta$. The red line indicates a vanishing charge current. Parameters are $\Delta_0=2.32{k_\text{B}T}_\text{L}$, $U=5{k_\text{B}T}_\text{L}$, $\Gamma_\text{R}=4\Gamma_\text{L}$ and $T_\text{R}=T_\text{L}/2$.](Icharge_nonlin.pdf){width="\columnwidth"} We now turn to a discussion of transport in the nonlinear regime where a large temperature bias is applied across the system. The resulting charge current is shown as a function of phase difference and detuning in Fig. \[fig:Icnonlinear\]. Interestingly, for a phase difference $\phi\neq 0,\pi$, there is a finite charge current at the particle-hole symmetric point $\delta=0$. This finite thermoelectric effect can be understood as follows. If the dot is empty (doubly occupied), electrons can virtually tunnel on (off) the dot and back. These virtual tunneling events give rise to a renormalization of the dot level energies which is captured by the real-time diagrammatic technique. Importantly, in the presence of Coulomb interactions, the renormalization is different for the empty and doubly occupied state and, thus, can break particle-hole symmetry effectively. Hence, similarly to charge transport in quantum-dot spin valves [@konig_interaction-driven_2003; @braun_theory_2004; @hell_spin_2015], thermoelectric effects in superconductor-quantum dot hybrids constitute an important case where interaction-induced renormalization effects have a drastic impact on transport properties. Using Eq.  
the condition for a vanishing current can be cast into the compact form $$\frac{Z^-_\text{L}}{Z^-_\text{R}}=\frac{B_\text{L}\sin\left(\varphi-\frac{\phi}{2}\right)}{B_\text{R}\sin\left(\varphi+\frac{\phi}{2}\right)},$$ where $\varphi$ denotes the $\delta$-dependent angle between the pseudospin and the $x$ axis. It illustrates the interplay between pseudospin relaxation and precession that influences the nonlinear charge current in a nontrivial way and is indicated by the red line in Fig. \[fig:Icnonlinear\]. The nonlinear heat current behaves qualitatively similarly to the linear-response case, i.e., it exhibits a minimum at phase difference $\phi=\pi$ and detuning $\delta=0$. We remark that the amplitude of the heat current oscillation is reduced in the nonlinear regime because the heat current at $\phi=\pi$ increases more strongly with the temperature bias than the heat current at $\phi=0$. ![\[fig:heatdiode\]Nonlinear heat current as a function of the asymmetry $a$. Parameters are $\Delta_0=2.32 {k_\text{B}T}_\text{L}$, $U=4.64{k_\text{B}T}_\text{L}$, $\delta=10\Gamma$ and $T_\text{R}=0.1T_\text{L}$.](Iheat_asym.pdf){width="\columnwidth"} In the nonlinear regime, an asymmetric quantum-dot setup with $\Gamma_\text{L}\neq\Gamma_\text{R}$ can act as a thermal diode where the heat currents in the forward and backward direction are different. To discuss this effect in more detail, we introduce the asymmetry of tunnel couplings as $a=(\Gamma_\text{L}-\Gamma_\text{R})/(\Gamma_\text{L}+\Gamma_\text{R})$. The heat current in the forward direction is given by $I^h(a)$ while in the backward direction it is given by $I^h(-a)$. This definition is equivalent to denoting the forward (backward) direction as the one for which $T_\text{L}>T_\text{R}$ ($T_\text{L}<T_\text{R}$) at fixed tunnel couplings as long as $\Delta_{0,\text{L}}=\Delta_{0,\text{R}}$. Figure \[fig:heatdiode\] shows the nonlinear heat current as a function of the asymmetry parameter $a$.
For negative values of $a$, the heat current increases with $a$ while for positive values of $a$ it has a pronounced maximum. This nontrivial dependence on $a$ is most pronounced when the Coulomb energy is slightly larger than the superconducting gap. Since the heat current is not an even function of $a$, the system can rectify heat with a large heat current in the forward direction and a small heat current in the backward direction. For the chosen parameters we find that rectification efficiencies $I^h(a)/I^h(-a)\approx50$ can be achieved at the maximum forward heat current. In order to understand the mechanism behind the thermal rectification, let us first consider the case of a single-level quantum dot coupled to two normal metal electrodes. At the particle-hole symmetric point, the heat current depends on the tunnel couplings via $\Gamma_\text{L}\Gamma_\text{R}/(\Gamma_\text{L}+\Gamma_\text{R})$. Hence, the heat current is an even function of the asymmetry $a$, $I^h(+a)=I^h(-a)$, such that thermal rectification does not occur. For the superconducting system, the dependence of the heat current on the tunnel barriers is modified by the BCS density of states and is given by $$\frac{\Gamma_\text{L}\Gamma_\text{R}}{\Gamma_\text{L}\sqrt{U^2-4\Delta_\text{R}^2}+\Gamma_\text{R}\sqrt{U^2-4\Delta_\text{L}^2}}.$$ Hence, due to the temperature dependence of the superconducting gap the heat current exhibits a nontrivial dependence on the asymmetry $a$ which forms the basis of the heat rectification mechanism. In addition, the coherent pseudospin dynamics of the dot can enhance the thermal diode effect for a finite phase difference $\phi$. As can be seen in Fig. \[fig:heatdiode\] it can increase the rectification efficiency by nearly a factor of 4 if the tunnel coupling asymmetry is adjusted to maximize the heat current in the forward direction. 
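The role of the modified coupling combination can be made explicit with a few lines of code. This is our own sketch with illustrative parameter values only: the normal-metal combination $\Gamma_\text{L}\Gamma_\text{R}/(\Gamma_\text{L}+\Gamma_\text{R})$ is even in $a$, while the superconducting one with $\Delta_\text{L}\neq\Delta_\text{R}$ (hot versus cold lead) is not:

```python
import math

def coupling_normal(a, Gamma=1.0):
    """Normal-lead combination: even in a, hence no rectification."""
    GL, GR = Gamma * (1 + a) / 2, Gamma * (1 - a) / 2
    return GL * GR / (GL + GR)

def coupling_sc(a, U, DL, DR, Gamma=1.0):
    """Superconducting-lead combination, modified by the BCS density of
    states; DL != DR because the gaps depend on the lead temperatures."""
    GL, GR = Gamma * (1 + a) / 2, Gamma * (1 - a) / 2
    return GL * GR / (GL * math.sqrt(U ** 2 - 4 * DR ** 2)
                      + GR * math.sqrt(U ** 2 - 4 * DL ** 2))

a = 0.5
print(coupling_normal(a) == coupling_normal(-a))  # True: even in a
print(coupling_sc(a, U=5.0, DL=0.5, DR=1.5),
      coupling_sc(-a, U=5.0, DL=0.5, DR=1.5))     # differ: diode effect
```

Swapping $a\to-a$ leaves the normal-metal expression invariant but changes the superconducting one whenever $\Delta_\text{L}\neq\Delta_\text{R}$, which is the basis of the rectification mechanism described above.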
We remark that the enhancement of the rectification efficiency comes at the price of a slightly reduced heat current in the forward direction compared to the case $\phi=0$. \[sec:conclusion\]Conclusions ============================= We have analyzed thermally-driven transport through a superconductor-quantum dot hybrid in the sequential tunneling regime. We find that in linear response a finite thermoelectric effect can be generated close to the particle-hole symmetric point due to the superconducting proximity effect on the dot. In addition, there is a phase-dependent heat current through the quantum dot which in linear response is sensitive to the pseudospin accumulation in the dot, i.e., it provides direct access to information about the proximity effect on the dot. In nonlinear response, an interaction-induced level renormalization due to virtual tunneling gives rise to a finite thermoelectric response at the particle-hole symmetric point. Furthermore, the system can act as a thermal diode which is based on the temperature dependence of the superconducting gap as well as the superconducting proximity effect. Finally, we comment on potential experimental realizations of our proposal. For superconducting electrodes based on Al, the zero-temperature gap is given by , while the critical temperature is . Hence, the device should be operated at temperatures around while the Coulomb interaction should be of the order of . Assuming furthermore tunnel couplings of the order of , we estimate charge currents of the order of and heat currents of the order of which are both within the reach of present experimental technology [@fornieri_nanoscale_2016; @timossi_phase-tunable_2018; @dutta_thermal_2017]. We thank Fred Hucht for valuable discussions and Stephan Weiss and Sun-Yong Hwang for feedback on the manuscript.
We acknowledge financial support from the Ministry of Innovation NRW via the “Programm zur Förderung der Rückkehr des hochqualifizierten Forschungsnachwuchses aus dem Ausland”. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. \[app:RTD\]Real-time diagrammatics ================================== ![\[fig:diagrams\]Diagrams corresponding to different transitions in our setup. Horizontal lines describe the forward and backward propagation of the dot on the Keldysh contour. Dots indicate tunneling vertices. Dashed lines correspond to tunneling lines which arise from Wick contractions of reservoir operators. Due to the presence of superconducting leads there are both normal (a), (b) and anomalous (c), (d) tunneling lines.](Diagrams-1.pdf "fig:"){width=".2\textwidth"} ![](Diagrams-2.pdf "fig:"){width=".2\textwidth"} ![](Diagrams-3.pdf "fig:"){width=".2\textwidth"} ![](Diagrams-4.pdf "fig:"){width=".2\textwidth"} In this Appendix, we discuss the connection between real-time diagrams and the underlying physical processes. For details on the diagrammatic theory for superconducting systems we refer the reader to Ref. [@governale_erratum:_2008]. Real-time diagrams consist of horizontal lines describing the forward and backward propagation of the quantum dot along the Keldysh contour. Dots on the Keldysh contour correspond to tunneling vertices where an electron is created (annihilated) on the dot and annihilated (created) in one of the superconductors. When we integrate out the noninteracting lead degrees of freedom, pairs of tunneling vertices get connected by tunneling lines. In superconducting systems, two different types of tunneling lines arise: (i) normal lines, which connect a vertex that creates an electron on the dot with a vertex that annihilates a dot electron, and (ii) anomalous lines, where two vertices that both annihilate (create) a dot electron are connected. The anomalous lines arise because the BCS Hamiltonian is diagonalized by Bogoliubov quasiparticles which are superpositions of electrons and holes. Physically, they describe Andreev reflection processes where two electrons on the dot are created (annihilated) while a Cooper pair in the superconductor is annihilated (created). Let us now focus on first-order diagrams as depicted in Fig. \[fig:diagrams\]. Diagrams such as the one in Fig. \[fig:diagrams\](a) describe the transition between two diagonal density matrix elements. They correspond to the usual transition rates that are obtained via Fermi’s golden rule in conventional rate equation approaches. Diagrams such as those shown in Fig.
\[fig:diagrams\](b) yield the diagonal elements of the rate matrix. While in rate equation approaches they are typically set by hand to be $W_{\chi,\chi}^{\chi,\chi}=-\sum_{\chi'\neq\chi}W_{\chi',\chi}^{\chi',\chi}$ in order to ensure the conservation of probability, they appear naturally in the diagrammatic framework and, thus, provide an additional consistency check of the results. In superconducting systems, additional diagrams involving anomalous tunneling lines such as the ones depicted in Fig. \[fig:diagrams\](c) and (d) appear. They give rise to finite off-diagonal density matrix elements describing coherent superpositions of the dot being empty and doubly occupied and, hence, capture the superconducting proximity effect on the quantum dot. We emphasize that the proximity effect occurs already in first order in the tunnel coupling via these diagrams as they give rise to the coherent transfer of a Cooper pair between the dot and the superconductor. However, the proximity effect on the dot does not give rise to a supercurrent through the system in first order. A finite supercurrent relies on the coherent coupling between the two superconducting leads, which for our setup can occur only in second- and higher-order processes. This is different in the case of a simple superconducting tunnel junction, where a finite supercurrent occurs already in first order [@josephson_possible_1962]. Diagrams such as Fig. \[fig:diagrams\](d) give rise to a level renormalization of the empty and doubly occupied state relative to each other and, thus, contribute to the exchange field in Eq. .

\[app:Bex\]Exchange field integral
==================================

The integral appearing in the expression for the exchange field  can be solved analytically by performing the substitution $\omega\operatorname{sign}\omega=\Delta\cosh\alpha$. Subsequently, the residue theorem can be applied to the rectangle with corner points $(-R,R,R+2\pi i, -R+2\pi i)$, taking the limit $R\to\infty$.
While the contribution from the vertical edges vanishes, the top and bottom edges yield identical contributions. This allows us to express the exchange field integral as the infinite sum $$B_\eta=\sum_{n=0}^\infty 8\Gamma_\eta {k_\text{B}T}_\eta\frac{U}{[4(2n+1)^2\pi^2{k_\text{B}T}_\eta^2+U^2]}\frac{\Delta_\eta}{\sqrt{(2n+1)^2\pi^2{k_\text{B}T}_\eta^2+\Delta_\eta^2}}.$$ For our numerical results, we have evaluated the sum by taking into account the first 10,000 summands.

[****,  ()](\doibase 10.1103/RevModPhys.78.217) [****,  ()](\doibase 10.1103/RevModPhys.84.1045) [****,  ()](\doibase 10.1038/nature05276) [ ()](\doibase 10.1038/s41567-018-0199-4) [****,  ()](\doibase 10.1007/s10909-014-1132-6) [****,  ()](\doibase 10.1038/nnano.2017.204) [****,  ()](\doibase 10.1016/0031-9163(62)91369-0) [****,  ()](\doibase 10.1103/PhysRevLett.15.921) [****,  ()](\doibase 10.1103/PhysRevLett.16.258) [****,  ()](\doibase 10.1103/PhysRevB.55.3849) [****,  ()](\doibase 10.1103/PhysRevB.55.12691) [****,  ()](\doibase 10.1103/PhysRevB.57.2717) [****,  ()](\doibase 10.1103/PhysRevLett.91.077003) [****,  ()](\doibase 10.1103/PhysRevB.69.134503) [****,  ()](\doibase 10.1038/nature11702) [****,  ()](\doibase 10.1063/1.4750068) [****,  ()](\doibase 10.1063/1.4794412) [****,  ()](\doibase 10.1103/PhysRevB.88.094506) [****,  ()](\doibase 10.1103/PhysRevB.94.054522) [****,  ()](\doibase 10.1063/1.4846375) [****,  ()](\doibase 10.1063/1.4804550) [****,  ()](\doibase 10.1063/1.4875917) [****,  ()](\doibase 10.1063/1.4915899) [****,  ()](\doibase 10.1063/1.4893443) [****,  ()](\doibase 10.1103/PhysRevB.93.134508) [****,  ()](\doibase 10.1088/1367-2630/aa60d4) [****,  ()](\doibase 10.1103/PhysRevApplied.10.044062) [****,  ()](\doibase
10.1103/PhysRevApplied.4.044016) [ ()](http://arxiv.org/abs/1807.03186), [****,  ()](\doibase 10.1103/PhysRevApplied.6.054014) [****,  ()](\doibase 10.1103/PhysRevB.94.235420) [ ()](http://arxiv.org/abs/1806.01568),  [****,  ()](\doibase 10.1103/PhysRevB.93.224521) [****,  ()](\doibase 10.1209/0295-5075/124/48005) [****, ()](\doibase 10.1038/nnano.2015.281) [****,  ()](\doibase 10.1038/nnano.2017.25) [****,  ()](\doibase 10.1038/ncomms4579) [****,  ()](\doibase 10.1038/nnano.2015.11) [****,  ()](\doibase 10.1021/acs.nanolett.7b04906) [****,  ()](\doibase 10.1103/PhysRevApplied.10.024003) [****,  ()](\doibase 10.1103/PhysRevB.94.081407) [****,  ()](\doibase 10.1038/nnano.2010.173) [****,  ()](\doibase 10.1080/00018732.2011.624266) [****,  ()](\doibase 10.1038/nature05018) [****,  ()](\doibase 10.1038/nature04550) [****,  ()](\doibase 10.1021/nl071152w) [****,  ()](\doibase 10.1063/1.4936888) [****,  ()](\doibase 10.1038/nphys3742) [****,  ()](\doibase 10.1103/PhysRevB.94.155445) [****,  ()](\doibase 10.1103/PhysRevB.55.R6137) [****,  ()](\doibase 10.1103/PhysRevLett.91.057005) [****,  ()](\doibase 10.1103/PhysRevLett.91.187001) [****,  ()](\doibase 10.1021/nl203380w) [****,  ()](\doibase 10.1103/PhysRevB.89.235110) [****,  ()](\doibase 10.1088/1367-2630/18/9/093024) [****, ()](\doibase 10.1103/PhysRevB.61.9109) [****,  ()](\doibase 10.1103/PhysRevLett.89.256801) [****,  ()](\doibase 10.1103/PhysRevB.67.041301) [****,  ()](\doibase 10.1103/PhysRevLett.99.126602) [****,  ()](\doibase 10.1103/PhysRevB.75.045132) [****,  ()](\doibase 10.1103/PhysRevB.77.024517) [****,  ()](\doibase 10.1103/PhysRevB.90.220501) [****,  ()](\doibase 10.1103/PhysRevB.95.174516) [****,  ()](\doibase 10.1103/PhysRevB.96.064529) [****,  ()](\doibase 10.1103/PhysRevB.98.161408) [****,  ()](\doibase 10.1103/PhysRevB.63.165314) [****,  ()](\doibase 10.1038/nature08432) [****,  ()](\doibase 10.1103/PhysRevLett.104.026801) [****, ()](\doibase 10.1103/PhysRevLett.107.136801) [****,  ()](\doibase 
10.1038/ncomms2169) [****,  ()](\doibase 10.1103/PhysRevLett.109.157002) [****,  ()](\doibase 10.1103/PhysRevB.86.134528) [****,  ()](\doibase 10.1088/1367-2630/15/8/085018) [****,  ()](\doibase 10.1088/1367-2630/15/4/045020) [****,  ()](\doibase 10.1126/science.aaf3961) [****,  ()](\doibase 10.1038/srep35116) [****,  ()](\doibase 10.1103/PhysRevLett.76.1715) [****,  ()](\doibase 10.1103/PhysRevB.54.16820) @noop [**]{}, Habilitation thesis (, ) @noop [**]{} (, , ) [****,  ()](\doibase 10.1103/PhysRevB.77.134513) [****,  ()](\doibase 10.1103/PhysRevB.78.069902) [****,  ()](\doibase 10.1103/PhysRevB.82.205314) [****,  ()](\doibase 10.1103/PhysRevLett.90.166602) [****, ()](\doibase 10.1103/PhysRevB.70.195345) [****,  ()](\doibase 10.1103/PhysRevB.91.195404) [****,  ()](\doibase 10.1103/PhysRevLett.119.077701)
**MARKET DEPTH AND PRICE DYNAMICS: A NOTE**

FRANK H. WESTERHOFF

*University of Osnabrueck, Department of Economics* *Rolandstrasse 8, D-49069 Osnabrueck, Germany* *e-mail: fwesterho@oec.uni-osnabrueck.de*

Abstract: This note explores the consequences of nonlinear price impact functions for price dynamics within the chartist-fundamentalist framework. Price impact functions may be nonlinear with respect to trading volume. As indicated by recent empirical studies, a given transaction may cause a large (small) price change if market depth is low (high). Simulations reveal that such a relationship may create endogenous complex price fluctuations even if the trading behavior of chartists and fundamentalists is linear.

Keywords: Econophysics; Market Depth; Price Dynamics; Nonlinearities; Technical and Fundamental Analysis.

Introduction
============

Interactions between heterogeneous agents, so-called chartists and fundamentalists, may generate endogenous price dynamics either due to nonlinear trading rules or due to a switching between simple linear trading rules.$^{1,2}$ Overall, multi-agent models appear to be quite successful in replicating financial market dynamics.$^{3,4}$ In addition, this research direction has important applications. On the one hand, understanding the working of financial markets may help to design better investment strategies.$^{5}$ On the other hand, it may facilitate the regulation of disorderly markets. For instance, Ehrenstein shows that the imposition of a low transaction tax may stabilize asset price fluctuations.$^{6}$ Within these models, the orders of the traders typically drive the price via a log-linear price impact function: Buying orders shift the price proportionally up and selling orders shift the price proportionally down. Recent empirical evidence suggests, however, that the relationship between orders and price adjustment may be nonlinear.
Moreover, as reported by Farmer et al., large price fluctuations occur when market depth is low.$^{3,7}$ Following this observation, our goal is to illustrate a novel mechanism for endogenous price dynamics. We investigate – within an otherwise linear chartist-fundamentalist setup – a price impact function which depends nonlinearly on market depth. To be precise, a given transaction yields a larger price change when market depth is low than when it is high. Simulations indicate that such a relationship may lead to complex price movements. The dynamics may be sketched as follows. The market switches back and forth between two regimes. When liquidity is high, the market is relatively stable. But low price fluctuations indicate only weak trading signals and thus the transactions of speculators decline. As liquidity decreases, the price responsiveness of a trade increases. The market becomes unstable and price fluctuations increase again. The remainder of this note is organized as follows. Section 2 sketches the empirical evidence on price impact functions. In section 3, we present our model, and in section 4, we discuss the main results. The final section concludes.

Empirical Evidence
==================

Financial prices are obviously driven by the orders of heterogeneous agents. However, it is not clear what the true functional form of price impact is. For instance, Farmer proposes a log-linear price impact function for theoretical analysis while Zhang develops a model with nonlinear price impact.$^{8,9}$ Zhang's approach is backed up by empirical research that documents a concave price impact function.
According to Hasbrouck, the larger the order size, the smaller the price impact per trade unit.$^{10}$ Also Kempf and Korn, using data on DAX futures, and Plerou et al., using data on the 116 most frequently traded US stocks, find that the price impact function displays a concave curvature with increasing order size, flattening out at larger values.$^{11,12}$ Weber and Rosenow fitted a concave function in the form of a power law and obtained an impressive correlation coefficient of 0.977.$^{13}$ For a further theoretical and empirical debate on the possible shape of the price impact function with respect to the order size see Gabaix et al., Farmer and Lillo and Plerou et al.$^{14-16}$ But these results are currently challenged by an empirical study which is crucial for this note. Farmer et al. present evidence that price fluctuations caused by individual market orders are essentially independent of the volume of the orders.$^{7}$ Instead, large price fluctuations are driven by fluctuations in liquidity, i.e. variations in the market’s ability to absorb new orders. The reason is that even for the most liquid stocks there can be substantial gaps in the order book. When such a gap exists next to the best price – due to low liquidity – even a small new order can remove the best quote and trigger a large price change. These results are supported by Chordia, Roll and Subrahmanyam, who also document that there is considerable time variation in market-wide liquidity, and by Lillo, Farmer and Mantegna, who detect that higher capitalization stocks tend to have smaller price responses for the same normalized transaction size.$^{17,18}$ Note that the relation between liquidity and price impact is of direct importance to investors developing trading strategies and to regulators attempting to stabilize financial markets. Farmer et al.
argue, for instance, that agents who are trying to transact large amounts should split their orders and execute them a little at a time, watching the order book, and taking whatever liquidity is available as it enters.$^{7}$ Hence, when there is a lot of volume in the market, they should submit large orders. Assuming a concave price impact function would obviously lead to quite different investment decisions. Ehrenstein, Westerhoff and Stauffer demonstrate, for instance, that the success of a Tobin tax depends on its impact on market depth.$^{19}$ Depending on the degree of the nonlinearity of the price impact function, a transaction tax may stabilize or destabilize the markets. The Model ========= Following Simon, agents are boundedly rational and display a rule-governed behavior.$^{20}$ Moreover, survey studies reveal that financial market participants rely strongly on technical and fundamental analysis to predict prices.$^{21,22}$ Chartists typically extrapolate past price movements into the future. Let $P$ be the log of the price. Then, their orders may be expressed as $$D^C_t = a(P_t -P_{t-1}),$$ where $a$ is a positive reaction coefficient denoting the strength of the trading. Accordingly, technical traders submit buying orders if prices go up and vice versa. In contrast, fundamentalists expect the price to track its fundamental value. Orders from this type of agent may be written as $$D^F_t = b(F-P_t).$$ Again, $b$ is a positive reaction coefficient, and $F$ stands for the log of the fundamental value. For instance, if the asset is overvalued, fundamentalists submit selling orders. As usual, excess buying drives the price up and excess selling drives it down so that the price adjustment process may be formalized as $$P_{t+1} = P_t + A_t(wD^C_t + (1-w)D^F_t),$$ where $w$ indicates the fraction of chartists and $(1-w)$ the fraction of fundamentalists. 
The novel idea is to base the degree of price adjustment $A$ on a nonlinear function of the market depth.$^{23}$ Exploiting the fact that a given excess demand has a larger (smaller) impact on the price if the trading volume is low (high), one may write $$A_t = \frac{c}{(|wD^C_t|+|(1-w) D^F_t|)^d}.$$ The curvature of $A$ is captured by $d\geq 0$, while $c>0$ is a shift parameter. For $d=0$, the price adjustment function is log-linear.$^{1,3}$ In that case, the law of motion of the price, derived from combining (1) to (4), is a second-order linear difference equation which has a unique steady state at $$P_{t+1} = P_t = P_{t-1} = F.$$ Rewriting Schur’s stability conditions, the fixed point is stable for $$0<c<\left\{\begin{array}{ll} \displaystyle\frac{1}{aw} & \mbox{for~~} w> \displaystyle \frac{b}{4a +b}\\ \displaystyle \frac{2}{b(1-w)-2aw}\qquad & \mbox{else} \end{array}.\right.$$ However, we are interested in the case where $d>0$. Combining (1)-(4) and solving for $P$ yields $$P_{t+1} = P_t + c \frac{wa(P_t - P_{t-1}) + (1-w) b(F-P_t)}{(|wa(P_t-P_{t-1})|+|(1-w)b(F-P_t)|)^d},$$ which is a two-dimensional nonlinear difference equation. Since (7) precludes a closed-form analysis, we simulate the dynamics to demonstrate that the underlying structure gives rise to endogenous deterministic motion.

Some Results
============

Figure 1 contains three bifurcation diagrams for $0<d<1$ and $w=0.7$ (top), $w=0.5$ (central) and $w=0.3$ (bottom). The other parameters are fixed at $a=b=c=1$ and the log of the fundamental value is $F=0$. We increase $d$ in 500 steps. In each step, $P$ is plotted from $t=$1001-1100. Note that bifurcation diagrams are frequently used to illustrate the dynamic properties of nonlinear systems. Figure 1 suggests that if $d$ is small, there may exist a stable equilibrium. For instance, for $w=0.5$, prices converge towards the fundamental value as long as $d$ is smaller than around 0.1. If $d$ is increased further, the fixed point becomes unstable.
In addition, the range in which the fluctuations take place increases too. Note also that many different types of bifurcation occur. Our model generates the full range of possible dynamic outcomes: fixed points, limit cycles, quasi-periodic motion and chaotic fluctuations. For some parameter combinations coexisting attractors emerge. Comparing the three panels indicates that the higher the fraction of chartists, the less stable the market seems to be. To check the robustness of endogenous motion, figure 2 presents bifurcation diagrams for $0 < a <2$ (top), $0< b < 2$ (central) and $0< c < 2$ (bottom), with the remaining parameters fixed at $a=b=c=1$ and $d=w=0.5$. Again, complicated movements arise. While chartism seems to destabilize the market, fundamentalism is apparently stabilizing. Naturally, a higher price adjustment destabilizes the market as well. Overall, many parameter combinations exist which trigger complicated motion.[^1] Let us finally explore what drives the dynamics. Figure 3 shows the dynamics in the time domain for $a=0.85$, $b=c=1$, and $d=w=0.5$. The first, second and third panel present the log of the price $P$, the price adjustment $A$ and the trading volume $V$ for 150 observations, respectively. Visual inspection reveals that the price circles around its fundamental value without any tendency to converge. Nonlinear price adjustment may thus be an endogenous engine for volatility and trading volume. Note that when trading volume drops, the price adjustment increases and price movements are amplified. However, the dynamics does not explode since a higher trading volume leads again to a decrease in the price adjustment. Finally, figure 4 displays the price (top panel) and the trading volume (bottom panel) for 5000 observations $(a = 0.25,~b = 1,~c = 50,~d = 2$ and $w=0.5)$. As can be seen, the dynamics may become quite complex.
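The feedback between volume and price impact described above can be reproduced with a few lines of code. The following sketch (our own illustration; the function and variable names are not from the note) iterates the law of motion (7) for the parameter values of Figure 3, with a small numerical guard against a vanishing volume:

```python
def simulate_prices(a=0.85, b=1.0, c=1.0, d=0.5, w=0.5, F=0.0, T=1000):
    """Iterate Eq. (7): chartist and fundamentalist orders move the price
    through an impact term A that falls as trading volume rises."""
    P_prev, P = 0.05, 0.1  # small initial displacement from the fundamental
    prices = [P_prev, P]
    for _ in range(T):
        DC = a * (P - P_prev)                # chartist orders, Eq. (1)
        DF = b * (F - P)                     # fundamentalist orders, Eq. (2)
        V = abs(w * DC) + abs((1 - w) * DF)  # trading volume
        A = c / max(V, 1e-12) ** d           # nonlinear price impact, Eq. (4)
        P_prev, P = P, P + A * (w * DC + (1 - w) * DF)
        prices.append(P)
    return prices
```

Because $A$ diverges as the volume shrinks, the fundamental steady state cannot attract the trajectory for $d>0$: the simulated price keeps fluctuating in a bounded band around $F$, in line with the first panel of Figure 3.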
Remember that trading volume increases with increasing price changes (orders of chartists) and/or increasing deviations from fundamentals (orders of fundamentalists). In a stylized way, the dynamics may thus be sketched as follows: Suppose that trading volume is relatively low. Since the price adjustment $A$ is strong, the system is unstable. As the trading becomes increasingly hectic, prices start to diverge from the fundamental value. At some point, however, the trading activity has become so strong that, due to the reduction of the price adjustment $A$, the system becomes stable. Afterwards, a period of convergence begins until the system jumps back to the unstable regime. This process continually repeats itself but in an intricate way.

Conclusions
===========

When switching between simple linear trading rules and/or relying on nonlinear strategies, interactions between heterogeneous agents may cause irregular dynamics. This note shows that changes in market depth also stimulate price changes. The reason is that if market liquidity goes down, a given order obtains a larger price impact. For a broad range of parameter combinations, erratic yet deterministic trajectories emerge since the system switches back and forth between stable and unstable regimes.

[**References**]{}

1. D. Farmer and S. Joshi, [*Journal of Economic Behavior and Organization*]{} [**49**]{}, 149 (2002). 2. T. Lux and M. Marchesi, [*International Journal of Theoretical and Applied Finance*]{} [**3**]{}, 675 (2000). 3. R. Cont and J.-P. Bouchaud, [*Macroeconomic Dynamics*]{} [**4**]{}, 170 (2000). 4. D. Stauffer, [*Advances in Complex Systems*]{} [**4**]{}, 19 (2001). 5. D. Sornette and W. Zhou, [*Quantitative Finance*]{} [**2**]{}, 468 (2002). 6. G. Ehrenstein, [*International Journal of Modern Physics C*]{} [**13**]{}, 1323 (2002). 7. D. Farmer, L. Gillemot, F. Lillo, S. Mike and A. Sen, What Really Causes Large Price Changes?, SFI Working Paper, 04-02-006, 2004. 8. D.
Farmer, [*Industrial and Corporate Change*]{} [**11**]{}, 895 (2002). 9. Y.-C. Zhang, [*Physica A*]{} [**269**]{}, 30 (1999). 10. J. Hasbrouck, [*Journal of Finance*]{} [**46**]{}, 179 (1991). 11. A. Kempf and O. Korn, [*Journal of Financial Markets*]{} [**2**]{}, 29 (1999). 12. V. Plerou, P. Gopikrishnan, X. Gabaix and E. Stanley, [*Physical Review E*]{} [**66**]{}, 027104, 1 (2002). 13. P. Weber and B. Rosenow, Order Book Approach to Price Impact, Preprint cond-mat/0311457, 2003. 14. X. Gabaix, P. Gopikrishnan, V. Plerou and E. Stanley, [*Nature*]{} [**423**]{}, 267 (2003). 15. D. Farmer and F. Lillo, [*Quantitative Finance*]{} [**4**]{}, C7 (2004). 16. V. Plerou, P. Gopikrishnan, X. Gabaix and E. Stanley, [*Quantitative Finance*]{} [**4**]{}, C11 (2004). 17. T. Chordia, R. Roll and A. Subrahmanyam, [*Journal of Finance*]{} [**56**]{}, 501 (2001). 18. F. Lillo, D. Farmer and R. Mantegna, [*Nature*]{} [**421**]{}, 129 (2003). 19. G. Ehrenstein, F. Westerhoff and D. Stauffer, Tobin Tax and Market Depth, Preprint cond-mat/0311581, 2003. 20. H. Simon, [*Quarterly Journal of Economics*]{} [**69**]{}, 99 (1955). 21. M. Taylor and H. Allen, [*Journal of International Money and Finance*]{} [**11**]{}, 304 (1992). 22. Y.-H. Lui and D. Mole, [*Journal of International Money and Finance*]{} [**17**]{}, 535 (1998). 23. D. Sornette and K. Ide, [*International Journal of Modern Physics C*]{} [**14**]{}, 267 (2003).

[^1]: To observe permanent fluctuations, only small variations in $A$ are needed. Suppose that $A$ takes two values centered around the upper bound of the stability condition $X$, say $X-Y$ and $X+Y$, depending on whether trading volume is above or below a certain level $Z$. Such a system obviously produces nonconvergent but also nonexplosive fluctuations for arbitrary values of $Y$ and $Z$.
--- abstract: 'Directed graphical models provide a useful framework for modeling causal or directional relationships for multivariate data. Prior work has largely focused on identifiability and search algorithms for directed acyclic graphical (DAG) models. In many applications, however, feedback naturally arises, and directed graphical models that permit cycles are needed. In this paper we address the issue of identifiability for general directed cyclic graphical (DCG) models satisfying the Markov assumption. In particular, in addition to the faithfulness assumption which has already been introduced for cyclic models, we introduce two new identifiability assumptions, one based on selecting the model with the fewest edges and the other based on selecting the DCG model that entails the maximum number of d-separation rules. We provide theoretical results comparing these assumptions which show that: (1) selecting models with the largest number of d-separation rules is strictly weaker than the faithfulness assumption; (2) unlike for DAG models, selecting models with the fewest edges does not necessarily result in a milder assumption than the faithfulness assumption. We also provide connections between our two new principles and minimality assumptions. We use our identifiability assumptions to develop search algorithms for small-scale DCG models. Our simulation study supports our theoretical results, showing that the algorithms based on our two new principles generally outperform algorithms based on the faithfulness assumption in terms of selecting the true skeleton for DCG models.'
bibliography:
- 'reference\_DCG.bib'
---

Gunwoong Park$^1$ and Garvesh Raskutti$^{1,2,3}$

$^1$ Department of Statistics, University of Wisconsin-Madison
$^2$ Department of Computer Science, University of Wisconsin-Madison
$^3$ Wisconsin Institute for Discovery, Optimization Group

**Keywords:** Directed graphical models, Identifiability, Faithfulness, Feedback loops.

Introduction {#SecInt}
============

A fundamental goal in many scientific problems is to determine causal or directional relationships between variables in a system. A well-known framework for representing causal or directional relationships is that of directed graphical models. Most prior work on directed graphical models has focused on directed acyclic graphical (DAG) models, also referred to as Bayesian networks, which are directed graphical models with no directed cycles. One of the core problems is determining the underlying DAG $G$ given the data-generating distribution $\mathbb{P}$. A fundamental assumption in the DAG framework is the *causal Markov condition* (CMC) (see e.g., [@lauritzen1996graphical; @Spirtes2000]). While the CMC is broadly assumed, additional assumptions are required in order for a directed graph $G$ to be identifiable from the distribution $\mathbb{P}$. For DAG models, a number of identifiability and minimality assumptions have been introduced [@Glymour1987; @Spirtes2000] and the connections between them have been discussed [@Zhang2013]. In particular, one of the most widely used assumptions for DAG models is the *causal faithfulness condition* (CFC), which is sufficient for many search algorithms. However, the CFC has been shown to be extremely restrictive, especially in the limited data setting [@Uhler2013].
In addition, two minimality assumptions, the P-minimality and SGS-minimality assumptions, have been introduced. These conditions are weaker than the CFC but do not guarantee model identifiability [@Zhang2013]. On the other hand, the recently introduced sparsest Markov representation (SMR) and frugality assumptions [@forster2015frugal; @Raskutti2013; @van2013ell] provide an alternative that is milder than the CFC and is sufficient to ensure identifiability. The main downside of the [SMR]{} and frugality assumptions relative to the CFC is that they guarantee model identifiability only when exhaustive searches over the DAG space are possible [@Raskutti2013], while the CFC is sufficient for polynomial-time algorithms [@Glymour1987; @Spirtes1991; @Spirtes2000] for learning the equivalence class of sparse graphs. While the DAG framework is useful in many applications, it is limited since feedback loops are known to often exist (see e.g., [@Richardson1996; @Richardson1995]). Hence, directed graphs with directed cycles [@Spirtes2000] are more appropriate to model such feedback. However, learning directed cyclic graphical (DCG) models from data is considerably more challenging than learning DAG models [@Richardson1996; @Richardson1995], since the presence of cycles poses a number of additional challenges and introduces additional non-identifiability. Consequently, there has been considerably less work focusing on directed graphs with feedback, both in terms of identifiability assumptions and search algorithms. [@Spirtes1995] discussed the CMC, and [@Richardson1996; @Richardson1995] discussed the CFC for DCG models and introduced the polynomial-time cyclic causal discovery (CCD) algorithm [@Richardson1996] for recovering the Markov equivalence class for DCGs. Recently, [@claassen2013learning] introduced the FCI$+$ algorithm for recovering the Markov equivalence class for sparse DCGs, which also assumes the CFC.
As with DAG models, the CFC for cyclic models is extremely restrictive; indeed, it is even more restrictive than the CFC for DAG models. In terms of learning algorithms that do not require the CFC, additional assumptions are typically needed. For example, [@mooij2011causal] proved identifiability for bivariate Gaussian cyclic graphical models with additive noise, which does not require the CFC, while many approaches have been studied for learning graphs from the results of interventions on the graph (e.g., [@hyttinen2010causal; @hyttinen2012causal; @hyttinen2012learning; @hyttinen2013experiment; @hyttinen2013discovering]). However, these additional assumptions are often impractical, and it is frequently impossible or very expensive to intervene on many variables in the graph. This raises the question of whether milder identifiability assumptions can be imposed for learning DCG models. In this paper, we address this question in a number of steps. Firstly, we adapt the [SMR]{} and frugality assumptions developed for DAG models to DCG models. Next we show that, unlike for DAG models, the adapted [SMR]{} and frugality assumptions are not strictly weaker than the CFC. Hence we consider a new identifiability assumption based on finding the Markovian DCG entailing the maximum number of d-separation rules (MDR), which we prove is strictly weaker than the CFC and recovers the Markov equivalence class for DCGs for a strict superset of examples compared to the CFC. We also provide a comparison between the [MDR]{}, [SMR]{} and frugality assumptions as well as the minimality assumptions for both DAG and DCG models. Finally, we use the [MDR]{} and [SMR]{} assumptions to develop search algorithms for small-scale DCG models. Our simulation study supports our theoretical results by showing that the algorithms induced by both the [SMR]{} and [MDR]{} assumptions recover the Markov equivalence class more reliably than state-of-the-art algorithms that require the CFC for DCG models.
We point out that the search algorithms that result from our identifiability assumptions require exhaustive searches and are not computationally feasible for large-scale DCG models. However, the focus of this paper is to develop the weakest possible identifiability assumption which is of fundamental importance for directed graphical models. The remainder of the paper is organized as follows: Section \[SecPriorWork\] provides the background and prior work for identifiability assumptions for both DAG and DCG models. In Section \[SecSMRFrugality\] we adapt the [SMR]{} and frugality assumptions to DCG models and provide a comparison between the [SMR]{} assumption, the CFC, and the minimality assumptions. In Section \[SecMaxDSep\] we introduce our new [MDR]{} principle, finding the Markovian DCG that entails the maximum number of d-separation rules and provide a comparison of the new principle to the CFC, [SMR]{}, frugality, and minimality assumptions. Finally in Section \[SecSimulation\], we use our identifiability assumptions to develop a search algorithm for learning small-scale DCG models, and provide a simulation study that is consistent with our theoretical results. Prior work on directed graphical models {#SecPriorWork} ======================================= In this section, we introduce the basic concepts of directed graphical models pertaining to model identifiability. A directed graph $G = (V,E)$ consists of a set of vertices $V$ and a set of directed edges $E$. Suppose that $V=\{1,2,\dots ,p\}$ and there exists a random vector $(X_1, X_2,\cdots,X_p)$ with probability distribution $\mathbb{P}$ over the vertices in $G$. A directed edge from a vertex $j$ to $k$ is denoted by $(j,k)$ or $j\to k$. The set $\mbox{pa}(k)$ of *parents* of a vertex $k$ consists of all nodes $j$ such that $(j,k)\in E$. If there is a directed path $j\to \cdots \to k$, then $k$ is called a *descendant* of $j$ and $j$ is an *ancestor* of $k$. 
The set $\mbox{de}(k)$ denotes the set of all descendants of a node $k$. The *non-descendants* of a node $k$ are $\mbox{nd}(k) = V\setminus (\{k\}\cup \mbox{de}(k))$. For a subset $S\subset V$, we define $\mbox{an}(S)$ to be the set of nodes $k$ that are in $S$ or are ancestors of some node in $S$. Two nodes that are connected by an edge are called *adjacent*. A triple of nodes $(j,k,\ell)$ is an *unshielded triple* if $j$ and $k$ are each adjacent to $\ell$ but $j$ and $k$ are not adjacent. An unshielded triple $(j,k,\ell)$ forms a *v-structure* if $j\to \ell$ and $k \to \ell$; in this case $\ell$ is called a *collider*. Furthermore, let $\pi$ be an undirected path between $j$ and $k$ and let $S \subset V\setminus\{j,k\}$. If every collider on $\pi$ is in $\mbox{an}(S)$ and every non-collider on $\pi$ is not in $S$, then $\pi$ *d-connects* $j$ and $k$ given $S$, and $j$ is *d-connected* to $k$ given $S$. If a directed graph $G$ has no undirected path that d-connects $j$ and $k$ given $S$, then $j$ is *d-separated* from $k$ given $S$. Equivalently, for distinct vertices $j, k \in V$ and $S \subset V \setminus\{j,k\}$, $j$ is *d-connected* to $k$ given $S$ if and only if there is an undirected path $\pi$ between $j$ and $k$ such that

- If there is an edge between $a$ and $b$ on $\pi$ and an edge between $b$ and $c$ on $\pi$, and $b \in S$, then $b$ is a collider between $a$ and $c$ relative to $\pi$.

- If $b$ is a collider between $a$ and $c$ relative to $\pi$, then there is a descendant $d$ of $b$ with $d \in S$.

Finally, let $X_j {\protect\mathpalette{\protect\independenT}{\perp}}X_k \mid X_S$ with $S \subset V\setminus\{j, k\}$ denote the conditional independence (CI) statement that $X_j$ is conditionally independent (as determined by $\mathbb{P}$) of $X_k$ given the set of variables $X_S = \{ X_{\ell} \mid \ell \in S\}$, and let $X_j {\!\perp\!\!\!\!\not\perp\!}X_k \mid X_S$ denote conditional dependence. 
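On small graphs the path-based criterion above can be evaluated directly. Below is a minimal sketch (all function and variable names are ours, not from the paper): it enumerates simple undirected paths and applies the two blocking rules, computing ancestors by reachability so that directed cycles are handled; for simplicity it reads collider status off the edge set and therefore assumes no two-cycles.

```python
def ancestors(edges, seed):
    """Nodes in `seed` plus every node with a directed path into `seed`."""
    anc = set(seed)
    while True:
        new = {a for (a, b) in edges if b in anc} - anc
        if not new:
            return anc
        anc |= new

def d_connected(V, edges, j, k, S):
    """True iff some simple undirected path d-connects j and k given S.

    `edges` is a set of directed edges (a, b) meaning a -> b.  Colliders are
    read off the edge set, so the test assumes at most one edge per pair of
    nodes (no two-cycles); longer directed cycles are handled correctly
    because ancestors are computed by reachability.
    """
    S = set(S)
    anc_S = ancestors(edges, S)
    nbr = {v: set() for v in V}
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)

    def search(path):
        b = path[-1]
        for c in nbr[b] - set(path):
            if len(path) > 1:  # apply the blocking rules to interior node b
                a = path[-2]
                collider = (a, b) in edges and (c, b) in edges
                if (collider and b not in anc_S) or (not collider and b in S):
                    continue
            if c == k or search(path + [c]):
                return True
        return False

    return search([j])

# Chain 1 -> 2 -> 3: conditioning on the middle node blocks the only path.
chain = {(1, 2), (2, 3)}
assert d_connected({1, 2, 3}, chain, 1, 3, set())
assert not d_connected({1, 2, 3}, chain, 1, 3, {2})

# Collider 1 -> 2 <- 3: the opposite pattern.
coll = {(1, 2), (3, 2)}
assert not d_connected({1, 2, 3}, coll, 1, 3, set())
assert d_connected({1, 2, 3}, coll, 1, 3, {2})

# A cyclic example: 1 -> 2 -> 3 -> 1 plus 4 -> 2.
cyc = {(1, 2), (2, 3), (3, 1), (4, 2)}
assert d_connected({1, 2, 3, 4}, cyc, 1, 4, set())
```

The last assertion shows a genuinely cyclic case: the path $1-3-2-4$ contains no colliders and therefore stays open given the empty set, even though the direct path $1-2-4$ is blocked at the collider $2$.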
The *Causal Markov condition* associates CI statements of $\mathbb{P}$ with a directed graph $G$. \[Def:CMC\] A probability distribution $\mathbb{P}$ over a set of vertices $V$ satisfies the *Causal Markov condition* with respect to an (acyclic or cyclic) graph $G = (V, E)$ if for every triple $(j, k, S)$ with $S \subset V \setminus \{j,k\}$ such that $j$ is d-separated from $k$ given $S$ in $G$, $$\begin{aligned} X_j {\protect\mathpalette{\protect\independenT}{\perp}}X_k \mid X_S ~~\textrm{ according to $\mathbb{P}$}. \end{aligned}$$ The CMC applies to both acyclic and cyclic graphs (see e.g., [@Spirtes2000]). However, not all directed graphical models satisfy the CMC. In order for a directed graphical model to satisfy the CMC, the joint distribution of the model must admit the *generalized factorization* [@Lauritzen1990]. \[Def:GenFac\] The joint distribution $f$ *factors according to the directed graph* $G$ with vertices $V$ if and only if for every subset $S$ of $V$, $$f(X_{\mbox{an}(S)}) = \prod_{j \in \mbox{an}(S)} g_j (X_{j},X_{\mbox{pa}(j)})$$ where each $g_j$ is a non-negative function. [@Spirtes1995] showed that the generalized factorization is a necessary and sufficient condition for directed graphical models to satisfy the CMC. For DAG models, the $g_j(\cdot)$’s must correspond to conditional probability distribution functions, whereas for graphical models with cycles, the $g_j(\cdot)$’s need only be non-negative functions. As shown by [@Spirtes1995], a concrete example of a class of cyclic graphs that satisfy the factorization above is structural linear DCG equation models with additive independent errors. We will later use linear DCG models in our simulation study. In general, there are many directed graphs entailing the same d-separation rules. These graphs are *Markov equivalent*, and the set of Markov equivalent graphs is called a *Markov equivalence class* (MEC) [@Richardson1995; @udea1991equivalence; @Spirtes2000; @verma1992algorithm]. 
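Since two directed graphs are Markov equivalent exactly when they entail the same d-separation rules, Markov equivalence of small graphs can be decided by brute force. A sketch (all names ours; the d-connection test enumerates simple paths and assumes no two-cycles):

```python
from itertools import combinations

def ancestors(edges, seed):
    anc = set(seed)
    while True:
        new = {a for (a, b) in edges if b in anc} - anc
        if not new:
            return anc
        anc |= new

def d_connected(V, edges, j, k, S):
    """Simple-path d-connection test for directed (possibly cyclic) graphs."""
    S = set(S)
    anc_S = ancestors(edges, S)
    nbr = {v: set() for v in V}
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)

    def search(path):
        b = path[-1]
        for c in nbr[b] - set(path):
            if len(path) > 1:  # blocking rules for interior node b on a-b-c
                a = path[-2]
                collider = (a, b) in edges and (c, b) in edges
                if (collider and b not in anc_S) or (not collider and b in S):
                    continue
            if c == k or search(path + [c]):
                return True
        return False

    return search([j])

def d_separations(V, edges):
    """Every rule (j, k, S) entailed by the graph, with j < k and S sorted."""
    rules = set()
    for j, k in combinations(sorted(V), 2):
        rest = sorted(set(V) - {j, k})
        for r in range(len(rest) + 1):
            for S in combinations(rest, r):
                if not d_connected(V, edges, j, k, S):
                    rules.add((j, k, S))
    return rules

def markov_equivalent(V, e1, e2):
    return d_separations(V, e1) == d_separations(V, e2)

# The two-node graphs of the example below entail no rules at all.
assert markov_equivalent({1, 2}, {(1, 2)}, {(2, 1)})

# Three nodes: chain and fork share the single rule 1 d-sep 3 | {2},
# while the collider instead entails 1 d-sep 3 | {}.
V = {1, 2, 3}
assert markov_equivalent(V, {(1, 2), (2, 3)}, {(2, 1), (2, 3)})
assert not markov_equivalent(V, {(1, 2), (2, 3)}, {(1, 2), (3, 2)})
```

This exhaustive comparison is exponential in the number of nodes, which is why it is only practical for the small-scale models considered later.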
For example, consider two 2-node graphs, $G_1: X_1 \rightarrow X_2$ and $G_2: X_1 \leftarrow X_2$. Both graphs are Markov equivalent because they both entail no d-separation rules. Hence $G_1$ and $G_2$ belong to the same [MEC]{}, and it is impossible to distinguish the two graphs by d-separation rules. The precise definition of the [MEC]{} is as follows. Two directed graphs $G_1$ and $G_2$ are *Markov equivalent* if any distribution which satisfies the CMC with respect to one graph satisfies the CMC with respect to the other, and vice versa. The set of graphs which are Markov equivalent to $G$ is denoted by $\mathcal{M}(G)$. The characterization of Markov equivalence classes is different for DAGs and DCGs. For DAGs, [@udea1991equivalence] developed an elegant characterization of Markov equivalence classes in terms of the *skeleton* and *v-structures*; the skeleton of a DAG model consists of the edges without directions. For DCGs, however, the presence of feedback makes the characterization of the [MEC]{} considerably more involved; [@Richardson1996] provides a characterization. The presence of directed cycles changes the notion of adjacency between two nodes. In particular, there are *real* adjacencies that are a result of directed edges in the DCG, and *virtual* adjacencies, which are edges that do not exist in the data-generating DCG but cannot be recognized as non-edges from the data. The precise definitions of real and virtual adjacencies are as follows. \[Def:Adj\] Consider a directed graph $G = (V,E)$.

- For any $j, k \in V$, $j$ and $k$ are *really adjacent* in $G$ if $j \rightarrow k$ or $j \leftarrow k$.

- For any $j, k \in V$, $j$ and $k$ are *virtually adjacent* if $j$ and $k$ have a common child $\ell$ such that $\ell$ is an ancestor of $j$ or $k$.

Note that a virtual adjacency can only occur if there is a cycle in the graph. Hence, DAGs have only real edges while DCGs can have both real edges and virtual edges. 
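Definition \[Def:Adj\] can be checked mechanically. A sketch (names ours), exercised on the DCG of Figure \[Fig:Sec2a\] below, the cycle $1 \rightarrow 2 \rightarrow 3 \rightarrow 1$ together with $4 \rightarrow 2$:

```python
def ancestors(edges, seed):
    """Nodes in `seed` plus every node with a directed path into `seed`."""
    anc = set(seed)
    while True:
        new = {a for (a, b) in edges if b in anc} - anc
        if not new:
            return anc
        anc |= new

def really_adjacent(edges, j, k):
    return (j, k) in edges or (k, j) in edges

def virtually_adjacent(edges, j, k):
    """j and k share a child that is a (strict) ancestor of j or k."""
    kids_j = {b for (a, b) in edges if a == j}
    kids_k = {b for (a, b) in edges if a == k}
    strict_anc = ancestors(edges, {j, k}) - {j, k}
    return any(c in strict_anc for c in kids_j & kids_k)

# The DCG of Figure [Fig:Sec2a]: cycle 1 -> 2 -> 3 -> 1 plus 4 -> 2.
G = {(1, 2), (2, 3), (3, 1), (4, 2)}
assert virtually_adjacent(G, 1, 4) and not really_adjacent(G, 1, 4)
assert not really_adjacent(G, 3, 4) and not virtually_adjacent(G, 3, 4)
```

Since node $2$ is a common child of $1$ and $4$ and an ancestor of $1$ through the cycle, the pair $(1,4)$ is virtually adjacent; the pair $(3,4)$ has no common child and is not.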
Figure \[Fig:Sec2a\] shows an example of a DCG with a virtual edge. In Figure \[Fig:Sec2a\], the pair of nodes $(1,4)$ has a virtual edge (dotted line) because the triple $(1,4,2)$ forms a v-structure and the common child $2$ is an ancestor of $1$. This virtual edge is created by the cycle $1 \rightarrow 2 \rightarrow 3 \rightarrow 1$.

[Figure \[Fig:Sec2a\]: a DCG on nodes $1,2,3,4$ with directed edges $1 \rightarrow 2$, $2 \rightarrow 3$, $3 \rightarrow 1$ and $4 \rightarrow 2$, and a virtual edge (dotted) between $1$ and $4$.]

Virtual edges generate different types of relationships involving unshielded triples: (1) an unshielded triple $(j,k,\ell)$ (that is, $j-\ell-k$) is called a *conductor* if $\ell$ is an ancestor of $j$ or $k$; (2) an unshielded triple $(j,k,\ell)$ is called a *perfect non-conductor* if $\ell$ is a descendant of the common child of $j$ and $k$; and (3) an unshielded triple $(j,k,\ell)$ is called an *imperfect non-conductor* if the triple is neither a conductor nor a perfect non-conductor. Intuitively, (1) a conductor is analogous to a non-v-structure in DAGs: if an unshielded triple $(j,k,\ell)$ is a conductor, then $j$ is d-connected to $k$ given any set $S$ that does not contain $\ell$. Similarly, (2) a perfect non-conductor is analogous to a v-structure: if $(j,k,\ell)$ is a perfect non-conductor, then $j$ is d-connected to $k$ given any set $S$ that contains $\ell$. However, there is no analogous notion of an imperfect non-conductor for DAG models. 
We see throughout this paper that this difference creates a major challenge in inferring DCG models from the underlying distribution $\mathbb{P}$. As shown by [@Richardson1994] (Cyclic Equivalence Theorem), a necessary (but not sufficient) condition for two DCGs to belong to the same [MEC]{} is that they share the same real plus virtual edges and the same (1) conductors, (2) perfect non-conductors and (3) imperfect non-conductors. However, unlike for DAGs, this condition is not sufficient for Markov equivalence. A complete characterization of Markov equivalence is provided in [@Richardson1994; @Richardson1995]; since it is quite involved, we do not include it here. Even if we weaken the goal to inferring the [MEC]{} for a DAG or DCG, the CMC is insufficient for discovering the true [MEC]{} $\mathcal{M}(G^*)$ because there are many graphs satisfying the CMC that do not belong to $\mathcal{M}(G^*)$. For example, any fully-connected graph satisfies the CMC because it does not entail any d-separation rules. Hence, in order to identify the true [MEC]{} given the distribution $\mathbb{P}$, stronger identifiability assumptions that force the removal of edges are required.

Faithfulness and minimality assumptions
---------------------------------------

In this section, we discuss prior work on identifiability assumptions for both DAG and DCG models. To make the notion of identifiability and our assumptions precise, we need to introduce the notion of a true data-generating graphical model $(G^*, \mathbb{P})$. All we observe is the distribution $\mathbb{P}$ (or samples from it), and we know the graphical model $(G^*, \mathbb{P})$ satisfies the CMC. Let $CI(\mathbb{P})$ denote the set of conditional independence statements corresponding to $\mathbb{P}$. The graphical model $(G^*, \mathbb{P})$ is *identifiable* if the Markov equivalence class $\mathcal{M}(G^*)$ can be uniquely determined based on $CI(\mathbb{P})$. 
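On small graphs, identifiability assumptions of this kind reduce to comparing $CI(\mathbb{P})$ with the d-separation rules a candidate graph entails. A sketch (names ours; simple-path d-connection test, assuming no two-cycles): a graph satisfies the CMC when its entailed rules form a subset of $CI(\mathbb{P})$, and faithfulness when the two sets coincide.

```python
from itertools import combinations

def ancestors(edges, seed):
    anc = set(seed)
    while True:
        new = {a for (a, b) in edges if b in anc} - anc
        if not new:
            return anc
        anc |= new

def d_connected(V, edges, j, k, S):
    """Simple-path d-connection test for directed (possibly cyclic) graphs."""
    S = set(S)
    anc_S = ancestors(edges, S)
    nbr = {v: set() for v in V}
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)

    def search(path):
        b = path[-1]
        for c in nbr[b] - set(path):
            if len(path) > 1:
                a = path[-2]
                collider = (a, b) in edges and (c, b) in edges
                if (collider and b not in anc_S) or (not collider and b in S):
                    continue
            if c == k or search(path + [c]):
                return True
        return False

    return search([j])

def d_separations(V, edges):
    rules = set()
    for j, k in combinations(sorted(V), 2):
        rest = sorted(set(V) - {j, k})
        for r in range(len(rest) + 1):
            for S in combinations(rest, r):
                if not d_connected(V, edges, j, k, S):
                    rules.add((j, k, S))
    return rules

def markov(V, edges, ci):
    """CMC: every entailed d-separation rule is a CI statement of P."""
    return d_separations(V, edges) <= ci

def faithful(V, edges, ci):
    """Faithfulness: entailed rules coincide exactly with CI(P)."""
    return d_separations(V, edges) == ci

V = {1, 2, 3}
ci = {(1, 3, (2,))}                      # X1 indep X3 | X2 is the only statement
assert faithful(V, {(1, 2), (2, 3)}, ci)  # the chain is faithful
full = {(1, 2), (1, 3), (2, 3)}           # fully-connected DAG
assert markov(V, full, ci) and not faithful(V, full, ci)
```

The last two assertions illustrate the remark above: a fully-connected graph entails no d-separation rules, so it is Markovian with respect to every $CI(\mathbb{P})$ but is faithful only when $CI(\mathbb{P})$ is empty.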
For a directed graph $G$, let $E(G)$ denote the set of directed edges, $S(G)$ denote the set of edges without directions (also referred to as the skeleton), and $D_{sep}(G)$ denote the set of d-separation rules entailed by $G$. One of the most widely imposed identifiability assumptions for both DAG and DCG models is the *causal faithfulness condition* (CFC) [@Spirtes2000], also referred to as the stability condition in [@Pearl2014]. A directed graph is *faithful* to a probability distribution if there is no probabilistic independence in the distribution that is not entailed by the CMC. The CFC states that the graph is faithful to the true probability distribution. \[Def:CFC\] Consider a directed graphical model $(G^*, \mathbb{P})$. A graph $G^*$ is *faithful* to $\mathbb{P}$ if and only if for any $j,k \in V$ and any subset $S \subset V \setminus \{j,k\}$, $$j \textrm{ d-separated from } k \mid S \iff X_j {\protect\mathpalette{\protect\independenT}{\perp}}X_k \mid X_S \textrm{ according to $\mathbb{P}$}.$$ While the CFC is sufficient to guarantee identifiability for many polynomial-time search algorithms [@claassen2013learning; @Glymour1987; @hyttinen2012causal; @Richardson1996; @Richardson1995; @Spirtes2000] for both DAGs and DCGs, the CFC is known to be a very strong assumption (see e.g., [@forster2015frugal; @Raskutti2013; @Uhler2013]) that is often not satisfied in practice. Hence, milder identifiability assumptions have been considered; the *P-minimality* [@pearl2000] and SGS-minimality [@Glymour1987] assumptions are two such assumptions. The P-minimality assumption asserts that among directed graphical models satisfying the CMC, graphs that entail more d-separation rules are preferred. For example, suppose that there are two graphs $G_1$ and $G_2$ which are not Markov equivalent. $G_1$ is *strictly preferred* to $G_2$ if $D_{sep}(G_2) \subset D_{sep}(G_1)$. 
The P-minimality assumption asserts that no graph is strictly preferred to the true graph $G^*$. The SGS-minimality assumption asserts that there exists no proper sub-graph of $G^*$ that satisfies the CMC with respect to the probability distribution $\mathbb{P}$. To define the term sub-graph precisely, $G_1$ is a sub-graph of $G_2$ if $E(G_1) \subset E(G_2)$ and $E(G_1) \neq E(G_2)$. [@Zhang2013] proved that the SGS-minimality assumption is weaker than the P-minimality assumption which is weaker than the CFC for both DAG and DCG models. While [@Zhang2013] states the results for DAG models, the result easily extends to DCG models. \[Thm:Sec2a\] If a directed graphical model $(G^*, \mathbb{P})$ satisfies - the CFC, it satisfies the P-minimality assumption. - the P-minimality assumption, it satisfies the SGS-minimality assumption. Sparsest Markov Representation (SMR) for DAG models --------------------------------------------------- While the minimality assumptions are milder than the CFC, neither the P-minimality nor SGS-minimality assumptions imply identifiability of the MEC for $G^*$. Recent work by [@Raskutti2013] developed the *sparsest Markov representation* (SMR) assumption and a slightly weaker version later referred to as *frugality* assumption [@forster2015frugal] which applies to DAG models. The [SMR]{} assumption which we refer to here as the identifiable [SMR]{} assumption states that the true DAG model is the graph satisfying the CMC with the fewest edges. Here we say that a DAG $G_1$ is *strictly sparser* than a DAG $G_2$ if $G_1$ has *fewer* edges than $G_2$. \[Def:SMR\] A DAG model $(G^*,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|S(G^*)| < |S(G)|$ for every DAG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$. 
The identifiable SMR assumption is strictly weaker than the CFC while also ensuring a method known as the Sparsest Permutation (SP) algorithm [@Raskutti2013] recovers the true MEC. Hence the identifiable SMR assumption guarantees identifiability of the MEC for DAGs. A slightly weaker notion which we refer to as the weak SMR assumption does not guarantee model identifiability. \[Def:Fru\] A DAG model $(G^* ,\mathbb{P})$ satisfies the weak [SMR]{} assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|S(G^*)| \leq |S(G)|$ for every DAG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$. A comparison of [SMR]{}/frugality to the CFC and the minimality assumptions for DAG models is provided in [@Raskutti2013] and [@forster2015frugal]. \[Thm:Sec2b\] If a DAG model $(G^*, \mathbb{P})$ satisfies - the CFC, it satisfies the identifiable [SMR]{} assumption and consequently weak [SMR]{} assumption. - the weak [SMR]{} assumption, it satisfies the P-minimality assumption and consequently the SGS-minimality assumption. - the identifiable [SMR]{} assumption, $G^*$ is identifiable up to the true MEC $\mathcal{M}(G^*)$. It is unclear whether the [SMR]{}/frugality assumptions apply naturally to DCG models since the success of the [SMR]{} assumption relies on the local Markov property which is known to hold for DAGs but not DCGs [@Richardson1994]. In this paper, we investigate the extent to which these identifiability assumptions apply to DCG models and provide a new principle for learning DCG models. Based on this prior work, a natural question to consider is whether the identifiable and weak [SMR]{} assumptions developed for DAG models apply to DCG models and whether there are similar relationships between the CFC, identifiable and weak [SMR]{}, and minimality assumptions. In this paper we address this question by adapting both identifiable and weak [SMR]{} assumptions to DCG models. 
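For intuition, the identifiable [SMR]{} assumption can be verified exhaustively on toy DAG models: enumerate all DAGs, keep the Markovian ones, and check whether the minimum skeleton size is attained by a single MEC. A sketch (names ours; simple-path d-connection test), using the CI statements of a three-node chain, i.e., only $X_1$ independent of $X_3$ given $X_2$:

```python
from itertools import combinations

def ancestors(edges, seed):
    anc = set(seed)
    while True:
        new = {a for (a, b) in edges if b in anc} - anc
        if not new:
            return anc
        anc |= new

def d_connected(V, edges, j, k, S):
    S = set(S)
    anc_S = ancestors(edges, S)
    nbr = {v: set() for v in V}
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)

    def search(path):
        b = path[-1]
        for c in nbr[b] - set(path):
            if len(path) > 1:
                a = path[-2]
                collider = (a, b) in edges and (c, b) in edges
                if (collider and b not in anc_S) or (not collider and b in S):
                    continue
            if c == k or search(path + [c]):
                return True
        return False

    return search([j])

def d_separations(V, edges):
    rules = set()
    for j, k in combinations(sorted(V), 2):
        rest = sorted(set(V) - {j, k})
        for r in range(len(rest) + 1):
            for S in combinations(rest, r):
                if not d_connected(V, edges, j, k, S):
                    rules.add((j, k, S))
    return rules

def is_acyclic(V, edges):
    # v lies on a directed cycle iff v is an ancestor of one of its parents
    return not any(v in ancestors(edges, {a for (a, b) in edges if b == v})
                   for v in V)

def skeleton(edges):
    return {frozenset(e) for e in edges}

V = {1, 2, 3}
ci = {(1, 3, (2,))}
arcs = [(a, b) for a in sorted(V) for b in sorted(V) if a != b]

markov_dags = [set(E)
               for r in range(len(arcs) + 1)
               for E in combinations(arcs, r)
               if is_acyclic(V, set(E)) and d_separations(V, set(E)) <= ci]

fewest = min(len(skeleton(E)) for E in markov_dags)
sparsest = [E for E in markov_dags if len(skeleton(E)) == fewest]

# The minimum (two edges) is attained only by the chain's MEC, so the
# identifiable SMR assumption holds and the true skeleton is recovered.
assert fewest == 2
assert all(skeleton(E) == {frozenset({1, 2}), frozenset({2, 3})}
           for E in sparsest)
```

Replacing the DAG enumeration by an enumeration of DCGs gives the analogous check for the cyclic definitions below, at the cost of an even larger search space.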
One of the challenges we address is dealing with the distinction between real and virtual edges in DCGs. We show that, unlike for DAG models, the identifiable [SMR]{} assumption is not necessarily a weaker assumption than the CFC. Consequently, we introduce a new principle, the maximum d-separation rules (MDR) principle, which chooses the Markovian directed graph with the greatest number of d-separation rules. We show that our [MDR]{} principle is strictly weaker than the CFC and stronger than the P-minimality assumption, while also guaranteeing model identifiability for DCG models. Our simulation results complement our theoretical results, showing that the [MDR]{} principle is more successful than the CFC in terms of recovering the true [MEC]{} for DCG models.

Sparsity and [SMR]{} for DCG models {#SecSMRFrugality}
===================================

In this section, we extend notions of sparsity and the [SMR]{} assumptions to DCG models. As mentioned earlier, in contrast to DAGs, DCGs can have two different types of edges, real and virtual. In this paper, we define the *sparsest* DCG as the graph with the fewest *total edges*, i.e., real edges plus virtual edges. The main reason we choose total edges rather than just real edges is that all DCGs in the same Markov equivalence class (MEC) have the same number of total edges [@Richardson1994], whereas the number of real edges may differ among graphs even in the same [MEC]{}. For example, in Figure \[Fig:Sec3a\] there are two different [MECs]{}, each containing two graphs: $G_1, G_2 \in \mathcal{M}(G_1)$ and $G_3, G_4 \in \mathcal{M}(G_3)$. $G_1$ and $G_2$ have $9$ total edges but $G_3$ and $G_4$ have $7$ total edges. On the other hand, $G_1$ has $6$ real edges, $G_2$ has $9$ real edges, $G_3$ has $5$ real edges, and $G_4$ has $7$ real edges (a bi-directed edge is counted as 1 total edge). 
For a DCG $G$, let $S(G)$ denote the *skeleton* of $G$, where $(j,k) \in S(G)$ is a real or virtual edge.

[Figure \[Fig:Sec3a\]: four five-node DCGs $G_1, G_2, G_3, G_4$ forming two Markov equivalence classes, $\mathcal{M}(G_1) = \{G_1, G_2\}$ and $\mathcal{M}(G_3) = \{G_3, G_4\}$; virtual edges are drawn as dotted lines.]

Using this definition of the skeleton $S(G)$ for a DCG $G$, the definitions of the identifiable and weak [SMR]{} assumptions carry over from DAG to DCG models. For completeness, we re-state the definitions here. \[DefSMRDCG\] A DCG model $(G^* ,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|S(G^*)| < |S(G)|$ for every DCG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$. \[DefFruDCG\] A DCG model $(G^* ,\mathbb{P})$ satisfies the weak [SMR]{} assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|S(G^*)| \leq |S(G)|$ for every DCG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$. Both the [SMR]{} and SGS-minimality assumptions prefer graphs with fewer total edges. The main difference between the SGS-minimality assumption and the [SMR]{} assumptions is that the SGS-minimality assumption requires that there be no DCG with a *strict subset* of edges, whereas the [SMR]{} assumptions require that there be no DCG with *fewer* edges. Unfortunately, as we observe later, unlike for DAG models the identifiable [SMR]{} assumption is not weaker than the CFC for DCG models. Therefore, the identifiable [SMR]{} assumption does not guarantee identifiability of [MECs]{} for DCG models. On the other hand, while the weak [SMR]{} assumption may not guarantee uniqueness, we prove it is a strictly weaker assumption than the CFC. 
We explore the relationships between the CFC, identifiable and weak [SMR]{}, and minimality assumptions in the next section.

Comparison of SMR, CFC and minimality assumptions for DCG models {#SubSecSMR}
----------------------------------------------------------------

Before presenting our main result in this section, we provide a lemma which highlights an important difference between the behavior of the [SMR]{} assumptions for graphical models with cycles and for DAG models. Recall that the [SMR]{} assumptions involve counting the number of edges, whereas the CFC and P-minimality assumption involve d-separation rules. First, we provide a fundamental link between the presence of an edge in $S(G)$ and d-separation/connection rules. \[Lem:Sec3a\] For a DCG $G$, $(j,k) \in S(G)$ if and only if $j$ is d-connected to $k$ given $S$ for all $S \subset V \setminus \{j,k\}$. First, we show that if $(j,k) \in S(G)$ then $j$ is d-connected to $k$ given every $S \subset V \setminus \{j,k\}$: if $(j,k)$ is a real edge, the single-edge path between $j$ and $k$ has no interior vertices and hence cannot be blocked; if $(j,k)$ is a virtual edge with common child $\ell$, then either $\ell \in \mbox{an}(S)$ and the path $j \rightarrow \ell \leftarrow k$ is open, or $\ell \notin \mbox{an}(S)$, in which case the path that follows the edge into $\ell$ and then a directed path from $\ell$ to $j$ or $k$ consists of non-colliders, none of which can lie in $S$ (any such vertex in $S$ would place $\ell$ in $\mbox{an}(S)$). Second, we prove that if $(j,k) \notin S(G)$ then there exists $S \subset V \setminus \{j,k\}$ such that $j$ is d-separated from $k$ given $S$. Let $S = (\mbox{an}(j) \cup \mbox{an}(k)) \setminus \{j,k\}$. Since $(j,k) \notin S(G)$, $j$ and $k$ are not really adjacent and have no common child that is an ancestor of $j$ or $k$; one can then check that every undirected path between $j$ and $k$ is blocked by $S$, and therefore $j$ is d-separated from $k$ given $S$. This completes the proof. Note that the above statement is true for real or virtual edges, not real edges alone. We now state an important lemma which shows the key difference in comparing the [SMR]{} assumptions to other identifiability assumptions (CFC, P-minimality, SGS-minimality) for graphical models with cycles, a difference which does not arise for DAG models. 
\[Lem:Sec3b\]

- For any two DCGs $G_1$ and $G_2$, $D_{sep}(G_1) \subseteq D_{sep}(G_2)$ implies $S(G_2) \subseteq S(G_1)$.

- There exist two DCGs $G_1$ and $G_2$ such that $S(G_1) = S(G_2)$ but $D_{sep}(G_1) \subsetneq D_{sep}(G_2)$. For DAGs, no two such graphs exist.

We begin with the proof of (a). Suppose that $S(G_2)$ is not a sub-skeleton of $S(G_1)$, meaning that there exists a pair $(j,k) \in S(G_2)$ with $(j,k) \notin S(G_1)$. By Lemma \[Lem:Sec3a\], $j$ is d-connected to $k$ given every $S \subset V \setminus \{j,k\}$ in $G_2$, while there exists $S \subset V \setminus \{j,k\}$ such that $j$ is d-separated from $k$ given $S$ in $G_1$. This d-separation rule belongs to $D_{sep}(G_1)$ but not to $D_{sep}(G_2)$, contradicting $D_{sep}(G_1) \subseteq D_{sep}(G_2)$. For (b), we refer to the example in Figure \[Fig:Sec3b\]. In Figure \[Fig:Sec3b\], the unshielded triple $(1, 4, 2)$ is a conductor in $G_1$ and an imperfect non-conductor in $G_2$ because of the reversed directed edge between $4$ and $5$. By the property of a conductor, $1$ is not d-separated from $4$ given the empty set in $G_1$. In contrast, in $G_2$, $1$ is d-separated from $4$ given the empty set. All other d-separation rules are the same for $G_1$ and $G_2$. 
[Figure \[Fig:Sec3b\]: two five-node DCGs $G_1$ and $G_2$ that are identical except for the edge between $4$ and $5$, which is $5 \rightarrow 4$ in $G_1$ and $4 \rightarrow 5$ in $G_2$ (highlighted in red); the virtual edges $(1,3)$ and $(2,4)$ are drawn as dotted lines.]

Lemma \[Lem:Sec3b\] (a) holds for both DAGs and DCGs, and allows us to conclude a subset-superset relation between edges in the skeleton and d-separation rules in a graph $G$. Part (b) is where there is a key difference between DAGs and directed graphs with cycles: it asserts that there are examples in which the edge sets of the skeletons are identical, yet one graph entails a strict superset of d-separation rules. Now we present the main result of this section, which compares the identifiable and weak [SMR]{} assumptions with the CFC and P-minimality assumption. \[Thm:Sec3a\] For DCG models,

- the weak [SMR]{} assumption is weaker than the CFC. 
- there exists a DCG model $(G, \mathbb{P})$ satisfying the CFC that does not satisfy the identifiable [SMR]{} assumption. - the identifiable [SMR]{} assumption is stronger than the P-minimality assumption. - there exists a DCG model $(G, \mathbb{P})$ satisfying the weak [SMR]{} assumption that does not satisfy the P-minimality assumption. <!-- --> - The proof for (a) follows from Lemma \[Lem:Sec3b\] (a). If a DCG model $(G^*, \mathbb{P})$ satisfies the CFC, then for any graph $G$ such that $(G, \mathbb{P})$ satisfies the CMC, $D_{sep}(G) \subseteq D_{sep}(G^*)$. Hence based on Lemma \[Lem:Sec3b\] (a), $S(G^*) \subseteq S(G)$ and $(G^*,\mathbb{P})$ satisfies the weak [SMR]{} assumption. - We refer to the example in Figure \[Fig:Sec3b\] where $(G_2, \mathbb{P})$ satisfies the CFC and fails to satisfy the identifiable [SMR]{} assumption because $S(G_1) = S(G_2)$ and $(G_1, \mathbb{P})$ satisfies the CMC. - The proof for (c) again follows from Lemma \[Lem:Sec3b\] (a). Suppose that a DCG model $(G^*, \mathbb{P})$ fails to satisfy the P-minimality assumption. This implies that there exists a DCG $G$ such that $(G, \mathbb{P})$ satisfies the CMC, $G \notin \mathcal{M}(G^*)$ and $D_{sep}(G^*) \subset D_{sep}(G)$. Lemma \[Lem:Sec3b\] (a) implies $S(G) \subseteq S(G^*)$. Hence $G^*$ cannot have the fewest edges uniquely, therefore $(G^*, \mathbb{P})$ fails to satisfy the identifiable [SMR]{} assumption. - We refer to the example in Figure \[Fig:Sec3b\] where $(G_1,\mathbb{P})$ satisfies the weak [SMR]{} assumption and fails to satisfy the P-minimality assumption. Further explanation is given in Figure \[Fig:App2\] in the appendix. Theorem \[Thm:Sec3a\] shows that if a DCG model $(G, \mathbb{P})$ satisfies the CFC, the weak [SMR]{} assumption is satisfied whereas the identifiable [SMR]{} assumption is not necessarily satisfied. 
For DAG models, the identifiable [SMR]{} assumption is strictly weaker than the CFC and the identifiable [SMR]{} assumption guarantees identifiability of the true [MEC]{}. However, Theorem \[Thm:Sec3a\] (b) implies that the identifiable [SMR]{} assumption is not strictly weaker than the CFC for DCG models. On the other hand, unlike for DAG models, the weak [SMR]{} assumption does not imply the P-minimality assumption for DCG models, according to (d). In Section \[SecSimulation\], we implement an algorithm that uses the identifiable [SMR]{} assumption and the results seem to suggest that on average for DCG models, the identifiable [SMR]{} assumption is weaker than the CFC. New principle: Maximum d-separation rules (MDR) {#SecMaxDSep} =============================================== In light of the fact that the identifiable [SMR]{} assumption does not lead to a strictly weaker assumption than the CFC, we introduce the maximum d-separation rules (MDR) assumption. The [MDR]{} assumption asserts that $G^*$ entails more d-separation rules than any other graph satisfying the CMC according to the given distribution $\mathbb{P}$. We use $CI(\mathbb{P})$ to denote the conditional independence (CI) statements corresponding to the distribution $\mathbb{P}$. A DCG model $(G^* ,\mathbb{P})$ satisfies the maximum *d-separation* rules (MDR) assumption if $(G^* ,\mathbb{P})$ satisfies the CMC and $|D_{sep}(G)| < |D_{sep}(G^*)|$ for every DCG $G$ such that $(G ,\mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$. There is a natural and intuitive connection between the MDR assumption and the P-minimality assumption. Both assumptions encourage DCGs to entail more d-separation rules. 
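The [MDR]{} assumption suggests a direct, exhaustive search: enumerate candidate directed graphs, keep those satisfying the CMC with respect to $CI(\mathbb{P})$, and return the ones entailing the most d-separation rules. A sketch on three nodes (names ours; the enumeration excludes two-cycles because the simple path-based d-separation test used here does not handle parallel edges):

```python
from itertools import combinations, product

def ancestors(edges, seed):
    anc = set(seed)
    while True:
        new = {a for (a, b) in edges if b in anc} - anc
        if not new:
            return anc
        anc |= new

def d_connected(V, edges, j, k, S):
    S = set(S)
    anc_S = ancestors(edges, S)
    nbr = {v: set() for v in V}
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)

    def search(path):
        b = path[-1]
        for c in nbr[b] - set(path):
            if len(path) > 1:
                a = path[-2]
                collider = (a, b) in edges and (c, b) in edges
                if (collider and b not in anc_S) or (not collider and b in S):
                    continue
            if c == k or search(path + [c]):
                return True
        return False

    return search([j])

def d_separations(V, edges):
    rules = set()
    for j, k in combinations(sorted(V), 2):
        rest = sorted(set(V) - {j, k})
        for r in range(len(rest) + 1):
            for S in combinations(rest, r):
                if not d_connected(V, edges, j, k, S):
                    rules.add((j, k, S))
    return rules

def mdr_graphs(V, ci, candidates):
    """Markovian graphs entailing the maximum number of d-separation rules."""
    markovian = [(E, rules) for E in candidates
                 for rules in [d_separations(V, E)] if rules <= ci]
    best = max(len(rules) for _, rules in markovian)
    return [(E, rules) for E, rules in markovian if len(rules) == best]

V = {1, 2, 3}
ci = {(1, 3, (2,))}   # CI(P) of a chain: X1 indep X3 | X2 only
pairs = list(combinations(sorted(V), 2))
candidates = []
for choice in product((None, 0, 1), repeat=len(pairs)):
    E = set()
    for (a, b), c in zip(pairs, choice):
        if c is not None:
            E.add((a, b) if c == 0 else (b, a))
    candidates.append(E)

best = mdr_graphs(V, ci, candidates)
# Every maximizer entails exactly the one available rule, so the MDR
# search returns the chain's Markov equivalence class.
assert all(rules == ci for _, rules in best)
assert any(E == {(1, 2), (2, 3)} for E, _ in best)
```

As noted earlier, this kind of search is exponential in the number of possible edges, which is why the exhaustive algorithms developed in Section \[SecSimulation\] are restricted to small-scale models.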
The key difference between the P-minimality assumption and the MDR assumption is that the P-minimality assumption requires that there be no DCG entailing a *strict superset* of d-separation rules, whereas the MDR assumption requires that there be no DCG entailing a *greater number* of d-separation rules.

Comparison of [MDR]{} to CFC and minimality assumptions for DCGs {#SubSecMDROcc}
----------------------------------------------------------------

In this section, we provide a comparison of the MDR assumption to the CFC and P-minimality assumption. For ease of notation, let $\mathcal{G}_{M}(\mathbb{P})$ and $\mathcal{G}_{F}(\mathbb{P})$ denote the set of Markovian DCG models satisfying the MDR assumption and CFC, respectively. In addition, let $\mathcal{G}_{P}(\mathbb{P})$ denote the set of DCG models satisfying the P-minimality assumption. \[Thm:Sec4a\] Consider a DCG model $(G^*, \mathbb{P})$.

- If $\mathcal{G}_F(\mathbb{P}) \neq \emptyset$, then $\mathcal{G}_F (\mathbb{P}) = \mathcal{G}_{M}(\mathbb{P})$. Consequently if $(G^*, \mathbb{P})$ satisfies the CFC, then $\mathcal{G}_F(\mathbb{P}) = \mathcal{G}_{M}(\mathbb{P}) = \mathcal{M}(G^*)$.

- There exists a distribution $\mathbb{P}$ for which $\mathcal{G}_F(\mathbb{P}) = \emptyset$ while $(G^*, \mathbb{P})$ satisfies the [MDR]{} assumption and $\mathcal{G}_{M}(\mathbb{P}) = \mathcal{M}(G^*)$.

- $\mathcal{G}_{M}(\mathbb{P}) \subseteq \mathcal{G}_{P}(\mathbb{P})$.

- There exists a distribution $\mathbb{P}$ for which $\mathcal{G}_{M}(\mathbb{P}) = \emptyset$ while $(G^*, \mathbb{P})$ satisfies the P-minimality assumption and $\mathcal{G}_{P}(\mathbb{P}) \supseteq \mathcal{M}(G^*)$.

<!-- -->

- For (a), suppose that $(G^*, \mathbb{P})$ satisfies the CFC. Then $CI(\mathbb{P})$ corresponds to the set of d-separation rules entailed by $G^*$. 
Note that if $(G, \mathbb{P})$ satisfies the CMC and $G \notin \mathcal{M}(G^*)$, then $CI(\mathbb{P})$ is a strict superset of the set of d-separation rules entailed by $G$, and therefore $D_{sep}(G) \subset D_{sep}(G^*)$. This allows us to conclude that exactly the graphs belonging to $\mathcal{M}(G^*)$ entail the maximum number of d-separation rules among graphs satisfying the CMC. Furthermore, by the CFC, $\mathcal{G}_F(\mathbb{P}) = \mathcal{M}(G^*)$, which completes the proof. - For (c), suppose that $(G^*,\mathbb{P})$ fails to satisfy the P-minimality assumption. By the definition of the P-minimality assumption, there exists $(G,\mathbb{P})$ satisfying the CMC such that $G \notin \mathcal{M}(G^*)$ and $D_{sep}(G^*) \subset D_{sep}(G)$. Hence $G^*$ entails strictly fewer d-separation rules than $G$, and therefore $(G^*,\mathbb{P})$ violates the [MDR]{} assumption. - For (b) and (d), we refer to the example in Figure $\ref{fig:Sec4a}$. Suppose that $X_1$, $X_2$, $X_3$, $X_4$ are random variables with distribution $\mathbb{P}$ with the following CI statements: $$\label{CIrelations} CI(\mathbb{P}) = \{X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_3 \mid X_2;~X_2 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid X_1, X_3;~X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_2 \mid X_4\}.$$ We show that $(G_1, \mathbb{P})$ satisfies the MDR assumption but not the CFC, whereas $(G_2, \mathbb{P})$ satisfies the P-minimality assumption but not the MDR assumption. Any graph satisfying the CMC with respect to $\mathbb{P}$ must entail only a subset of the three d-separation rules $\{X_1~\mbox{d-sep}~X_3 \mid X_2;~X_2~\mbox{d-sep}~X_4 \mid X_1,X_3;~X_1~\mbox{d-sep}~X_2 \mid X_4 \}$. Clearly $D_{sep}(G_1) = \{X_1 ~\mbox{d-sep} ~X_3 \mid X_2; ~X_2 ~\mbox{d-sep} ~X_4 \mid X_1, X_3\}$, so $(G_1, \mathbb{P})$ satisfies the CMC. It can be shown that no graph other than those Markov equivalent to $G_1$ entails two or more of these d-separation rules. 
Hence no graph satisfies the CFC with respect to $\mathbb{P}$, since no graph entails all three d-separation rules; and $(G_1, \mathbb{P})$ satisfies the MDR assumption, because no other graph satisfying the CMC with respect to $\mathbb{P}$ entails as many d-separation rules as $G_1$.

- Note that $G_2$ entails the single d-separation rule $D_{sep}(G_2) = \{X_1~\mbox{d-sep}~X_2 \mid X_4\}$, and it is clear that $(G_2, \mathbb{P})$ satisfies the CMC. If $(G_2, \mathbb{P})$ did not satisfy the P-minimality assumption, there would exist a graph $G$ such that $(G,\mathbb{P})$ satisfies the CMC and $D_{sep}(G_2) \subsetneq D_{sep}(G)$. It can be shown that no such graph exists; therefore $(G_2, \mathbb{P})$ satisfies the P-minimality assumption. Clearly, $(G_2, \mathbb{P})$ fails to satisfy the [MDR]{} assumption because $G_1$ entails more d-separation rules.

[Figure \[fig:Sec4a\]: the graphs $G_1$ (edges $X_1 \to X_2 \to X_3 \to X_4$ and $X_1 \to X_4$) and $G_2$ (edges $X_1 \to X_3$, $X_2 \to X_3$, $X_2 \to X_4$, $X_4 \to X_3$, $X_4 \to X_1$).]

Theorem \[Thm:Sec4a\] (a) asserts that whenever the set of DCG models satisfying the CFC is non-empty, it is equal to the set of DCG models satisfying the
[MDR]{} assumption. Part (b) states that there exists a distribution for which no DCG model satisfies the CFC, while the set of DCG models satisfying the [MDR]{} assumption consists exactly of its [MEC]{}. Hence, (a) and (b) show that the [MDR]{} assumption is strictly superior to the CFC in terms of recovering the true [MEC]{}. Theorem \[Thm:Sec4a\] (c) states that any DCG model satisfying the [MDR]{} assumption must lie in the set of DCG models satisfying the P-minimality assumption, and part (d) asserts that there exist DCG models satisfying the P-minimality assumption but violating the [MDR]{} assumption. Therefore, (c) and (d) show that the [MDR]{} assumption is strictly stronger than the P-minimality assumption.

Comparison between the [MDR]{} and [SMR]{} assumptions {#SubSecMDRSMR}
------------------------------------------------------

We now show that the [MDR]{} assumption is neither weaker nor stronger than the [SMR]{} assumptions, for both DAG and DCG models.

\[Lem:Sec4a\]

- There exists a DAG model satisfying the identifiable [SMR]{} assumption that does not satisfy the [MDR]{} assumption. Conversely, there exists a DAG model satisfying the [MDR]{} assumption that does not satisfy the weak [SMR]{} assumption.

- There exists a DCG model that is not a DAG model for which the same conclusions as in (a) hold.

Our proof of Lemma \[Lem:Sec4a\] constructs two sets of examples: one for DAGs, corresponding to (a), and one for cyclic graphs, corresponding to (b). For (a), Figure $\ref{fig:Sec4c}$ displays two DAGs, $G_1$ and $G_2$, which are clearly not in the same [MEC]{}. For clarity, we use red arrows to represent the edges/directions that differ between the graphs. We associate the same distribution $\mathbb{P}$ with each DAG, where $CI(\mathbb{P})$ is provided in Appendix \[Proof:lemma(a)\]. With this $CI(\mathbb{P})$, both $(G_1, \mathbb{P})$ and $(G_2, \mathbb{P})$ satisfy the CMC (explained in Appendix \[Proof:lemma(a)\]).
The main point of this example is that $(G_2,\mathbb{P})$ satisfies the identifiable and weak [SMR]{} assumptions whereas $(G_1,\mathbb{P})$ satisfies the [MDR]{} assumption; hence two different graphs are selected from the same $\mathbb{P}$, depending on which identifiability assumption is adopted. A detailed proof that $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption whereas $(G_2,\mathbb{P})$ satisfies the [SMR]{} assumption is provided in Appendix \[Proof:lemma(a)\].

[Figure \[fig:Sec4c\]: the DAGs $G_1$ and $G_2$ on the nodes $X_1, \dots, X_5$; red arrows mark the edges/directions that differ between the two graphs.]

[Figure \[fig:Sec4d\]: the DCGs $G_1$ and $G_2$ on the nodes $X_1, \dots, X_{11}, Y$; red arrows (and dotted red edges for virtual adjacencies) mark the edges that differ between the two graphs.]

For (b), Figure \[fig:Sec4d\] displays two DCGs, $G_1$ and $G_2$, which do not belong to the same [MEC]{}. Once again, red arrows denote the edges (both real and virtual) that differ between the graphs. We associate with each graph the same distribution $\mathbb{P}$, whose conditional independence statements $CI(\mathbb{P})$ are provided in Appendix \[Proof:lemma(b)\], such that both $(G_1,\mathbb{P})$ and $(G_2,\mathbb{P})$ satisfy the CMC (explained in Appendix \[Proof:lemma(b)\]). Again, the main idea of this example is that $(G_1,\mathbb{P})$ satisfies the [MDR]{} assumption whereas $(G_2,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption; a detailed proof can be found in Appendix \[Proof:lemma(b)\]. Intuitively, the reason why fewer edges do not necessarily translate into more entailed d-separation rules is that the number of d-separation rules a graph entails depends on where its edges sit relative to the rest of the graph and on which additional paths they open, not merely on how many edges there are.
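The proofs above rest on enumerating the d-separation rules entailed by candidate graphs. For the DAG case this check can be mechanized via the classical moralization criterion; the sketch below is our own illustration (not the authors' code) and assumes a DAG encoded as a child-to-parents dictionary. It does not apply to cyclic graphs, whose d-separation semantics differ.

```python
from itertools import combinations

def ancestors(parents, nodes):
    """All ancestors of `nodes` in the DAG, including the nodes themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        v = stack.pop()
        for p in parents.get(v, ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, x, y, z):
    """Decide 'x d-sep y | z' for a DAG given as a child -> parents dict,
    using the moralization criterion: restrict to the ancestral graph of
    {x, y} U z, moralize it, delete z, and test whether x still reaches y."""
    keep = ancestors(parents, {x, y} | set(z))
    adj = {v: set() for v in keep}
    for child in keep:
        ps = [p for p in parents.get(child, ()) if p in keep]
        for p in ps:                       # drop edge directions
            adj[p].add(child)
            adj[child].add(p)
        for p, q in combinations(ps, 2):   # "marry" co-parents
            adj[p].add(q)
            adj[q].add(p)
    stack, seen = [x], {x}
    while stack:                           # search, never entering z
        v = stack.pop()
        if v == y:
            return False
        for w in adj[v]:
            if w not in seen and w not in z:
                seen.add(w)
                stack.append(w)
    return True
```

Applied to the four-node graph $G_1$ of Figure \[fig:Sec4a\] (parents $X_2\colon \{X_1\}$, $X_3\colon \{X_2\}$, $X_4\colon \{X_1, X_3\}$), this confirms $X_1$ d-sep $X_3 \mid X_2$ and $X_2$ d-sep $X_4 \mid X_1, X_3$, while additionally conditioning on the collider $X_4$ destroys the first rule.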
In summary, the flow chart in Figure \[Flowchart\] shows how the CFC, SMR, MDR and minimality assumptions are related, for both DAG and DCG models.

[Figure \[Flowchart\]: flow charts relating the assumptions. For DAG models: CFC, MDR, SMR, P-minimality and SGS-minimality, with arrows labelled by Theorem \[Thm:Sec4a\] (a), Theorem \[Thm:Sec2b\], Theorem \[Thm:Sec4a\] (c), Theorem \[Thm:Sec2a\] and Lemma \[Lem:Sec4a\] (a). For DCG models: CFC, MDR, identifiable SMR, weak SMR, P-minimality and SGS-minimality, with arrows labelled by Theorem \[Thm:Sec4a\] (a), Theorem \[Thm:Sec3a\] (a), (c), (d), Theorem \[Thm:Sec4a\] (c), Theorem \[Thm:Sec2a\] and Lemma \[Lem:Sec4a\] (b).]

Simulation results {#SecSimulation}
==================

In Sections \[SecSMRFrugality\] and \[SecMaxDSep\], we proved that the [MDR]{} assumption is strictly weaker than the CFC and strictly stronger than the P-minimality assumption for both DAG and DCG models, and that the identifiable [SMR]{} assumption is stronger than the P-minimality assumption for DCG models.
In this section, we support our theoretical results with numerical experiments on small-scale Gaussian linear DCG models (see e.g., [@Spirtes1995]) using the generic Algorithm \[algorithm\]. We also compare Algorithm \[algorithm\] to state-of-the-art algorithms for small-scale DCG models in terms of recovering the skeleton of a DCG model.

Algorithm \[algorithm\] (generic skeleton):

- Step 1: Find all conditional independence statements $\widehat{CI}(\mathbb{P})$ using a conditional independence test.

- Step 2: Find the set of graphs $\widehat{\mathcal{G}}$ satisfying the given identifiability assumption (initializing $\widehat{\mathcal{M}}(G) \gets \emptyset$ and $\widehat{S}(G) \gets \emptyset$).

DCG model and simulation setup
------------------------------

Our simulation study draws samples from $p$-node random Gaussian linear DCG models, where the distribution $\mathbb{P}$ is defined by the linear structural equations $$\label{eq:GGM} (X_1,X_2,\cdots,X_p)^T = B^T (X_1,X_2,\cdots,X_p)^T + \epsilon,$$ where $B \in \mathbb{R}^{p \times p}$ is an edge-weight matrix with $B_{jk} = \beta_{jk}$, the weight of the edge from $X_j$ to $X_k$. Furthermore, $\epsilon \sim \mathcal{N}(\mathbf{0}_{p}, I_p)$, where $\mathbf{0}_{p} = (0,0,\cdots,0)^T \in \mathbb{R}^{p}$ and $I_p \in \mathbb{R}^{p \times p}$ is the identity matrix. The matrix $B$ encodes the DCG structure: if $\beta_{jk}$ is non-zero, then $X_j \to X_k$ and the pair $(X_j, X_k)$ is *really adjacent*; otherwise there is no directed edge from $X_j$ to $X_k$. In addition, if there is a set of nodes $S = (s_1, s_2,\cdots,s_t)$ such that the product $\beta_{j s_1} \beta_{k s_1} \beta_{s_1 s_2} \cdots \beta_{s_t j}$ is non-zero, the pair $(X_j, X_k)$ is *virtually adjacent*. Note that if the graph were a DAG, we would need to impose the constraint that $B$ is upper triangular (under a suitable node ordering); for DCGs we impose no such constraint.
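As an illustration of this setup (a sketch under our own conventions, not the authors' implementation): solving the structural equations for $X$ gives $X = (I - B^T)^{-1}\epsilon$, which is well defined whenever $I - B^T$ is invertible — automatically for DAGs, and for cyclic $B$ as long as the cycle products keep $I - B$ non-singular.

```python
import numpy as np

def simulate_dcg(B, n, seed=None):
    """Draw n samples from X = B^T X + eps with eps ~ N(0, I_p).

    Solving the structural equations gives X = (I - B^T)^{-1} eps;
    with samples stored as rows this is eps @ (I - B)^{-1}.
    """
    rng = np.random.default_rng(seed)
    p = B.shape[0]
    eps = rng.standard_normal((n, p))
    return eps @ np.linalg.inv(np.eye(p) - B)
```

The implied population covariance is $(I - B^T)^{-1}(I - B)^{-1}$; for the single edge $X_1 \to X_2$ with weight $0.5$, for instance, the covariance of $(X_1, X_2)$ has entries $1$, $0.5$ and $1.25$.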
We present simulation results for two sets of models: DCG models whose edges and directions are determined randomly, and DCG models whose edges follow a specific graph structure. For the random DCG models, the simulation uses $100$ realizations of 5-node random Gaussian linear DCG models, where we impose sparsity by assigning a probability that each entry of the matrix $B$ is non-zero; depending on this probability, the expected neighborhood size ranges from $1$ (sparse graph) to $4$ (fully connected graph). The non-zero edge weights were chosen uniformly at random from $[-1, -0.25] \cup [0.25, 1]$, which ensures that they are bounded away from $0$. We also ran simulations using $100$ realizations of 5-node Gaussian linear DCG models with specific graph structures, namely trees, bipartite graphs, and cycles. Figure \[fig:Sec5g\] shows examples of the skeletons of these special graphs. We generate these graphs as follows. First, we fix the skeleton of the desired graph according to Figure \[fig:Sec5g\] and draw the non-zero edge weights uniformly at random from $[-1, -0.25] \cup [0.25, 1]$. Second, we repeatedly assign a randomly chosen direction to each edge until the graph has at least one possible directed cycle; consequently, the bipartite graphs always contain at least one directed cycle, whereas the tree graphs contain none, since their skeletons are acyclic. For the cycle graphs, we fix the edge directions to form the directed cycle $X_1 \to X_2 \to \cdots \to X_5 \to X_1$.
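The edge-weight sampling just described can be sketched as follows (our own illustration; `mask` is a hypothetical boolean matrix marking which entries of $B$ are non-zero). Drawing a magnitude uniformly on $[0.25, 1]$ and an independent random sign is equivalent to drawing uniformly from $[-1, -0.25] \cup [0.25, 1]$, since the two intervals have equal length.

```python
import numpy as np

def draw_edge_weights(mask, seed=None):
    """Fill the non-zero pattern `mask` (boolean p x p array) with weights
    drawn uniformly from [-1, -0.25] U [0.25, 1]: magnitudes uniform on
    [0.25, 1] with random signs, so all weights are bounded away from 0."""
    rng = np.random.default_rng(seed)
    mag = rng.uniform(0.25, 1.0, size=mask.shape)
    sign = rng.choice([-1.0, 1.0], size=mask.shape)
    return np.where(mask, sign * mag, 0.0)
```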
[Figure \[fig:Sec5g\]: skeletons of the special graph structures on $X_1, \dots, X_5$: Tree (1), Tree (2), Bipartite, and Cycle.]

Comparison of assumptions
-------------------------

In this section we compare the SMR, MDR, CFC and minimality assumptions in simulation. The CI statements were estimated from $n$ independent samples drawn from $\mathbb{P}$ using Fisher's conditional correlation test at significance level $\alpha = 0.001$. We enumerated all directed graphs satisfying the CMC and measured the proportion of simulated models satisfying each assumption (CFC, [MDR]{}, identifiable [SMR]{}, P-minimality). In Figures \[fig:Sec5a\], \[fig:Sec5b\] and \[fig:Sec5e\], we assess how restrictive each identifiability assumption is for random DCG models and for the specific graph structures, with sample sizes $n \in \{100, 200, 500, 1000\}$ and expected neighborhood sizes from $1$ (sparse graph) to $4$ (fully connected graph). As shown in Figures \[fig:Sec5b\] and \[fig:Sec5e\], the proportion of graphs satisfying each assumption increases with sample size, owing to fewer errors in the CI tests. Furthermore, for all sample sizes and expected neighborhood sizes, more DCG models satisfy the [MDR]{} assumption than the CFC, and fewer DCG models satisfy the [MDR]{} assumption than the P-minimality assumption. Similar relationships hold between the CFC, the identifiable [SMR]{} and the P-minimality assumptions. The simulation study thus supports our theoretical result that the [MDR]{} assumption is weaker than the CFC but stronger than the P-minimality assumption, and that the identifiable [SMR]{} assumption is stronger than the P-minimality assumption.
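In the Gaussian setting, the CI test referred to above is the standard Fisher $z$-transform of the sample partial correlation. A minimal sketch of such a test (our own, using NumPy plus the Python standard library; `data` holds samples as rows):

```python
import numpy as np
from statistics import NormalDist

def fisher_z_independent(data, i, j, S=(), alpha=0.001):
    """Return True iff 'X_i independent of X_j given X_S' is NOT rejected.

    The sample partial correlation r is read off the inverse covariance of
    the selected columns; z = sqrt(n - |S| - 3) * atanh(r) is compared with
    the two-sided N(0, 1) critical value at level alpha.
    """
    n = data.shape[0]
    cols = [i, j] + list(S)
    prec = np.linalg.inv(np.cov(data[:, cols], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    z = np.sqrt(n - len(S) - 3) * np.arctanh(r)
    return abs(z) <= NormalDist().inv_cdf(1 - alpha / 2)
```

For data generated from the chain $X_0 \to X_1 \to X_2$, the test retains $X_0 \perp X_2 \mid X_1$ while rejecting the marginal independence of adjacent variables.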
Although there are no theoretical guarantees that the identifiable [SMR]{} assumption is stronger than the [MDR]{} assumption and weaker than the CFC, Figures \[fig:Sec5a\] and \[fig:Sec5b\] indicate that, on average, the identifiable [SMR]{} assumption is substantially stronger than the [MDR]{} assumption and weaker than the CFC.

Comparison to state-of-the-art algorithms
-----------------------------------------

In this section, we compare Algorithm \[algorithm\] to state-of-the-art algorithms for small-scale DCG models in terms of recovering the skeleton $S(G)$ of the graph. This addresses how likely Algorithm \[algorithm\], under each assumption, is to recover the skeleton of a graph compared to state-of-the-art algorithms. Once again we used Fisher's conditional correlation test at significance level $\alpha = 0.001$ for Step 1 of Algorithm \[algorithm\], and the [MDR]{} and identifiable [SMR]{} assumptions for Step 2. As comparison algorithms, we used the state-of-the-art GES algorithm [@chickering2002finding] and the FCI$+$ algorithm [@claassen2013learning] for small-scale DCG models, via the R packages 'pcalg' [@Kalisch2012] for the FCI$+$ algorithm and 'bnlearn' [@scutari2009learning] for the GES algorithm.

Figures \[fig:Sec5c\] and \[fig:Sec5d\] show skeleton recovery rates for random DCG models with sample sizes $n \in \{100, 200, 500, 1000\}$ and expected neighborhood sizes from $1$ (sparse graph) to $4$ (fully connected graph). The accuracy increases with sample size, owing to fewer errors in the CI tests. Algorithm \[algorithm\] based on the [MDR]{} and identifiable [SMR]{} assumptions outperforms the FCI$+$ algorithm on average.
For dense graphs, the GES algorithm outperforms the other algorithms, because GES often prefers dense graphs. However, the GES algorithm is not theoretically consistent and cannot recover directed graphs with cycles, whereas the other algorithms are designed for recovering DCG models (see e.g., Figure \[fig:Sec5f\]).

Figure \[fig:Sec5f\] shows the accuracy for each type of graph (Tree, Cycle, Bipartite) for Algorithm \[algorithm\] based on the [MDR]{} and identifiable [SMR]{} assumptions, and for the GES and FCI$+$ algorithms. The simulation results show that Algorithm \[algorithm\] based on the [MDR]{} and identifiable [SMR]{} assumptions compares favorably with the FCI$+$ and GES algorithms for small-scale DCG models.

Acknowledgement {#acknowledgement .unnumbered}
===============

GP and GR were both supported by NSF DMS-1407028 over the duration of this project.

Appendix
========

Examples for Theorem \[Thm:Sec3a\] (d) {#examples-for-theoremthmsec3a-d .unnumbered}
--------------------------------------

[Figure \[Fig:App2\]: the DCGs $G_1$ and $G_2$ on $X_1, \dots, X_5$. In $G_1$ the edges carry the weights $\alpha_1$ ($X_1 \to X_2$), $\alpha_3$ ($X_2 \to X_3$), $\alpha_5$ ($X_3 \to X_2$), $\alpha_4$ ($X_4 \to X_3$), $-\alpha_3\alpha_7$ ($X_2 \to X_5$), $\alpha_7$ ($X_3 \to X_5$) and $\alpha_2$ ($X_5 \to X_4$); $G_2$ has the same edges except that the red edge between $X_4$ and $X_5$ is reversed ($X_4 \to X_5$).]

Suppose that $(G_1,\mathbb{P})$ is a Gaussian linear DCG model with the edge weights specified in Figure \[Fig:App2\]. With this choice of $\mathbb{P}$ based on $G_1$, the CI statements consist of the d-separation rules entailed by $G_1$ together with the additional statements $CI(\mathbb{P}) \supset \{ X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid \emptyset \textrm{ or } X_5,~ X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid \emptyset \textrm{ or } X_4\}$. It is clear that $(G_2, \mathbb{P})$ satisfies the CMC, $D_{sep}(G_1) \subset D_{sep}(G_2)$ and $D_{sep}(G_1) \neq D_{sep}(G_2)$ (explained in Section \[SecSMRFrugality\]). This implies that $(G_1, \mathbb{P})$ fails to satisfy the P-minimality assumption.

Now we prove that $(G_1, \mathbb{P})$ satisfies the weak [SMR]{} assumption. Suppose, for contradiction, that it does not. Then there exists a $G$ such that $(G,\mathbb{P})$ satisfies the CMC and $G$ has fewer edges than $G_1$. By Lemma \[Lem:Sec3b\], if $(G, \mathbb{P})$ satisfies the CFC, then $G$ satisfies the weak [SMR]{} assumption. Note that $G_1$ has no edge between $X_1$ and $X_4$, nor between $X_1$ and $X_5$. Since the only conditional independence statements not entailed by $G_1$ are $\{ X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid \emptyset \textrm{ or } X_5,~ X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid \emptyset \textrm{ or } X_4\}$, no graph satisfying the CMC with respect to $\mathbb{P}$ can have fewer edges than $G_1$. This is a contradiction, and hence $(G_1, \mathbb{P})$ satisfies the weak [SMR]{} assumption.
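The extra CI statements above arise from exact path cancellation: the two directed paths from $X_2$ to $X_5$ (the direct edge, with weight $-\alpha_3\alpha_7$, and the route through $X_3$, with weight $\alpha_3\alpha_7$) cancel, so $X_1$ becomes uncorrelated with $X_5$, and hence with $X_4$. This can be checked numerically; the sketch below is our own illustration, with the arbitrary values $\alpha_i = 0.5$:

```python
import numpy as np

# Edge-weight matrix of G_1 from Figure [Fig:App2]; indices 0..4 = X_1..X_5.
a1 = a2 = a3 = a4 = a5 = a7 = 0.5   # arbitrary non-zero values
B = np.zeros((5, 5))
B[0, 1] = a1            # X1 -> X2
B[1, 2] = a3            # X2 -> X3
B[2, 1] = a5            # X3 -> X2   (2-cycle with the edge above)
B[3, 2] = a4            # X4 -> X3
B[1, 4] = -a3 * a7      # X2 -> X5   (tuned to cancel the X2 -> X3 -> X5 path)
B[2, 4] = a7            # X3 -> X5
B[4, 3] = a2            # X5 -> X4

# For X = B^T X + eps, eps ~ N(0, I): Sigma = (I - B^T)^{-1} (I - B)^{-1}.
M = np.linalg.inv(np.eye(5) - B.T)
Sigma = M @ M.T

# X1 is marginally uncorrelated with X4 and X5, although G_1 d-connects them.
print(abs(Sigma[0, 3]) < 1e-10, abs(Sigma[0, 4]) < 1e-10)
```

Since $\Sigma_{1,4} = \Sigma_{1,5} = 0$, the conditional statements $X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid X_5$ and $X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid X_4$ follow as well in the Gaussian case.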
Proof of Lemma \[Lem:Sec4a\] (a) {#Proof:lemma(a)}
---------------------------------

[Figure: the DAGs $G_1$ and $G_2$ from Figure \[fig:Sec4c\], reproduced for convenience.]

Here we show that $(G_1,\mathbb{P})$ satisfies the [MDR]{} assumption and $(G_2,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption, where $\mathbb{P}$ has the following CI statements: $$\begin{aligned} CI(\mathbb{P}) = \{ & X_2 {\protect\mathpalette{\protect\independenT}{\perp}}X_3 \mid (X_1, X_5) \textrm{ or } (X_1, X_4, X_5); X_2 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid X_1; \\ & X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid (X_2, X_5) \textrm{ or } (X_2, X_3, X_5); X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid (X_2, X_4); \\ & X_3 {\protect\mathpalette{\protect\independenT}{\perp}}X_4 \mid (X_1, X_5), (X_2, X_5),\textrm{ or } (X_1, X_2, X_5) \}.\end{aligned}$$ Clearly the DAGs $G_1$ and $G_2$ do not belong to the same [MEC]{}, since they have different skeletons.
To be explicit, we list all d-separation rules entailed by $G_1$ and $G_2$. Both graphs entail the following d-separation rules:

- $X_2$ is d-separated from $X_3$ given $(X_1, X_5)$ or $(X_1, X_4, X_5)$.

- $X_3$ is d-separated from $X_4$ given $(X_1, X_5)$ or $(X_1, X_2, X_5)$.

The d-separation rules entailed by $G_1$ but not by $G_2$ are:

- $X_1$ is d-separated from $X_4$ given $(X_2, X_5)$ or $(X_2, X_3, X_5)$.

- $X_3$ is d-separated from $X_4$ given $(X_2, X_5)$.

The d-separation rules entailed by $G_2$ but not by $G_1$ are:

- $X_1$ is d-separated from $X_5$ given $(X_2, X_4)$.

- $X_2$ is d-separated from $X_4$ given $X_1$.

With our choice of distribution, both DAG models $(G_1, \mathbb{P})$ and $(G_2, \mathbb{P})$ satisfy the CMC, and it is straightforward to see that $G_2$ has fewer edges than $G_1$ while $G_1$ entails more d-separation rules than $G_2$. An exhaustive search shows that no graph $G$ other than $G_2$ is as sparse as or sparser than $G_2$ with $(G, \mathbb{P})$ satisfying the CMC; likewise, an exhaustive search shows that $G_1$ entails the maximum number of d-separation rules among the graphs satisfying the CMC with respect to the distribution. Therefore $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption and $(G_2, \mathbb{P})$ satisfies the identifiable [SMR]{} assumption.
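The exhaustive searches in this proof amount to different selection rules over the CMC-satisfying candidates. A toy sketch of the three criteria (our own illustration; `candidates` maps a hypothetical graph name to a pair of edge count and set of entailed d-separation rules):

```python
def mdr_graphs(candidates):
    """MDR-style selection: keep the graphs entailing the maximum number
    of d-separation rules among all CMC-satisfying candidates."""
    best = max(len(rules) for _, rules in candidates.values())
    return {g for g, (_, rules) in candidates.items() if len(rules) == best}

def sparsest_graphs(candidates):
    """SMR-style selection: keep the graphs with the fewest edges."""
    best = min(edges for edges, _ in candidates.values())
    return {g for g, (edges, _) in candidates.items() if edges == best}

def p_minimal_graphs(candidates):
    """P-minimality-style selection: keep graphs whose entailed rules are
    not a strict subset of another candidate's entailed rules."""
    return {g for g, (_, r) in candidates.items()
            if not any(r < r2 for h, (_, r2) in candidates.items() if h != g)}
```

With candidates mirroring the lemma (a `G1` entailing more rules, a `G2` with fewer edges), the MDR and SMR rules pick different graphs while both survive or fail P-minimality according to their rule sets.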
Proof of Lemma \[Lem:Sec4a\] (b) {#Proof:lemma(b)}
---------------------------------

[Figure \[fig:Sec4dA\]: the DCGs $G_1$ and $G_2$ from Figure \[fig:Sec4d\], with edge weights specified for $G_2$: $\beta_1$ on $X_1 \to X_3$, $\beta_2$ on $X_5 \to X_3$, and $\beta_1\beta_2$ on $X_1 \to X_5$.]

Suppose that the pair $(G_2,\mathbb{P})$ is a Gaussian linear DCG model with the edge weights specified in Figure \[fig:Sec4dA\], where the non-specified edge weights can be chosen arbitrarily. Once again, to be explicit, we list all d-separation rules entailed by $G_1$ and $G_2$. Both graphs entail the following d-separation rules:

- For any node $A \in \{X_6,X_7,X_8\}$ and $B \in \{X_1,X_5\}$, $A$ is d-separated from $B$ given $\{X_2, X_3\} \cup C$ for any $C \subset \{ X_1,X_4,X_5,X_6,X_7,X_8, X_9, X_{10}, X_{11},Y \} \setminus \{A,B\}$.
- For any node $A \in \{X_9,X_{10},X_{11}\}$ and $B \in \{X_1,X_5\}$, $A$ is d-separated from $B$ given $\{X_3, X_4\} \cup C$ for any $C \subset \{X_1,X_2,X_3,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\} \setminus \{A,B\}$.
- For any nodes $A,B \in \{X_6,X_7,X_8\}$, $A$ is d-separated from $B$ given $\{X_2,X_3\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$.
- For any nodes $A,B \in \{X_9,X_{10},X_{11}\}$, $A$ is d-separated from $B$ given $\{X_3,X_4\} \cup C$ for any $C \subset \{X_1,X_2,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$.
- For any nodes $A \in \{X_6,X_7,X_8\}$ and $B \in \{X_4\}$, $A$ is d-separated from $B$ given $\{X_2,X_3\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$, or given $\{X_1,X_2,X_5\} \cup D$ for any $D \subset \{X_4,X_6,X_7,X_8,Y\}\setminus\{A,B\}$.
- For any nodes $A \in \{X_6,X_7,X_8\}$ and $B \in \{Y\}$, $A$ is d-separated from $B$ given $\{X_2,X_3\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$, or given $\{X_1,X_2,X_5\} \cup D$ for any $D \subset \{X_4,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$.
- For any nodes $A \in \{X_9,X_{10},X_{11}\}$ and $B \in \{X_2\}$, $A$ is d-separated from $B$ given $\{X_3,X_4\} \cup C$ for any $C \subset \{X_1,X_2,X_5,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$, or given $\{X_1,X_4,X_5\} \cup D$ for any $D \subset \{X_2,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$.
- For any nodes $A \in \{X_9,X_{10},X_{11}\}$ and $B \in \{Y\}$, $A$ is d-separated from $B$ given $\{X_3,X_4\} \cup C$ for any $C \subset \{X_1,X_2,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$, or given $\{X_1,X_4,X_5\} \cup D$ for any $D \subset \{X_2,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\}\setminus\{A,B\}$.
- For any nodes $A \in \{X_6,X_7,X_8\}$, $B \in \{X_9,X_{10},X_{11}\}$, $A$ is d-separated from $B$ given $\{X_3\} \cup C \cup D$ for $C \subset \{X_1,X_2,X_4\}$, $C \neq \emptyset$, and $D \subset \{X_1,X_2,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11},Y\}\setminus\{A,B,C\}$.
- $X_2$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_9,X_{10},X_{11},Y\}$.
- $X_3$ is d-separated from $X_4$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,Y\}$.
- $X_3$ is d-separated from $Y$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_6,X_7,X_8,X_9,X_{10},X_{11}\}$.
- $X_2$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_4,X_9,X_{10},X_{11},Y\}$.
- $X_4$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_2,X_6,X_7,X_8,Y\}$.
- $Y$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_2,X_6,X_7,X_8,X_4,X_9,X_{10},X_{11}\}$.

The d-separation rules entailed by $G_1$ but not by $G_2$ are as follows:

- $X_1$ is d-separated from $X_5$ given $\{X_2,X_3,X_4,Y\} \cup C$ for any $C \subset \{X_6,X_7,X_8,X_9,X_{10},X_{11}\}$.

Furthermore, the d-separation rules entailed by $G_2$ but not by $G_1$ are as follows:

- $X_2$ is d-separated from $X_4$ given $X_1$ or $\{X_1, Y\}$.
- $X_2$ is d-separated from $Y$ given $X_1$ or $\{X_1, X_4\}$.
- $X_4$ is d-separated from $Y$ given $X_1$ or $\{X_1, X_2\}$.

It can then be shown that, using the coefficients specified for $G_2$ in Figure \[fig:Sec4dA\], $CI(\mathbb{P})$ is the union of the CI statements implied by the sets of d-separation rules entailed by $G_1$ and by $G_2$. Therefore $(G_1,\mathbb{P})$ and $(G_2,\mathbb{P})$ satisfy the CMC. It is straightforward to see that $G_2$ is sparser than $G_1$, while $G_1$ entails more d-separation rules than $G_2$.
Now we prove that $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption and $(G_2, \mathbb{P})$ satisfies the identifiable [SMR]{} assumption. First we prove that $(G_2, \mathbb{P})$ satisfies the identifiable [SMR]{} assumption. Suppose that $(G_2,\mathbb{P})$ does not satisfy the identifiable [SMR]{} assumption. Then there exists a graph $G$, distinct from $G_2$, such that $(G, \mathbb{P})$ satisfies the CMC and $G$ has at most as many edges as $G_2$. Since the only CI statements of $\mathbb{P}$ not implied by the d-separation rules of $G_2$ are $X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid \{X_2,X_3,X_4,Y\} \cup C$ for any $C \subset \{X_6,X_7,X_8,X_9,X_{10},X_{11}\}$, and $(G, \mathbb{P})$ satisfies the CMC, we can consider two cases: a graph with an edge between $(X_1, X_5)$ and a graph without an edge between $(X_1, X_5)$. We first consider a graph without an edge between $(X_1, X_5)$. Since $G$ does not have an edge between $(X_1, X_5)$, by Lemma \[Lem:Sec3a\] $G$ must entail at least one d-separation rule from (a): $X_1$ is d-separated from $X_5$ given $\{X_2,X_3,X_4,Y\} \cup C$ for any $C \subset \{X_6,X_7,X_8,X_9,X_{10},X_{11}\}$. If $G$ does not have an edge between $(X_2, X_3)$, by Lemma \[Lem:Sec3a\] $G$ must entail at least one d-separation rule from (10): $X_2$ is d-separated from $X_3$ given $\{X_1, X_5\} \cup C$ for any $C \subset \{X_1,X_4,X_5,X_9,X_{10},X_{11},Y\}$. These two sets of d-separation rules can coexist only if a cycle $X_1 \to X_2 \to X_5 \to X_3 \to X_1$ or $X_1 \leftarrow X_2 \leftarrow X_5 \leftarrow X_3 \leftarrow X_1$ exists. In the same way, if $G$ does not have edges between $(X_3, X_4)$ and $(X_3, Y)$, there must be cycles $X_1 \to A \to X_5 \to X_3 \to X_1$ or $X_1 \leftarrow A \leftarrow X_5 \leftarrow X_3 \leftarrow X_1$ for any $A \in \{X_4, Y\}$, as occurs in $G_1$. However, these cycles create virtual edges between $(X_2, X_4)$, $(X_2, Y)$ or $(X_4, Y)$, as occurs in $G_1$.
Therefore $G$ must have at least $3$ edges, either real or virtual. This contradicts the assumption that $G$ has at most as many edges as $G_2$. Secondly, we consider a graph $G$ with an edge between $(X_1, X_5)$ such that $(G, \mathbb{P})$ satisfies the CMC and $G$ has fewer edges than $G_2$. Note that $G_1$ entails the maximum number of d-separation rules amongst graphs with an edge between $(X_1, X_5)$ satisfying the CMC, because $CI(\mathbb{P}) \setminus \{X_1 {\protect\mathpalette{\protect\independenT}{\perp}}X_5 \mid \{X_2,X_3,X_4,Y\} \cup C : C \subset \{X_6,X_7,X_8,X_9,X_{10},X_{11}\}\}$ is exactly matched by the d-separation rules entailed by $G_1$. This leads to $D_{sep}(G) \subset D_{sep}(G_1)$ and $D_{sep}(G) \neq D_{sep}(G_1)$. By Lemma \[Lem:Sec3b\], $G$ cannot contain fewer edges than $G_1$. However, since $G_2$ has fewer edges than $G_1$, this contradicts the assumption that $G$ has at most as many edges as $G_2$. Therefore, $(G_2,\mathbb{P})$ satisfies the identifiable [SMR]{} assumption. Now we prove that $(G_1, \mathbb{P})$ satisfies the [MDR]{} assumption. Suppose that $(G_1, \mathbb{P})$ fails to satisfy the [MDR]{} assumption. Then there is a graph $G$, distinct from $G_1$, such that $(G, \mathbb{P})$ satisfies the CMC and $G$ entails at least as many d-separation rules as $G_1$. Since $(G, \mathbb{P})$ satisfies the CMC, in order for $G$ to entail at least as many d-separation rules as $G_1$, $G$ must entail at least one d-separation rule from (b): $X_2$ is d-separated from $X_4$ given $X_1$ or $\{X_1, Y\}$; (c): $X_2$ is d-separated from $Y$ given $X_1$ or $\{X_1, X_4\}$; and (d): $X_4$ is d-separated from $Y$ given $X_1$ or $\{X_1, X_2\}$. By Lemma \[Lem:Sec3a\], this implies that $G$ does not have an edge between $(X_2, X_4)$, $(X_2, Y)$ or $(X_4, Y)$.
As we discussed, there is no graph satisfying the CMC without the edges $(X_2, X_4)$, $(X_2, Y)$, $(X_4, Y)$, and $(X_1, X_5)$ unless $G$ has additional edges, as occurs in $G_1$. Note that the graph $G$ entails at most six more d-separation rules than $G_1$ (the total number of d-separation rules in (b), (c), and (d)). However, adding any edge to the graph $G$ costs more than six d-separation rules, because by Lemma \[Lem:Sec3a\] $G$ loses an entire set of d-separation rules from the sets (1) to (15), each of which contains more than six d-separation rules. This contradicts the assumption that $G$ entails at least as many d-separation rules as $G_1$.
---
abstract: 'Franson’s Bell experiment with energy-time entanglement \[Phys. Rev. Lett. [**62**]{}, 2205 (1989)\] does not rule out all local hidden variable models. This defect can be exploited to compromise the security of Bell inequality-based quantum cryptography. We introduce a novel Bell experiment using genuine energy-time entanglement, based on a new interferometer, which rules out all local hidden variable models. The scheme is feasible with current technology.'
author:
- Adán Cabello
- Alessandro Rossi
- Giuseppe Vallone
- Francesco De Martini
- Paolo Mataloni
title: 'Proposed Bell Experiment with Genuine Energy-Time Entanglement'
---

Two particles exhibit “energy-time entanglement” when they are emitted at the same time in an energy-conserving process and the essential uncertainty in the time of emission makes two alternative paths that the particles can take indistinguishable. Franson [@Franson89] proposed an experiment to demonstrate the violation of local realism [@Bell64] using energy-time entanglement, based on a formal violation of the Bell Clauser-Horne-Shimony-Holt (CHSH) inequality [@CHSH69]. However, Aerts [*et al.*]{} [@AKLZ99] showed that, even in the ideal case of perfect preparation and perfect detection efficiency, there is a local hidden variable (LHV) model that simulates the results predicted by quantum mechanics for the experiment proposed by Franson [@Franson89]. This model proves that “the Franson experiment does not and cannot violate local realism” and that “\[t\]he reported violations of local realism from Franson experiments [@KVHNC90] have to be reexamined” [@AKLZ99].
Despite this fundamental deficiency, and despite the fact that this defect can be exploited to mount a Trojan horse attack on Bell inequality-based quantum cryptography [@Larsson02], Franson-type experiments have been extensively used for Bell tests and Bell inequality-based quantum cryptography [@TBZG00], have become standard in quantum optics [@Paul04; @GC08], and a widespread belief is that “the results of experiments with the Franson experiment violate Bell’s inequalities” [@GC08]. This is particularly surprising, given that recent research has emphasized the fundamental role of a (loophole-free) violation of the Bell inequalities in proving the device-independent security of key distribution protocols [@Ekert91], and in detecting entanglement [@HGBL05]. Polarization entanglement can be transformed into energy-time entanglement [@Kwiat95]. However, to our knowledge, there is no single experiment showing a violation of the Bell-CHSH inequality using genuine energy-time entanglement (or “time-bin entanglement” [@BGTZ99]) that cannot be simulated by a LHV model. By “genuine” we mean entanglement not obtained by transforming a previous form of entanglement, but created because the essential uncertainty in the time of emission makes two alternative paths indistinguishable. For these reasons, a single experiment using energy-time entanglement able to rule out all possible LHV models is of particular interest. The aim of this Letter is to describe such an experiment by means of a novel interferometric scheme. The main purpose of the new scheme is not to compete with existing interferometers used for quantum communication in terms of practical usability, but to fix a fundamental defect common to all of them. We will first describe the Franson Bell-CHSH experiment. Then, we will introduce a LHV model reproducing any conceivable violation of the Bell-CHSH inequality. The model underlines why a Franson-type experiment does not and cannot be used to violate local realism.
Then, we will introduce a new two-photon energy-time Bell-CHSH experiment that avoids these problems and can be used for a conclusive Bell test. [*The Franson Bell-CHSH experiment.—*]{}The setup of a Franson Bell-CHSH experiment is in Fig. \[Fig1\]. The source emits two photons, photon $1$ to the left and photon $2$ to the right. Each of them is fed into an unbalanced interferometer. $BS_i$ are beam splitters and $M_i$ are perfect mirrors. There are two distant observers, Alice on the left and Bob on the right. Alice randomly chooses the phase of the phase shifter $\phi_A$ between $A_0$ and $A_1$, and records the counts in each of her detectors (labeled $a=+1$ and $a=-1$), the detection times, and the phase settings at $t_D-t_I$, where $t_D$ is the detection time and $t_I$ is the time the photon takes to reach the detector from the location of the phase shifter $\phi_A$. Similarly, Bob chooses $\phi_B$ between $B_0$ and $B_1$, and records the counts in each of his detectors (labeled $b=+1$ and $b=-1$), the detection times, and the phase settings. The setup must satisfy four requirements: (I) To have two-photon interference, the emission of the two photons must be simultaneous, the moment of emission unpredictable, and both interferometers identical. If the detections of the two photons are coincident, there is no information about whether both photons took the short paths $S$ or both took the long paths $L$. A simultaneous random emission is achieved in actual experiments by two methods, both based on spontaneous parametric down conversion. In energy-time experiments, a non-linear crystal is pumped continuously by a monochromatic laser so the moment of emission is unpredictable in a temporal window equal to the coherence time of the pump laser. 
In time-bin experiments, a non-linear crystal is pumped by pulses that have previously passed through an unbalanced interferometer, so it is the uncertainty about which pulse, the earlier or the later, caused the emission that produces the uncertainty in the emission time. In both cases, the simultaneity of the emission is guaranteed by the conservation of energy. (II) To prevent single-photon interference, the difference between paths $L$ and $S$, i.e., twice the distance between $BS1$ and $M1$, $\Delta {\cal L}=2 d(BS1,M1)$ (see Fig. \[Fig1\]), must satisfy $\Delta {\cal L} > c t_{\rm coh}$, where $c$ is the speed of light and $t_{\rm coh}$ is the coherence time of the photons. (III) To make distinguishable those events where one photon takes $S$ and the other takes $L$, $\Delta {\cal L}$ must satisfy $\Delta {\cal L} > c \Delta t_{\rm coinc}$, where $\Delta t_{\rm coinc}$ is the duration of the coincidence window. (IV) To prevent the local phase setting at one side from affecting the outcome at the other side, the local phase settings must randomly switch ($\phi_A$ between $A_0$ and $A_1$, and $\phi_B$ between $B_0$ and $B_1$) with a frequency of the order $c/D$, where $D=d({\rm Source},BS1)$. The observers record all their data locally and then compare them. If the detectors are perfect, they find that $$\begin{aligned} P(A_i=+1)=P(A_i=-1)=\frac{1}{2}, \label{Amarginal} \\ P(B_j=+1)=P(B_j=-1)=\frac{1}{2}, \label{Bmarginal}\end{aligned}$$ for $i,j \in \{0,1\}$. $P(A_0=+1)$ is the probability of detecting a photon in the detector $a=+1$ if the setting of $\phi_A$ was $A_0$. They also find $25\%$ of two-photon events in which photon $1$ is detected a time $\Delta {\cal L} /c$ before photon $2$, and $25\%$ of events in which photon $1$ is detected $\Delta {\cal L}/c$ after photon $2$. The observers reject this $50\%$ of events and keep the $50\%$ that are coincident.
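For the kept coincident events, the quantum predictions can be reproduced with a short amplitude calculation. This is our own illustrative sketch, assuming ideal 50/50 beam splitters and interference of the short-short and long-long alternatives only; it is not taken from the Letter.

```python
import cmath
import math

def joint_prob(a, b, phi_A, phi_B):
    # Coincident SS and LL alternatives interfere; the outcome pair (a, b)
    # weights the relative phase exp(i(phi_A + phi_B)) by the sign a*b.
    amp = (1 + a * b * cmath.exp(1j * (phi_A + phi_B))) / (2 * math.sqrt(2))
    return abs(amp) ** 2  # equals (1 + a*b*cos(phi_A + phi_B)) / 4

def correlator(phi_A, phi_B):
    # expectation value <A B> over the four outcome pairs
    return sum(a * b * joint_prob(a, b, phi_A, phi_B)
               for a in (+1, -1) for b in (+1, -1))

# phase settings that maximize the CHSH combination
A0, A1, B0, B1 = 0.0, math.pi / 2, -math.pi / 4, math.pi / 4
beta = (correlator(A0, B0) + correlator(A0, B1)
        + correlator(A1, B0) - correlator(A1, B1))
print(beta)  # 2*sqrt(2) ~ 2.8284, the maximal quantum violation
```

The four joint probabilities sum to one on the postselected ensemble, and the correlator reduces to $\cos(\phi_A+\phi_B)$.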
For these selected events, quantum mechanics predicts that $$P(A_i=a, B_j=b)=\frac{1}{4}\left[1+ab \cos(\phi_{A_i}+\phi_{B_j})\right], \label{joint}$$ where $a,b \in \{-1,+1\}$ and $\phi_{A_i}$ ($\phi_{B_j}$) is the phase setting corresponding to $A_i$ ($B_j$). The Bell-CHSH inequality is $$-2 \le \beta_{\rm CHSH} \le 2, \label{CHSH}$$ where $$\beta_{\rm CHSH} = \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle.$$ According to quantum mechanics, the maximal violation of the Bell-CHSH inequality is $\beta_{\rm CHSH} = 2 \sqrt{2}$ [@Tsirelson80], and is obtained, e.g., with $\phi_{A_0}=0$, $\phi_{A_1}=\frac{\pi}{2}$, $\phi_{B_0}=-\frac{\pi}{4}$, $\phi_{B_1}=\frac{\pi}{4}$.

| $A_0$ | $A_1$ | $B_0$ | $B_1$ | $\langle A_0 B_0 \rangle$ | $\langle A_0 B_1 \rangle$ | $\langle A_1 B_0 \rangle$ | $\langle A_1 B_1 \rangle$ |
|-------|-------|-------|-------|------|------|------|------|
| $S+$ | $S+$ | $S+$ | $L\pm$ | $+1$ | rejected | $+1$ | rejected |
| $L+$ | $L+$ | $L+$ | $S\pm$ | $+1$ | rejected | $+1$ | rejected |
| $S+$ | $S-$ | $L\pm$ | $S+$ | rejected | $+1$ | rejected | $-1$ |
| $L+$ | $L-$ | $S\pm$ | $L+$ | rejected | $+1$ | rejected | $-1$ |
| $S+$ | $L\pm$ | $S+$ | $S+$ | $+1$ | $+1$ | rejected | rejected |
| $L+$ | $S\pm$ | $L+$ | $L+$ | $+1$ | $+1$ | rejected | rejected |
| $L\pm$ | $S+$ | $S+$ | $S-$ | rejected | rejected | $+1$ | $-1$ |
| $S\pm$ | $L+$ | $L+$ | $L-$ | rejected | rejected | $+1$ | $-1$ |

: \[TableI\]$32$ sets of instructions (out of $64$) of the LHV model (the other $32$ are in Table \[TableII\]). Each row represents $4$ sets of local instructions (first $4$ entries) and their corresponding contributions to the calculation of $\beta_{\rm CHSH}$ after applying the postselection procedure of the Franson experiment (last $4$ entries).
For each row, two sets (corresponding to $\pm$ signs) are explicitly written, while the other two can be obtained by changing all signs.

| $A_0$ | $A_1$ | $B_0$ | $B_1$ | $\langle A_0 B_0 \rangle$ | $\langle A_0 B_1 \rangle$ | $\langle A_1 B_0 \rangle$ | $\langle A_1 B_1 \rangle$ |
|-------|-------|-------|-------|------|------|------|------|
| $S+$ | $S+$ | $S-$ | $L\pm$ | $-1$ | rejected | $-1$ | rejected |
| $L+$ | $L+$ | $L-$ | $S\pm$ | $-1$ | rejected | $-1$ | rejected |
| $S+$ | $S-$ | $L\pm$ | $S-$ | rejected | $-1$ | rejected | $+1$ |
| $L+$ | $L-$ | $S\pm$ | $L-$ | rejected | $-1$ | rejected | $+1$ |
| $S-$ | $L\pm$ | $S+$ | $S+$ | $-1$ | $-1$ | rejected | rejected |
| $L-$ | $S\pm$ | $L+$ | $L+$ | $-1$ | $-1$ | rejected | rejected |
| $L\pm$ | $S-$ | $S+$ | $S-$ | rejected | rejected | $-1$ | $+1$ |
| $S\pm$ | $L-$ | $L+$ | $L-$ | rejected | rejected | $-1$ | $+1$ |

: \[TableII\]$32$ sets of instructions of the LHV model.

[*LHV models for the Franson experiment.—*]{}A LHV theory for the Franson experiment must describe how each of the photons makes two decisions. The $+1/-1$ decision: the decision of a detection to occur at detector $+1$ or at detector $-1$, and the $S/L$ decision: the decision of a detection to occur at time $t_D=t$ or at time $t_D=t+\frac{\Delta {\cal L}}{c}$. Both decisions may be made as late as the detection time $t_D$, and may be based on events in the backward light cones of the detections. In a Franson-type setup both decisions may be based on the corresponding local phase setting at $t_D-t_I$. For a conclusive Bell test, there is no problem if the photons make the $+1/-1$ decision based on the local phase setting. The problem is that the $50\%$ postselection procedure should be independent of the phase settings; otherwise the Bell-CHSH inequality (\[CHSH\]) is not valid.
In the Franson experiment the phase setting at $t_D-t_I$ can causally affect the decision of a detection of the corresponding photon to occur at time $t_D=t$ or at time $t_D=t+\frac{\Delta {\cal L}}{c}$. If the $S/L$ decision can depend on the phase settings, then, after the $50\%$ postselection procedure, one can formally obtain not only the violations predicted by quantum mechanics, as proven in [@AKLZ99], but any value of $\beta_{\rm CHSH}$, even those forbidden by quantum mechanics. This is proven by constructing a family of explicit LHV models. Consider the $64$ sets of local instructions in Tables \[TableI\] and \[TableII\]. For instance, if the pair of photons follows the first set of local instructions in Table \[TableI\], $(A_0=)S+$, $(A_1=)S+$, $(B_0=)S-$, $(B_1=)L+$, then, if the setting of $\phi_A$ is $A_0$ or $A_1$, photon $1$ will be detected by the detector $a=+1$ at time $t$ (corresponding to the path $S$); if the setting of $\phi_B$ is $B_0$, photon $2$ will be detected by $b=-1$ at time $t$; but if the setting of $\phi_B$ is $B_1$, photon $2$ will be detected by $b=+1$ at time $t+\frac{\Delta {\cal L}}{c}$ (corresponding to the path $L$). If each of the $32$ sets of instructions in Table \[TableI\] occurs with probability $p/32$, and each of the $32$ sets of instructions in Table \[TableII\] with probability $(1-p)/32$, then it is easy to see that, for any value of $0 \le p \le 1$, the model gives $25\%$ of $SL$ events, $25\%$ of $LS$ events, $50\%$ of $SS$ or $LL$ events, and satisfies (\[Amarginal\]) and (\[Bmarginal\]). If $p=0$, the model gives $\beta_{\rm CHSH}=-4$. If $p=1$, the model gives $\beta_{\rm CHSH}=4$. As $p$ ranges over $0 < p < 1$, the model gives any value in $-4 < \beta_{\rm CHSH} < 4$. In particular, the maximal quantum violation $\beta_{\rm CHSH} = 2 \sqrt{2}$, satisfying (\[joint\]), is obtained when $p=(2+\sqrt{2})/4$.
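These numbers can be verified by brute force. The sketch below encodes the $64$ instruction sets, expanding each printed table row into its four sign variants, applies the Franson postselection (an event is kept only when both photons take the same path), and evaluates $\beta_{\rm CHSH}$. The `?` token stands for the $\pm$ sign printed in the tables; the encoding itself is ours.

```python
import math

def parse(row):
    # "S+ S- L? S+" -> [('S', 1), ('S', -1), ('L', 0), ('S', 1)];
    # 0 encodes the '±' sign printed in the tables
    sym = {'+': 1, '-': -1, '?': 0}
    return [(tok[0], sym[tok[1]]) for tok in row.split()]

def expand(row):
    # each printed row stands for 4 instruction sets: both '±' choices,
    # plus each of them with every sign flipped
    out = []
    for pm in (1, -1):
        base = [(path, s if s else pm) for path, s in parse(row)]
        out.append(base)
        out.append([(path, -s) for path, s in base])
    return out

TABLE_I = ["S+ S+ S+ L?", "L+ L+ L+ S?", "S+ S- L? S+", "L+ L- S? L+",
           "S+ L? S+ S+", "L+ S? L+ L+", "L? S+ S+ S-", "S? L+ L+ L-"]
TABLE_II = ["S+ S+ S- L?", "L+ L+ L- S?", "S+ S- L? S-", "L+ L- S? L-",
            "S- L? S+ S+", "L- S? L+ L+", "L? S- S+ S-", "S? L- L+ L-"]

def correlators(p):
    # weight p/32 on each Table I set and (1-p)/32 on each Table II set
    sets = ([(s, p / 32) for row in TABLE_I for s in expand(row)]
            + [(s, (1 - p) / 32) for row in TABLE_II for s in expand(row)])
    E = {}
    for i in (0, 1):        # Alice's setting A_i
        for j in (0, 1):    # Bob's setting B_j
            num = den = 0.0
            for inst, w in sets:
                (path_a, a), (path_b, b) = inst[i], inst[2 + j]
                if path_a == path_b:   # coincident -> event is kept
                    num += w * a * b
                    den += w
            E[i, j] = num / den        # den is 1/2 for every setting pair
    return E

p = (2 + math.sqrt(2)) / 4
E = correlators(p)
beta = E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]   # equals 4*(2p - 1)
print(beta)  # 2*sqrt(2) ~ 2.8284
```

In general the model yields $\beta_{\rm CHSH}=4(2p-1)$, so $p=0$ and $p=1$ give the extreme values $\mp 4$.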
The reason why this LHV model is possible is that the $50\%$ postselection procedure in Franson’s experiment allows the subensemble of selected events to depend on the phase settings. For instance, the first $8$ sets of instructions in Table \[TableI\] are rejected only when $\phi_B=B_1$. The main aim of this Letter is to introduce a similar experiment which does not have this problem. A previously proposed solution consists of replacing the beam splitters $BS_1$ and $BS_2$ in Fig. \[Fig1\] by switchers synchronized with the source [@BGTZ99]. However, these active switchers are replaced in actual experiments by passive beam splitters [@TBZG00; @BGTZ99], which force a Franson-type postselection with the same problem described above. One way to avoid the problem is to make an extra assumption, namely that the decision of being detected at time $t_D=t$ or at time $t_D=t+\frac{\Delta {\cal L}}{c}$ is actually made at the first beam splitter, before any information about the local phase settings is available [@AKLZ99; @Franson99]. This assumption is similar to the fair sampling assumption, namely that the probability of rejection does not depend on the measurement settings. As we have seen, there are local models that do not satisfy this assumption. The experiment we propose does not require this extra assumption. [*Proposed energy-time entanglement Bell experiment.—*]{}The setup of the new Bell experiment is illustrated in Fig. \[Fig2\]. The source emits two photons, photon $1$ to the left and photon $2$ to the right. The $S$ path of photon $1$ (photon $2$) ends on the detectors $a$ on the left ($b$ on the right). The difference from Fig. \[Fig1\] is that now the $L$ path of photon $1$ (photon $2$) ends on the detectors $b$ ($a$). In this setup, the two photons end on different sides only when both are detected in coincidence. If one photon takes $S$ and the other photon takes $L$, both will end on detectors of the same side.
An interferometer with this last property is described in [@RVDM08]. The data that the observers must record is the same as in Franson’s experiment. The setup must satisfy the following requirements: (I’) To have two-photon interference, the emission of the two photons must be simultaneous, the moment of emission unpredictable, and both arms of the setup identical. The phase stabilization of the entire setup of Fig. \[Fig2\] is more difficult than in Franson’s experiment. (II’) Single-photon interference is not possible in the setup of Fig. \[Fig2\]. (III’) To temporally distinguish two photons arriving at the same detector at times $t$ and $t+\frac{\Delta {\cal L}'}{c}$, where $\Delta {\cal L}'=2 [d({\rm Source},BS2)+d(BS2,M1)]$ (see Fig. \[Fig2\]), the dead time of the detectors must be smaller than $\frac{\Delta {\cal L}'}{c}$. For detectors with a dead time of $1$ ns, ${\Delta {\cal L}'} > 30$ cm. (IV’) The probability of two two-photon events within a time $\frac{\Delta {\cal L}'}{c}$ must be negligible. This naturally occurs when using standard non-linear crystals pumped continuously. (V’) To prevent the local phase setting at one side from affecting the outcome at the other side, the local phase settings must randomly switch ($\phi_A$ between $A_0$ and $A_1$, and $\phi_B$ between $B_0$ and $B_1$) with a frequency of the order $c/D'$, where $D'=d({\rm Source},\phi_A)\gg \Delta {\cal L}'$. There is a trade-off between the phase stabilization of the apparatus (which requires a short interferometer) and the prevention of reciprocal influences between the two local phase settings (which requires a long interferometer). For a random phase modulation frequency of 300 kHz, an interferometer about 1 km long would be needed. Current technology allows us to stabilize interferometers up to 4 km long (for instance, one of the interferometers of the LIGO experiment is 4 km long). With these stable interferometers, the experiment would be feasible.
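The quoted length scales follow directly from these requirements. The following back-of-the-envelope check (our own, with the dead time and switching frequency taken from the text) reproduces the "more than 30 cm" and "about 1 km" figures.

```python
c = 299_792_458.0        # speed of light in m/s

# (III'): a detector dead time of 1 ns must be shorter than Delta L'/c
dead_time = 1e-9         # s
dL_min = c * dead_time   # minimal Delta L', about 0.30 m (i.e. > 30 cm)

# (V'): random phase switching at ~300 kHz sets the length scale D' = c/f
f_switch = 300e3         # Hz
D_prime = c / f_switch   # about 1.0 km, the interferometer length quoted
print(dL_min, D_prime)
```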
The predictions of quantum mechanics for the setup of Fig. \[Fig2\] are similar to those in Franson’s proposal: Eqs. (\[Amarginal\]) and (\[Bmarginal\]) hold, there are $25\%$ of events in which both photons are detected on the left at times $t$ and $t+\frac{\Delta {\cal L}'}{c}$, $25\%$ of events in which both photons are detected on the right, and $50\%$ of coincident events for which (\[joint\]) holds. The observers must keep the coincident events and reject those giving two detections on detectors of the same side. The main advantages of this setup are: (i) The rejection of events is local and does not require communication between the observers. (ii) The selection and rejection of events is independent of the local phase settings. This is the crucial difference from Franson’s experiment and deserves a detailed examination. First consider a selected event: both photons have been detected at time $t_D$, one in a detector $a$ on the left, and the other in a detector $b$ on the right. $t_I$ is the time a photon takes from $\phi_A$ ($\phi_B$) to a detector $a$ ($b$). The phase setting of $\phi_A$ ($\phi_B$) at $t_D-t_I$ is in the backward light cone of the photon detected in $a$ ($b$), but the point is: could a different value of one or both phase settings have caused this selected event to become a rejected event, in which both photons are detected on the same side? The answer is no. This would require a mechanism making one detection “wait” until the information about the setting on the other side arrives. However, by the time this information arrives, the phase settings (both of them) have changed, so it is useless as a basis for a decision. Now consider a rejected event. For instance, one in which both photons are detected in the detectors $a$ on the left, one at time $t_D=t$, and the other at $t_D=t+\frac{\Delta {\cal L}'}{c}$.
Then, the phase settings of $\phi_B$ at times $t_D-t_I$ are outside the backward light cones of the detected photons. The photons cannot have based their decisions on the phase settings of $\phi_B$. A different value of $\phi_A$ could not have turned this rejected event into a selected event. This would require a mechanism making one detection wait until the information about the setting reaches the other side; by the time it arrives, the phase setting of $\phi_A$ has changed, so the information is useless. For the proposed setup, there is no physical mechanism preserving locality which can turn a selected (rejected) event into a rejected (selected) event. The selected events are independent of the local phase settings. For the selected events, only the $+1/-1$ decision can depend on the phase settings. This is exactly the assumption under which the Bell-CHSH inequality (\[CHSH\]) is valid. Therefore, an experimental violation of (\[CHSH\]) using the setup of Fig. \[Fig2\] and the postselection procedure described before provides a conclusive (assuming perfect detectors) test of local realism using energy-time (or time-bin) entanglement. Indeed, the proposed setup opens up the possibility of using genuine energy-time or time-bin entanglement for many other quantum information experiments. The authors thank J.D. Franson, J.-Å. Larsson, T. Rudolph, and M. Żukowski for their comments. This work was supported by Junta de Andalucía Excellence Project No. P06-FQM-02243 and by Finanziamento Ateneo 07 Sapienza Universitá di Roma. [14]{} J.D. Franson, Phys. Rev. Lett. [**62**]{}, 2205 (1989). J.S. Bell, Physics (Long Island City, N.Y.) [**1**]{}, 195 (1964). J.F. Clauser, M.A. Horne, A. Shimony, and R.A. Holt, Phys. Rev. Lett. [**23**]{}, 880 (1969). S. Aerts, P.G. Kwiat, J.-Å. Larsson, and M. Żukowski, Phys. Rev. Lett. [**83**]{}, 2872 (1999); [**86**]{}, 1909 (2001). P.G. Kwiat [*et al.*]{}, Phys. Rev. A [**41**]{}, 2910 (1990); Z.Y.
Ou, X.Y. Zou, L.J. Wang, and L. Mandel, Phys. Rev. Lett. [**65**]{}, 321 (1990); J. Brendel, E. Mohler, and W. Martienssen, [*ibid.*]{} [**66**]{}, 1142 (1991); P.G. Kwiat, A.M. Steinberg, and R.Y. Chiao, Phys. Rev. A [**47**]{}, R2472 (1993); P.R. Tapster, J.G. Rarity, and P.C.M. Owens, Phys. Rev. Lett. [**73**]{}, 1923 (1994); W. Tittel, J. Brendel, H. Zbinden, and N. Gisin, Phys. Rev. Lett. [**81**]{}, 3563 (1998). J.-Å. Larsson, Quantum Inf. Comput. [**2**]{}, 434 (2002). W. Tittel, J. Brendel, H. Zbinden, and N. Gisin, Phys. Rev. Lett. [**84**]{}, 4737 (2000); G. Ribordy [*et al.*]{}, Phys. Rev. A [**63**]{}, 012309 (2000); R.T. Thew, A. Acín, H. Zbinden, and N. Gisin, Phys. Rev. Lett. [**93**]{}, 010503 (2004); I. Marcikic [*et al.*]{}, [*ibid.*]{} [**93**]{}, 180502 (2004); D. Salart [*et al.*]{}, [*ibid.*]{} [**100**]{}, 220404 (2008). H. Paul, [*Introduction to Quantum Optics*]{} (Cambridge University Press, Cambridge, England, 2004). J.C. Garrison and R.Y. Chiao, [*Quantum Optics*]{} (Oxford University Press, Oxford, 2008). A.K. Ekert, Phys. Rev. Lett. [**67**]{}, 661 (1991); A. Acín, N. Gisin, and L. Masanes, [*ibid.*]{} [**97**]{}, 120405 (2006). P. Hyllus, O. G[ü]{}hne, D. Bruß, and M. Lewenstein, Phys. Rev. A [**72**]{}, 012321 (2005). P.G. Kwiat, Phys. Rev. A [**52**]{}, 3380 (1995); D.V. Strekalov [*et al.*]{}, [*ibid.*]{} [**54**]{}, R1 (1996). J. Brendel, N. Gisin, W. Tittel, and H. Zbinden, Phys. Rev. Lett. [**82**]{}, 2594 (1999). B.S. Tsirelson, Lett. Math. Phys. [**4**]{}, 93 (1980). J.D. Franson (private communication). See also, J.D. Franson, Phys. Rev. A [**61**]{}, 012105 (1999). A. Rossi, G. Vallone, F. De Martini, and P. Mataloni, Phys. Rev. A [**78**]{}, 012345 (2008).
--- author: - 'Armeen Taeb[^1]' - 'Arian Maleki[^2]' - 'Christoph Studer[^3]' - 'Richard G. Baraniuk' bibliography: - 'references2.bib' title: Maximin Analysis of Message Passing Algorithms for Recovering Block Sparse Signals --- Group sparsity; group LASSO; approximate message passing; phase transition. Introduction ============ Background ========== Main results ============ Proofs of the main results ========================== [^1]: Dept. of Electrical, Computer, and Energy Engineering, University of Colorado at Boulder. [^2]: Dept. of Statistics, Columbia University. [^3]: Dept. of Electrical and Computer Engineering, Rice University.
[**The srank Conjecture on Schur’s $Q$-Functions**]{} William Y. C. Chen$^{1}$, Donna Q. J. Dou$^2$,\ Robert L. Tang$^3$ and Arthur L. B. Yang$^{4}$\ Center for Combinatorics, LPMC-TJKLC\ Nankai University, Tianjin 300071, P. R. China\ $^{1}$[chen@nankai.edu.cn]{}, $^{2}$[qjdou@cfc.nankai.edu.cn]{}, $^{3}$[tangling@cfc.nankai.edu.cn]{}, $^{4}$[yang@nankai.edu.cn]{} **Abstract.** We show that the shifted rank, or srank, of any partition $\lambda$ with distinct parts equals the lowest degree of the terms appearing in the expansion of Schur’s $Q_{\lambda}$ function in terms of power sum symmetric functions. This gives an affirmative answer to a conjecture of Clifford. As pointed out by Clifford, the notion of the srank can be naturally extended to a skew partition $\lambda/\mu$ as the minimum number of bars among the corresponding skew bar tableaux. While the srank conjecture is not valid for skew partitions, we give an algorithm to compute the srank. **MSC2000 Subject Classification:** 05E05, 20C25

Introduction
============

The main objective of this paper is to answer two open problems raised by Clifford [@cliff2005] on sranks of partitions with distinct parts, skew partitions and Schur’s $Q$-functions. For any partition $\lambda$ with distinct parts, we give a proof of Clifford’s srank conjecture that the lowest degree of the terms in the power sum expansion of Schur’s $Q$-function $Q_{\lambda}$ is equal to the number of bars in a minimal bar tableau of shape $\lambda$. Clifford [@cliff2003; @cliff2005] also proposed an open problem of determining the minimum number of bars among bar tableaux of a skew shape $\lambda/\mu$. As noted by Clifford [@cliff2003], this minimum number can be naturally regarded as the shifted rank, or srank, of $\lambda/\mu$, denoted $\mathrm{srank}(\lambda/\mu)$. Given a skew bar tableau, we present an algorithm that generates another skew bar tableau of the same shape without increasing the number of bars.
This algorithm eventually leads to a bar tableau with the minimum number of bars. Schur’s $Q$-functions arise in the study of the projective representations of symmetric groups [@schur1911]; see also Hoffman and Humphreys [@hofhum1992], Humphreys [@humphr1986], J$\rm{\acute{o}}$zefiak [@jozef1989], Morris [@morri1962; @morri1979] and Nazarov [@nazar1988]. Shifted tableaux play the same role for Schur’s $Q$-functions that ordinary tableaux play for the Schur functions. Sagan [@sagan1987] and Worley [@worley1984] independently developed a combinatorial theory of shifted tableaux, which includes shifted versions of the Robinson-Schensted-Knuth correspondence, Knuth’s equivalence relations, Schützenberger’s jeu de taquin, etc. The connections between this combinatorial theory of shifted tableaux and the theory of projective representations of the symmetric groups are further explored by Stembridge [@stemb1989]. Clifford [@cliff2005] studied the srank of shifted diagrams for partitions with distinct parts. Recall that the rank of an ordinary partition is defined as the number of boxes on the main diagonal of the corresponding Young diagram. Nazarov and Tarasov [@naztar2002] found an important generalization of the rank of an ordinary partition to a skew partition in their study of tensor products of Yangian modules. A general theory of border strip decompositions and border strip tableaux of skew partitions was developed by Stanley [@stanl2002], who showed that the rank of a skew partition is the least number of strips needed to construct a minimal border strip decomposition of the skew diagram. Motivated by Stanley’s theorem, Clifford [@cliff2005] generalized the rank of a partition to the rank of a shifted partition, called the srank, in terms of minimal bar tableaux. On the other hand, Clifford noticed that the srank is closely related to Schur’s $Q$-functions, as suggested by the work of Stanley [@stanl2002] on the rank of a partition.
Stanley introduced a degree operator by taking the degree of the power sum symmetric function $p_{\mu}$ as the number of nonzero parts of the indexing partition $\mu$. Furthermore, Clifford and Stanley [@clista2004] defined the bottom Schur functions to be the sum of the lowest degree terms in the expansion of the Schur functions in terms of the power sums. In [@cliff2005] Clifford studied the lowest degree terms in the expansion of Schur’s $Q$-functions in terms of power sum symmetric functions and conjectured that the lowest degree of the Schur’s $Q$-function $Q_{\lambda}$ is equal to the srank of $\lambda$. Our first result is a proof of this conjecture. However, in general, the lowest degree of the terms, which appear in the expansion of the skew Schur’s $Q$-function $Q_{\lambda/\mu}$ in terms of the power sums, is not equal to the srank of the shifted skew diagram of $\lambda/\mu$. This is different from the case for ordinary skew partitions and skew Schur functions. Instead, we will take an algorithmic approach to the computation of the srank of a skew partition. It would be interesting to find an algebraic interpretation in terms of Schur’s $Q$-functions. Shifted diagrams and bar tableaux {#sect2} ================================= Throughout this paper we will adopt the notation and terminology on partitions and symmetric functions in [@macdon1995]. A *partition* $\lambda$ is a weakly decreasing sequence of positive integers $\lambda_1\geq \lambda_2\geq \ldots\geq \lambda_k$, denoted $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_k)$, and $k$ is called the *length* of $\lambda$, denoted $\ell(\lambda)$. For convenience we may add sufficient 0’s at the end of $\lambda$ if necessary. If $\sum_{i=1}^k\lambda_i=n$, we say that $\lambda$ is a partition of the integer $n$, denoted $\lambda\vdash n$. 
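The sets $\mathcal{D}(n)$ and $\mathcal{P}^o(n)$ can be enumerated by a short recursion. The following Python sketch (our own code, not from the paper; all names are ours) lists both families and illustrates Euler's classical theorem that partitions into distinct parts and partitions into odd parts are equinumerous.

```python
def strict_partitions(n, largest=None):
    """Yield the partitions of n into distinct parts, parts decreasing."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for first in range(min(largest, n), 0, -1):
        # distinctness: the remaining parts must be strictly smaller
        for rest in strict_partitions(n - first, first - 1):
            yield (first,) + rest

def odd_partitions(n, largest=None):
    """Yield the partitions of n into odd parts (repetitions allowed)."""
    if largest is None:
        largest = n if n % 2 == 1 else n - 1
    if n == 0:
        yield ()
        return
    for first in range(min(largest, n), 0, -1):
        if first % 2 == 0:
            continue
        # repetitions allowed: the next part may equal the current one
        for rest in odd_partitions(n - first, first):
            yield (first,) + rest
```

For instance, $|\mathcal{D}(10)|=|\mathcal{P}^o(10)|=10$.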
For each partition $\lambda$ there exists a geometric representation, known as the Young diagram, which is an array of squares in the plane justified from the top and left corner with $\ell(\lambda)$ rows and $\lambda_i$ squares in the $i$-th row. A partition is said to be *odd* (resp. even) if it has an odd (resp. even) number of even parts. Let $\mathcal{P}^o(n)$ denote the set of all partitions of $n$ with only odd parts. We will call a partition *strict* if all its parts are distinct. Let $\mathcal{D}(n)$ denote the set of all strict partitions of $n$. For each partition $\lambda\in \mathcal{D}(n)$, let $S(\lambda)$ be the shifted diagram of $\lambda$, which is obtained from the Young diagram by shifting the $i$-th row $(i-1)$ squares to the right for each $i>1$. For instance, Figure \[shifted diagram\] illustrates the shifted diagram of shape $(8,7,5,3,1)$.

(Figure \[shifted diagram\]: the shifted diagram of shape $(8,7,5,3,1)$.)

Given two partitions $\lambda$ and $\mu$, if for each $i$ we have $\lambda_i\geq \mu_i$, then the skew partition $\lambda/\mu$ is defined to be the diagram obtained from the diagram of $\lambda$ by removing the diagram of $\mu$ at the top-left corner. Similarly, the skew shifted diagram $S(\lambda/\mu)$ is defined as the set-theoretic difference of $S(\lambda)$ and $S(\mu)$. Now we recall the definitions of bars and bar tableaux as given in Hoffman and Humphreys [@hofhum1992]. Let $\lambda\in \mathcal{D}(n)$ be a partition with length $\ell(\lambda)=k$.
Fixing an odd positive integer $r$, three subsets $I_{+}, I_{0}, I_{-}$ of integers between $1$ and $k$ are defined as follows: $$\begin{aligned} I_{+}& = &\{i: \lambda_{j+1}<\lambda_i-r<\lambda_j\: \mbox{for some } j\leq k,\: \mbox {taking}\:\lambda_{k+1}=0\},\\[5pt] I_{0} & = & \{i: \lambda_i=r\},\\[5pt] I_{-} & = & \{i: r-\lambda_{i}=\lambda_j \:\mbox{for some} \:j\: \mbox{with} \:i<j\leq k\}.\end{aligned}$$ Let $I(\lambda,r)=I_{+}\cup I_{0}\cup I_{-}$. For each $i\in I(\lambda,r)$, we define a new strict partition $\lambda(i,r)$ of $\mathcal {D}(n-r)$ in the following way: - If $i\in I_{+}$, then $\lambda_i>r$, and let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing $\lambda_i$ and inserting $\lambda_i-r$ between $\lambda_j$ and $\lambda_{j+1}$. - If $i\in I_{0}$, let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing $\lambda_i$. - If $i\in I_{-}$, then let $\lambda(i,r)$ be the partition obtained from $\lambda$ by removing both $\lambda_i$ and $\lambda_j$. Meanwhile, for each $i\in I(\lambda,r)$, the associated $r$-bar is given as follows: - If $i\in I_{+}$, the $r$-bar consists of the rightmost $r$ squares in the $i$-th row of $S(\lambda)$, and we say that the $r$-bar is of Type $1$. - If $i\in I_{0}$, the $r$-bar consists of all the squares of the $i$-th row of $S(\lambda)$, and we say that the $r$-bar is of Type $2$. - If $i\in I_{-}$, the $r$-bar consists of all the squares of the $i$-th and $j$-th rows, and we say that the $r$-bar is of Type $3$. For example, as shown in Figure \[bar tableau\], the squares filled with $6$ are a $7$-bar of Type $1$, the squares filled with $4$ are a $3$-bar of Type $2$, and the squares filled with $3$ are a $7$-bar of Type $3$. 
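As a sanity check on these definitions, here is a small Python sketch (our own code; the function name and conventions are ours, not the paper's) computing the index sets $I_{+}$, $I_{0}$, $I_{-}$ for a strict partition and an odd $r$, with the 1-based indexing used in the text.

```python
def bar_index_sets(lam, r):
    """Index sets I_+, I_0, I_- for removing an r-bar from a strict partition.

    lam: strictly decreasing tuple of positive parts; r: odd positive integer.
    Returned indices are 1-based, as in the text.
    """
    k = len(lam)
    ext = list(lam) + [0]  # convention: lambda_{k+1} = 0
    # I_+: lambda_i - r fits strictly between two consecutive parts
    I_plus = [i for i in range(1, k + 1)
              if any(ext[j] < lam[i - 1] - r < ext[j - 1]
                     for j in range(1, k + 1))]
    # I_0: lambda_i equals r
    I_zero = [i for i in range(1, k + 1) if lam[i - 1] == r]
    # I_-: r - lambda_i is a lower part lambda_j with j > i
    I_minus = [i for i in range(1, k + 1)
               if any(r - lam[i - 1] == lam[j - 1]
                      for j in range(i + 1, k + 1))]
    return I_plus, I_zero, I_minus
```

On the shape $(9,7,6,3,1)$ of the example above, $r=7$ gives $I_{+}=[1]$, $I_{0}=[2]$, $I_{-}=[3]$; the indices $1$ and $3$ correspond to the Type $1$ and Type $3$ bars of the example, and $r=3$ gives $I_{0}=[4]$, matching the Type $2$ bar.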
(Figure \[bar tableau\]: a bar tableau of shape $(9,7,6,3,1)$.)

A *bar tableau* of shape $\lambda$ is an array of positive integers of shape $S(\lambda)$ subject to the following conditions:

- It is weakly increasing in every row;

- The number of parts equal to $i$ is odd for each positive integer $i$;

- Each positive integer $i$ can appear in at most two rows, and if $i$ appears in two rows, then these two rows must begin with $i$;

- The composition obtained by removing all squares filled with integers larger than some $i$ has distinct parts.

We say that a bar tableau $T$ is of type $\rho=(\rho_1,\rho_2,\ldots)$ if the total number of $i$’s appearing in $T$ is $\rho_i$. For example, the bar tableau in Figure \[weight\] is of type $(3,1,1,1)$. For a bar tableau $T$ of shape $\lambda$, we define its weight $wt(T)$ recursively by the following procedure. If $T$ is empty, let $wt(T)=1$. Let $\varepsilon(\lambda)$ denote the parity of the partition $\lambda$, i.e., $\varepsilon(\lambda)=0$ if $\lambda$ has an even number of even parts; otherwise, $\varepsilon(\lambda)=1$. Suppose that the largest numbers in $T$ form an $r$-bar, which is associated with an index $i\in I(\lambda, r)$. Let $j$ be the integer that occurs in the definitions of $I_{+}$ and $I_{-}$.
Let $T'$ be the bar tableau of shape $\lambda(i, r)$ obtained from $T$ by removing this $r$-bar. Now, let $$wt(T)=n_i\, wt(T'),$$ where $$n_i=\left\{\begin{array}{cc} (-1)^{j-i}2^{1-\varepsilon(\lambda)},& \mbox{if}\ i\in I_{+},\\[6pt] (-1)^{\ell(\lambda)-i},& \mbox{if}\ i\in I_{0},\\[6pt] (-1)^{j-i+\lambda_i}2^{1-\varepsilon(\lambda)},& \mbox{if}\ i\in I_{-}. \end{array} \right.$$ For example, the weight of the bar tableau $T$ in Figure \[weight\] equals $$wt(T)=(-1)^{1-1}2^{1-0}\cdot(-1)^{1-1}2^{1-1}\cdot(-1)^{2-2} \cdot(-1)^{1-1}=2.$$

(Figure \[weight\]: a bar tableau of shape $(5,1)$ and type $(3,1,1,1)$.)

The following lemma will be used in Section 3 to determine whether certain terms will vanish in the power sum expansion of Schur’s $Q$-functions indexed by partitions with two distinct parts.

\[vanishbar\] Let $\lambda=(\lambda_1,\lambda_2)$ be a strict partition with the two parts $\lambda_1$ and $\lambda_2$ having the same parity. Given a partition $\sigma=(\sigma_1,\sigma_2)\in \mathcal{P}^o(|\lambda|)$, if $\sigma_2<\lambda_2$, then among all bar tableaux of shape $\lambda$ there exist only two bar tableaux of type $\sigma$, say $T_1$ and $T_2$, and furthermore, we have $wt(T_1)+wt(T_2)=0$.

Suppose that both $\lambda_1$ and $\lambda_2$ are even. The case when $\lambda_1$ and $\lambda_2$ are both odd can be proved similarly. Note that $\sigma_2<\lambda_{2}<\lambda_{1}$. By putting $2$’s in the last $\sigma_2$ squares of the second row and then filling the remaining squares in the diagram with $1$’s, we obtain one tableau $T_1$. By putting $2$’s in the last $\sigma_2$ squares of the first row and then filling the remaining squares with $1$’s, we obtain another tableau $T_2$.
Clearly, both $T_1$ and $T_2$ are bar tableaux of shape $\lambda$ and type $\sigma$, and they are the only two such bar tableaux. We notice that $$wt(T_1)=(-1)^{2-2}2^{1-0}\cdot (-1)^{2-1+\lambda_1} 2^{1-1}=-2.$$ For the weight of $T_2$, there are two cases to consider. If $\lambda_1-\sigma_2>\lambda_2$, then $$wt(T_2)=(-1)^{1-1}2^{1-0}\cdot (-1)^{2-1+\lambda_1-\sigma_2}2^{1-1}=2.$$ If $\lambda_1-\sigma_2<\lambda_2$, then $$wt(T_2)=(-1)^{2-1}2^{1-0}\cdot (-1)^{2-1+\lambda_2}2^{1-1}=2.$$ Thus we have $wt(T_2)=2$ in either case, so the relation $wt(T_1)+wt(T_2)=0$ holds. For example, taking $\lambda=(8,6)$ and $\sigma=(11,3)$, the two bar tableaux $T_1$ and $T_2$ in the above lemma are depicted in Figure \[2-bar tableaux1\].

(Figure \[2-bar tableaux1\]: the two bar tableaux $T_1$ and $T_2$ of shape $(8,6)$ and type $(11,3)$.)

Clifford gave a natural generalization of bar tableaux to skew shapes [@cliff2005]. Formally, a *skew bar tableau* of shape $\lambda/\mu$ is an assignment of nonnegative integers to the squares of $S(\lambda)$ such that, in addition to the above four conditions (1)-(4), we further impose the condition that

- the partition obtained by removing all squares filled with positive integers and reordering the remaining rows is $\mu$.

For example, taking the skew partition $(8,6,5,4,1)/(8,2,1)$, Figure \[skew bar tableau\] shows a skew bar tableau of this shape.
(Figure \[skew bar tableau\]: a skew bar tableau of shape $(8,6,5,4,1)/(8,2,1)$, with arrows indicating the successive removal of its bars.)

A bar tableau of shape $\lambda$ is said to be *minimal* if there does not exist a bar tableau with fewer bars. Motivated by Stanley’s results in [@stanl2002], Clifford defined the srank of a shifted partition $S(\lambda)$, denoted ${\rm srank}(\lambda)$, as the number of bars in a minimal bar tableau of shape $\lambda$ [@cliff2005]. Clifford also gave the following formula for ${\rm srank}(\lambda)$.

\[min bar\] Given a strict partition $\lambda$, let $o$ be the number of odd parts of $\lambda$, and let $e$ be the number of even parts. Then ${\rm srank}(\lambda)=\max(o,e+(\ell(\lambda) \ \mathrm{mod}\ 2))$.

Next we consider the number of bars in a minimal skew bar tableau of shape $\lambda/\mu$. Note that the squares filled with $0$’s in the skew bar tableau give rise to a shifted diagram of shape $\mu$ by reordering the rows. Let $o_r$ (resp. $e_r$) be the number of nonempty rows of odd (resp. even) length with blank squares, and let $o_s$ (resp. $e_s$) be the number of rows of $\lambda$ with some squares filled with $0$’s and an odd (resp. even) number of blank squares.
It is obvious that the number of bars in a minimal skew bar tableau is greater than or equal to $$o_s+2e_s+\max(o_r,e_r+((e_r+o_r)\ \mathrm{mod}\ 2)).$$ In fact, the above quantity has been considered by Clifford [@cliff2003]. Observe that this quantity depends on the positions of the $0$’s. It should be remarked that a legal bar tableau of shape $\lambda/\mu$ may not exist once the positions of the $0$’s are fixed. One open problem proposed by Clifford [@cliff2003] is to find a characterization of ${\rm srank}(\lambda/\mu)$. In Section 5 we will give an algorithm to compute the srank of a skew shape.

Clifford’s conjecture {#sect3}
=====================

In this section, we aim to show that the lowest degree of the power sum expansion of Schur’s $Q$-function $Q_{\lambda}$ equals ${\rm srank}(\lambda)$. Let us recall the relevant terminology on Schur’s $Q$-functions. Let $x=(x_1,x_2,\ldots)$ be an infinite sequence of independent indeterminates. We define the symmetric functions $q_k=q_k(x)$ in $x_1,x_2,\ldots$ for all integers $k$ by the following expansion of the formal power series in $t$: $$\prod_{i\geq 1}\frac{1+x_it}{1-x_it}=\sum_{k}q_{k}(x)t^k.$$ In particular, $q_k=0$ for $k<0$ and $q_0=1$. It immediately follows that $$\label{eq-def} \sum_{i+j=n}(-1)^iq_iq_j=0,$$ for all $n\geq 1$. Let $Q_{(a)}=q_a$ and $$Q_{(a,b)}=q_aq_b+2\sum_{m=1}^b(-1)^m q_{a+m}q_{b-m}.$$ From (\[eq-def\]) we see that $Q_{(a,b)}=-Q_{(b,a)}$, and thus $Q_{(a,a)}=0$ for any $a,b$. In general, for any strict partition $\lambda$, the symmetric function $Q_{\lambda}$ is defined by the recurrence relations: $$\begin{aligned} Q_{(\lambda_1,\ldots,\lambda_{2k+1})}&=& \sum_{m=1}^{2k+1} (-1)^{m+1} q_{\lambda_m}Q_{(\lambda_1,\ldots,\hat{\lambda}_m,\ldots,\lambda_{2k+1})},\\[5pt] Q_{(\lambda_1,\ldots,\lambda_{2k})}&=& \sum_{m=2}^{2k} (-1)^{m} Q_{(\lambda_1,\lambda_m)}Q_{(\lambda_2,\ldots,\hat{\lambda}_m,\ldots,\lambda_{2k})},\end{aligned}$$ where $\hat{}$ stands for a missing entry.
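Since the identities above hold under any specialization of the variables, they can be checked numerically. The following Python sketch (our own code, not from the paper) specializes the $x_i$ to a few rational values, truncates the series in $t$, computes the $q_k$ and $Q_{(a,b)}$, and verifies both (\[eq-def\]) and the antisymmetry $Q_{(a,b)}=-Q_{(b,a)}$.

```python
from fractions import Fraction

def q_series(xs, K):
    """Coefficients q_0, ..., q_K of prod_i (1+x_i t)/(1-x_i t),
    with the variables x_i specialized to the numbers in xs."""
    q = [Fraction(1)] + [Fraction(0)] * K
    for x in xs:
        # one factor: (1 + x t)/(1 - x t) = 1 + 2x t + 2x^2 t^2 + ...
        f = [Fraction(1)] + [2 * Fraction(x) ** m for m in range(1, K + 1)]
        # multiply the truncated series
        q = [sum(q[i] * f[k - i] for i in range(k + 1)) for k in range(K + 1)]
    return q

def Q2(q, a, b):
    """Q_{(a,b)} = q_a q_b + 2 sum_{m=1}^{b} (-1)^m q_{a+m} q_{b-m}."""
    return q[a] * q[b] + 2 * sum((-1) ** m * q[a + m] * q[b - m]
                                 for m in range(1, b + 1))
```

Coefficients up to degree $K$ are exact despite the truncation, so the checks are exact identities in the specialized variables, not floating-point approximations.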
It is known that $Q_{\lambda}$ can also be defined as the specialization at $t=-1$ of the Hall-Littlewood functions associated with $\lambda$ [@macdon1995]. Originally, these $Q_{\lambda}$ symmetric functions were introduced in order to express the irreducible projective characters of the symmetric groups [@schur1911]. Note that the irreducible projective representations of $S_n$ are in one-to-one correspondence with partitions of $n$ with distinct parts; see [@jozef1989; @stemb1988; @stemb1989]. For any $\lambda\in \mathcal{D}(n)$, let $\langle\lambda\rangle$ denote the character of the irreducible projective or spin representation indexed by $\lambda$. Morris [@morri1965] found a combinatorial rule for calculating these characters, which is the projective analogue of the Murnaghan-Nakayama rule. In terms of bar tableaux, Morris’s theorem reads as follows:

\[mnrule\] Let $\lambda\in \mathcal{D}(n)$ and $\pi\in \mathcal{P}^o(n)$. Then $$\label{mnruleeq} \langle\lambda\rangle(\pi)=\sum_{T}wt(T)$$ where the sum ranges over all bar tableaux of shape $\lambda$ and type $\pi$.

The above theorem for projective characters implies the following formula, which will be used later in the proof of Lemma \[len2\].

\[2odd\] Let $\lambda$ be a strict partition of length $2$. Suppose that the two parts $\lambda_1,\lambda_2$ are both odd. Then we have $$\langle\lambda\rangle(\lambda)=-1.$$

Let $T$ be the bar tableau obtained by filling the last $\lambda_2$ squares in the first row of $S(\lambda)$ with $2$’s and the remaining squares with $1$’s, and let $T'$ be the bar tableau obtained by filling the first row of $S(\lambda)$ with $1$’s and the second row with $2$’s. Clearly, $T$ and $T'$ are of the same type $\lambda$. Let us first consider the weight of $T$.
If $\lambda_1-\lambda_2<\lambda_2$, then $$wt(T)=(-1)^{2-1} 2^{1-0}\cdot (-1)^{2-1+\lambda_2}2^{1-1}=-2.$$ If $\lambda_1-\lambda_2>\lambda_2$, then $$wt(T)=(-1)^{1-1} 2^{1-0}\cdot (-1)^{2-1+\lambda_1-\lambda_2}2^{1-1}=-2.$$ In both cases, the weight of $T'$ equals $$wt(T')=(-1)^{2-2}\cdot (-1)^{1-1}=1.$$ Since there are only two bar tableaux, $T$ and $T'$, of type $\lambda$, the corollary immediately follows from Theorem \[mnrule\].

Let $p_k(x)$ denote the $k$-th power sum symmetric function, i.e., $p_k(x)=\sum_{i\geq 1}x_i^k$. For any partition $\lambda=(\lambda_1,\lambda_2,\cdots)$, let $p_{\lambda}=p_{\lambda_1}p_{\lambda_2}\cdots$. The fundamental connection between the $Q_{\lambda}$ symmetric functions and the projective representations of the symmetric group is as follows.

\[conn\] Let $\lambda\in \mathcal{D}(n)$. Then we have $$Q_{\lambda}=\sum_{\pi\in \mathcal{P}^o(n)} 2^{[\ell(\lambda)+\ell(\pi)+\varepsilon(\lambda)]/2} \langle\lambda\rangle(\pi)\frac{p_{\pi}}{z_{\pi}},$$ where $$z_{\pi}=1^{m_1}m_1!\cdot 2^{m_2}m_2!\cdot \cdots, \quad \mbox{if $\pi=\langle 1^{m_1}2^{m_2}\cdots \rangle$.}$$

Stanley [@stanl2002] introduced a degree operator on symmetric functions by defining $\deg(p_i)=1$, and so $\deg(p_{\nu})=\ell(\nu)$. Clifford [@cliff2005] applied this operator to Schur’s $Q$-functions and obtained the following lower bound from Theorem \[conn\].

\[atleast\] The terms of the lowest degree in $Q_{\lambda}$ have degree at least ${\rm srank}(\lambda)$.

The following conjecture was proposed by Clifford: The terms of the lowest degree in $Q_{\lambda}$ have degree ${\rm srank}(\lambda)$. Our proof of this conjecture depends on the Pfaffian formula for Schur’s $Q$-functions.
Given a skew-symmetric matrix $A=(a_{i,j})$ of even size $2n\times 2n$, the *Pfaffian* of $A$, denoted ${\rm Pf}(A)$, is defined by $${\rm Pf}(A)=\sum_{\pi}(-1)^{{\rm cr}(\pi)} a_{i_1j_1}\cdots a_{i_nj_n},$$ where the sum ranges over all set partitions $\pi$ of $\{1,2,\cdots, 2n\}$ into two-element blocks $\{i_k,j_k\}$ with $i_k<j_k$, and ${\rm cr}(\pi)$ is the number of crossings of $\pi$, i.e., the number of pairs $h<k$ for which $i_h<i_k<j_h<j_k$.

\[pfexp\] Given a strict partition $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_{2n})$ satisfying $\lambda_1>\ldots>\lambda_{2n}\geq 0$, let $M_{\lambda}=(Q_{(\lambda_i,\lambda_j)})$. Then we have $$Q_{\lambda}={\rm Pf}(M_{\lambda}).$$

We first prove that Clifford’s conjecture holds for strict partitions of length less than three. The proof for the general case relies on this special case.

\[len2\] Let $\lambda$ be a strict partition of length $\ell(\lambda)<3$. Then the terms of the lowest degree in $Q_{\lambda}$ have degree ${\rm srank}(\lambda)$.

In view of Theorem \[mnrule\] and Theorem \[conn\], if there exists a unique bar tableau of shape $\lambda$ and type $\pi$, then the coefficient of $p_{\pi}$ is nonzero in the expansion of $Q_{\lambda}$. There are five cases to consider.

- $\ell(\lambda)=1$ and $\lambda_1$ is odd. Clearly, we have ${\rm srank}(\lambda)=1$. Note that there exists a unique bar tableau $T$ of shape $\lambda$ and of type $\lambda$ with all squares of $S(\lambda)$ filled with $1$’s. Therefore, the coefficient of $p_{\lambda}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the lowest degree of $Q_{\lambda}$ is $1$.

- $\ell(\lambda)=1$ and $\lambda_1$ is even. We see that ${\rm srank}(\lambda)=2$. Since the bars are all of odd size, there does not exist any bar tableau of shape $\lambda$ and of type $\lambda$. But there is a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1-1,1)$, which is obtained by filling the rightmost square of $S(\lambda)$ with $2$ and the remaining squares with $1$’s.
So the coefficient of $p_{(\lambda_1-1,1)}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the terms of the lowest degree in $Q_{\lambda}$ have degree $2$. - $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ have different parity. In this case, we have ${\rm srank}(\lambda)=1$. Note that there exists a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1+\lambda_2)$, which is obtained by filling all the squares of $S(\lambda)$ with $1$’s. Thus, the coefficient of $p_{\lambda_1+\lambda_2}$ in the power sum expansion of $Q_{\lambda}$ is nonzero and the terms of lowest degree in $Q_{\lambda}$ have degree $1$. - $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ are both even. It is easy to see that ${\rm srank}(\lambda)=2$. Since there exists a unique bar tableau $T$ of shape $\lambda$ and of type $(\lambda_1-1,\lambda_2+1)$, which is obtained by filling the rightmost $\lambda_2+1$ squares in the first row of $S(\lambda)$ with $2$’s and the remaining squares with $1$’s, the coefficient of $p_{(\lambda_1-1,\lambda_2+1)}$ in the power sum expansion of $Q_{\lambda}$ is nonzero; hence the lowest degree of $Q_{\lambda}$ is equal to $2$. - $\ell(\lambda)=2$ and the two parts $\lambda_1,\lambda_2$ are both odd. In this case, we have ${\rm srank}(\lambda)=2$. By Corollary \[2odd\], the coefficient of $p_{\lambda}$ in the power sum expansion of $Q_{\lambda}$ is nonzero, and therefore the terms of the lowest degree in $Q_{\lambda}$ have degree $2$. This completes the proof. Given a strict partition $\lambda$, we consider the Pfaffian expansion of $Q_{\lambda}$ as shown in Theorem \[pfexp\]. To prove Clifford’s conjecture, we need to determine which terms may appear in the expansion of $Q_{\lambda}$ in terms of power sum symmetric functions. 
Suppose that the Pfaffian expansion of $Q_{\lambda}$ is as follows: $$\label{q-expand} {\rm Pf}(M_{\lambda})=\sum_{\pi}(-1)^{{\rm cr}(\pi)} Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})},$$ where the sum ranges over all set partitions $\pi$ of $\{1,2,\cdots, 2m\}$ into two-element blocks $\{(\pi_1,\pi_2),\ldots,(\pi_{2m-1},\pi_{2m})\}$ with $\pi_1<\pi_3<\cdots<\pi_{2m-1}$ and $\pi_{2k-1}<\pi_{2k}$ for any $k$. For the above expansion of $Q_{\lambda}$, the following two lemmas will be used to choose certain lowest degree terms in the power sum expansion of $Q_{(\lambda_i,\lambda_j)}$ in the matrix $M_\lambda$.

\[lemma1\] Suppose that $\lambda$ has both odd parts and even parts. Let $\lambda_{i_1}$ (resp. $\lambda_{j_1}$) be the largest odd (resp. even) part of $\lambda$. If the power sum symmetric function $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears in the terms of lowest degree originating from the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ as in the expansion (\[q-expand\]), then we have $(\pi_1,\pi_2)=(i_1,j_1)$.

Without loss of generality, we may assume that $\lambda_{i_1}> \lambda_{j_1}$. By Lemma \[len2\], the term $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears in $Q_{(\lambda_{i_1}, \lambda_{j_1})}$ with nonzero coefficient. Since $\lambda_{i_1}, \lambda_{j_1}$ are the largest odd and even parts, $p_{\lambda_{i_1}+\lambda_{j_1}}$ does not appear as a factor of any term of the lowest degree in the expansion of $Q_{(\lambda_{i_k}, \lambda_{j_k})}$, where $\lambda_{i_k}$ and $\lambda_{j_k}$ have different parity. Meanwhile, if $\lambda_{i_k}$ and $\lambda_{j_k}$ have the same parity, then we consider the bar tableaux of shape $(\lambda_{i_k}, \lambda_{j_k})$ and of type $(\lambda_{i_1}+\lambda_{j_1}, \lambda_{i_k}+ \lambda_{j_k}-\lambda_{i_1}-\lambda_{j_1})$. Observe that $\lambda_{i_k}+ \lambda_{j_k}-\lambda_{i_1}-\lambda_{j_1}<\lambda_{j_k}$.
Since the lowest degree of $Q_{(\lambda_{i_k}, \lambda_{j_k})}$ is $2$, from Lemma \[vanishbar\] it follows that $p_{\lambda_{i_1}+\lambda_{j_1}}$ cannot be a factor of any term of lowest degree in the power sum expansion of $Q_{(\lambda_{i_k}, \lambda_{j_k})}$. This completes the proof.

\[lemma2\] Suppose that $\lambda$ only has even parts. Let $\lambda_1, \lambda_2$ be the two largest parts of $\lambda$ (allowing $\lambda_2=0$). If the product $p_{\lambda_1-1}p_{\lambda_2+1}$ appears in the terms of the lowest degree given by the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ as in (\[q-expand\]), then we have $(\pi_1,\pi_2)=(1,2)$.

From Case (4) of the proof of Lemma \[len2\] it follows that $p_{\lambda_1-1}p_{\lambda_2+1}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_1,\lambda_2)}$. We next consider the power sum expansion of any other $Q_{(\lambda_i,\lambda_j)}$. First, we consider the case when $\lambda_i+\lambda_j>\lambda_2+1$ and $\lambda_i \leq\lambda_2$. Since $\lambda_i+\lambda_j-(\lambda_2+1)<\lambda_j$, by Lemma \[vanishbar\], the term $p_{\lambda_2+1}$ is not a factor of any term of the lowest degree in the power sum expansion of $Q_{(\lambda_i,\lambda_j)}$. Now we are left with the case when $\lambda_i+\lambda_j>\lambda_1-1$ and $\lambda_i\leq \lambda_1-2$. Since $\lambda_i+\lambda_j-(\lambda_1-1)<\lambda_j$, by Lemma \[vanishbar\] the term $p_{\lambda_1-1}$ does not appear as a factor in the terms of the lowest degree of $Q_{(\lambda_i,\lambda_j)}$. So we have shown that if either $p_{\lambda_2+1}$ or $p_{\lambda_1-1}$ appears as a factor of some lowest degree term for $Q_{(\lambda_i,\lambda_j)}$, then we deduce that $\lambda_i=\lambda_1$. Moreover, if both $p_{\lambda_1-1}$ and $p_{\lambda_2+1}$ are factors of the lowest degree terms in the power sum expansion of $Q_{(\lambda_1,\lambda_j)}$, then we have $\lambda_j=\lambda_2$. The proof is complete.
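Before turning to the main theorem, we note that the signed sum over pair partitions defining the Pfaffian can be evaluated by brute force for small matrices. The following Python sketch (our own code; all names are ours) does exactly this, and for $2n=4$ it reproduces the familiar expansion ${\rm Pf}(A)=a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23}$.

```python
from itertools import combinations

def matchings(elems):
    """All perfect matchings of elems as lists of pairs (i, j) with i < j,
    listed in increasing order of first entries."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for idx, partner in enumerate(rest):
        for m in matchings(rest[:idx] + rest[idx + 1:]):
            yield [(first, partner)] + m

def crossings(pairs):
    """Number of crossing pairs of blocks: i_h < i_k < j_h < j_k."""
    return sum(1 for (a, b), (c, d) in combinations(pairs, 2)
               if a < c < b < d or c < a < d < b)

def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix, by the signed matching sum."""
    n = len(A)
    total = 0
    for m in matchings(list(range(n))):
        term = 1
        for i, j in m:
            term *= A[i][j]
        total += (-1) ** crossings(m) * term
    return total
```

The number of matchings is $(2n-1)!!$, so this is only practical for small $n$; it is meant as an executable restatement of the definition, not an efficient algorithm.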
We now present the main result of this paper. For any $\lambda\in\mathcal{D}(n)$, the terms of the lowest degree in $Q_\lambda$ have degree ${\rm srank}(\lambda)$. We write the strict partition $\lambda$ in the form $(\lambda_1,\lambda_2,\ldots,\lambda_{2m})$, where $\lambda_1>\ldots>\lambda_{2m}\geq 0$. Suppose that the partition $\lambda$ has $o$ odd parts and $e$ even parts (including $0$ as a part). For the sake of presentation, let $(\lambda_{i_1},\lambda_{i_2},\ldots,\lambda_{i_o})$ denote the sequence of odd parts in decreasing order, and let $(\lambda_{j_1},\lambda_{j_2},\ldots,\lambda_{j_e})$ denote the sequence of even parts in decreasing order. We first consider the case $o\geq e$. In this case, it will be shown that ${\rm srank}(\lambda)=o$. By Theorem \[min bar\], if $\lambda_{2m}>0$, i.e., $\ell(\lambda)=2m$, then we have $${\rm srank}(\lambda)=\max(o,e+0)=o.$$ If $\lambda_{2m}=0$, i.e., $\ell(\lambda)=2m-1$, then we still have $${\rm srank}(\lambda)=\max(o,(e-1)+1)=o.$$ Let $$A=p_{\lambda_{i_1}+\lambda_{j_1}}\cdots p_{\lambda_{i_e}+\lambda_{j_e}}p_{\lambda_{i_{e+1}}}p_{\lambda_{i_{e+2}}}\cdots p_{\lambda_{i_o}}.$$ We claim that $A$ appears as a term of the lowest degree in the power sum expansion of $Q_{\lambda}$. For this purpose, we need to determine those matchings $\pi$ of $\{1,2,\ldots,2m\}$ in \[q-expand\] for which the power sum expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$ contains $A$ as a term of the lowest degree. By Lemma \[lemma1\], if $p_{\lambda_{i_1}+\lambda_{j_1}}$ appears as a factor in the lowest degree terms of the power sum expansion of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then we have $\{\pi_1,\pi_2\}=\{i_1,j_1\}$. 
Iterating this argument, we see that if $p_{\lambda_{i_1}+\lambda_{j_1}}\cdots p_{\lambda_{i_e}+\lambda_{j_e}}$ appears as a factor in the lowest degree terms of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then we have $$\{\pi_1,\pi_2\}=\{i_1,j_1\},\ldots,\{\pi_{2e-1},\pi_{2e}\}=\{i_e,j_e\}.$$ It remains to determine the ordered pairs $$\{(\pi_{2e+1},\pi_{2e+2}),\ldots,(\pi_{2m-1},\pi_{2m})\}.$$ By the same argument as in Case (5) of the proof of Lemma \[len2\], for any $e+1\leq k<l\leq o$, the product $p_{\lambda_{i_{k}}}p_{\lambda_{i_{l}}}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{i_k},\lambda_{i_l})}$. Moreover, if the power sum symmetric function $p_{\lambda_{i_{e+1}}}p_{\lambda_{i_{e+2}}}\cdots p_{\lambda_{i_o}}$ appears as a term of the lowest degree in the power sum expansion of the product $Q_{(\lambda_{\pi_{2e+1}},\lambda_{\pi_{2e+2}})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then the pairs $\{(\pi_{2e+1},\pi_{2e+2}),\ldots,(\pi_{2m-1},\pi_{2m})\}$ may form any matching of $\{1,2,\ldots,2m\}\setminus\{i_1,j_1,\ldots,i_e,j_e\}$. To summarize, there are $(2(m-e)-1)!!$ matchings $\pi$ such that $A$ appears as a term of the lowest degree in the power sum expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$. Combining Corollary \[2odd\] and Theorem \[conn\], we find that the coefficient of $p_{\lambda_{i_k}}p_{\lambda_{i_l}}$ $(e+1\leq k<l\leq o)$ in the power sum expansion of $Q_{(\lambda_{i_k}, \lambda_{i_l})}$ is $-\frac{4}{\lambda_{i_k}\lambda_{i_l}}$. It follows that the coefficient of $A$ in the expansion of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}}, \lambda_{\pi_{2m}})}$ is independent of the choice of $\pi$. Since $(2(m-e)-1)!!$ is an odd number, the term $A$ does not vanish in the expansion of $Q_{\lambda}$. 
Note that the degree of $A$ is $e+(o-e)=o$, which is equal to ${\rm srank}(\lambda)$, as desired. Similarly, we consider the case $e>o$. In this case, we aim to show that ${\rm srank}(\lambda)=e$. By Theorem \[min bar\], if $\lambda_{2m}>0$, i.e., $\ell(\lambda)=2m$, then we have $${\rm srank}(\lambda)=\max(o,e+0)=e.$$ If $\lambda_{2m}=0$, i.e., $\ell(\lambda)=2m-1$, then we still have $${\rm srank}(\lambda)=\max(o,(e-1)+1)=e.$$ Let $$B=p_{\lambda_{i_1}+\lambda_{j_1}}\cdots p_{\lambda_{i_o}+\lambda_{j_o}}p_{\lambda_{j_{o+1}}-1}p_{\lambda_{j_{o+2}}+1}\cdots p_{\lambda_{j_{e-1}}-1}p_{\lambda_{j_e}+1}.$$ We proceed to prove that $B$ appears as a term of the lowest degree in the power sum expansion of $Q_{\lambda}$. Applying Lemma \[lemma1\] repeatedly, we deduce that if $p_{\lambda_{i_1}+\lambda_{j_1}}\cdots p_{\lambda_{i_o}+\lambda_{j_o}}$ appears as a factor in the lowest degree terms of the product $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then $$\label{match1} \{\pi_1,\pi_2\}=\{i_1,j_1\},\ldots,\{\pi_{2o-1},\pi_{2o}\}=\{i_o,j_o\}.$$ On the other hand, iteration of Lemma \[lemma2\] reveals that if the power sum symmetric function $p_{\lambda_{j_{o+1}}-1}p_{\lambda_{j_{o+2}}+1}\cdots p_{\lambda_{j_{e-1}}-1}p_{\lambda_{j_e}+1}$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{\pi_{2o+1}},\lambda_{\pi_{2o+2}})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then $$\label{match2} \{\pi_{2o+1},\pi_{2o+2}\}=\{j_{o+1},j_{o+2}\},\ldots,\{\pi_{2m-1},\pi_{2m}\}=\{j_{e-1},j_e\}.$$ Therefore, if $B$ appears as a term of the lowest degree in the power sum expansion of $Q_{(\lambda_{\pi_1},\lambda_{\pi_2})}\cdots Q_{(\lambda_{\pi_{2m-1}},\lambda_{\pi_{2m}})}$, then the matching $\pi$ is uniquely determined by \[match1\] and \[match2\]. Note that the degree of $B$ is $e$, which coincides with ${\rm srank}(\lambda)$. 
Since there is always a term of degree ${\rm srank}(\lambda)$ in the power sum expansion of $Q_\lambda$, the theorem follows. Skew Schur’s $Q$-functions ========================== In this section, we show that the srank ${\rm srank}(\lambda/\mu)$ is a lower bound for the lowest degree of the terms in the power sum expansion of the skew Schur’s $Q$-function $Q_{\lambda/\mu}$. Note that Clifford’s conjecture does not hold for skew shapes. We first recall a definition of the skew Schur’s $Q$-function in terms of strip tableaux. The concept of strip tableaux was introduced by Stembridge [@stemb1988] to describe the Morris rule for the evaluation of irreducible spin characters. Given a skew partition $\lambda/\mu$, the *$j$-th diagonal* of the skew shifted diagram $S(\lambda/\mu)$ is defined as the set of squares $(1,j), (2, j+1), (3, j+2), \ldots$ in $S(\lambda/\mu)$. A skew diagram $S(\lambda/\mu)$ is called a *strip* if it is rookwise connected and each diagonal contains at most one box. The *height* $h$ of a strip is defined to be the number of rows it occupies. A *double strip* is a skew diagram formed by the union of two strips which both start on the diagonal consisting of squares $(j,j)$. The *depth* of a double strip is defined to be $\alpha+\beta$ if it has $\alpha$ diagonals of length two and its diagonals of length one occupy $\beta$ rows. A *strip tableau* of shape $\lambda/\mu$ and type $\pi=(\pi_1,\ldots,\pi_k)$ is defined to be a sequence of shifted diagrams $$S(\mu)=S(\lambda^0)\subseteq S(\lambda^1)\subseteq \cdots \subseteq S(\lambda^k)=S(\lambda)$$ with $|\lambda^i/\lambda^{i-1}|=\pi_i$ ($1\leq i\leq k$) such that each skew shifted diagram $S(\lambda^i/\lambda^{i-1})$ is either a strip or a double strip. The skew Schur’s $Q$-function can be defined as the weight generating function of strip tableaux in the following way. 
For a strip of height $h$ we assign the weight $(-1)^{h-1}$, and for a double strip of depth $d$ we assign the weight $2(-1)^{d-1}$. The weight of a strip tableau $T$, denoted $wt(T)$, is the product of the weights of strips and double strips of which $T$ is composed. Then the skew Schur’s $Q$-function $Q_{\lambda/\mu}$ is given by $$Q_{\lambda/\mu}=\sum_{\pi\in \mathcal{P}^o(|\lambda/\mu|)}\sum_{T} 2^{\ell(\pi)}wt(T)\frac{p_{\pi}}{z_{\pi}},$$ where the inner sum ranges over all strip tableaux $T$ of shape $\lambda/\mu$ and type $\pi$, see [@stemb1988 Theorem 5.1]. J$\rm{\acute{o}}$zefiak and Pragacz [@jozpra1991] obtained the following Pfaffian formula for the skew Schur’s $Q$-function. \[skewpf\] Let $\lambda, \mu$ be strict partitions with $m=\ell(\lambda)$, $n=\ell(\mu)$, $\mu\subset \lambda$, and let $M(\lambda,\mu)$ denote the skew-symmetric matrix $$\begin{pmatrix} A & B\\ -B^t & 0 \end{pmatrix},$$ where $A=(Q_{(\lambda_i,\lambda_j)})$ and $B=(Q_{(\lambda_i-\mu_{n+1-j})})$. Then - if $m+n$ is even, we have $Q_{\lambda/\mu}={\rm Pf}(M(\lambda,\mu))$; - if $m+n$ is odd, we have $Q_{\lambda/\mu}={\rm Pf}(M(\lambda,\mu^\prime))$, where $\mu^\prime=(\mu_1,\cdots,\mu_n, 0)$. A combinatorial proof of the above theorem was given by Stembridge [@stemb1990] in terms of lattice paths, and later, Hamel [@hamel1996] gave an interesting generalization by using the border strip decompositions of the shifted diagram. Given a skew partition $\lambda/\mu$, Clifford [@cliff2003] constructed a bijection between skew bar tableaux of shape $\lambda/\mu$ and skew strip tableaux of the same shape, which preserves the type of the tableau. Using this bijection, it is straightforward to derive the following result. The terms of the lowest degree in $Q_{\lambda/\mu}$ have degree at least ${\rm srank}(\lambda/\mu)$. In contrast with the non-skew case, the lowest degree terms in $Q_{\lambda/\mu}$ do not, in general, have degree ${\rm srank}(\lambda/\mu)$. 
For example, take the skew partition $(4,3)/3$. It is easy to see that ${\rm srank}((4,3)/3)=2$. However, using Theorem \[skewpf\] and Stembridge’s SF Package for Maple [@stem2], we obtain that $$Q_{(4,3)/3}={\rm Pf} \begin{pmatrix}0 & Q_{(4,3)} & Q_{(4)} & Q_{(1)}\\[5pt] Q_{(3,4)} & 0 & Q_{(3)} & Q_{(0)}\\[5pt] -Q_{(4)} & -Q_{(3)}& 0 & 0\\[5pt] -Q_{(1)} & -Q_{(0)}& 0 & 0 \end{pmatrix}=2p_1^4.$$ This shows that the lowest degree of $Q_{(4,3)/3}$ equals 4, which is strictly greater than ${\rm srank}((4,3)/3)$. The srank of skew partitions {#sect4} ============================ In this section, we present an algorithm to determine the srank for the skew partition $\lambda/\mu$. The algorithm produces a configuration of $0$’s in the shifted diagram $S(\lambda)$ that minimizes the quantity $\kappa$ defined below. To obtain the srank of a skew partition, we need to minimize the number of bars by adjusting the positions of $0$’s. Given a configuration $\mathcal{C}$ of $0$’s in the shifted diagram $S(\lambda)$, let $$\kappa(\mathcal{C})=o_s+2e_s+\max(o_r,e_r+((e_r+o_r)\ \mathrm{mod}\ 2)),$$ where $o_r$ (resp. $e_r$) counts the number of nonempty rows in which there are an odd (resp. even) number of squares and no squares are filled with $0$, and $o_s$ (resp. $e_s$) records the number of rows in which at least one square is filled with $0$ but there are an odd (resp. nonzero even) number of blank squares. If there exists at least one bar tableau of type $\lambda/\mu$ under some configuration $\mathcal{C}$, we say that $\mathcal{C}$ is *admissible*. 
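The statistic $\kappa(\mathcal{C})$ is elementary to compute once the four counts are extracted from a configuration. The following Python sketch is offered only as an illustration; the encoding of a configuration as a list of (row length, number of $0$'s) pairs is our own.

```python
def kappa(o_s, e_s, o_r, e_r):
    """kappa(C) = o_s + 2*e_s + max(o_r, e_r + ((e_r + o_r) mod 2))."""
    return o_s + 2 * e_s + max(o_r, e_r + (e_r + o_r) % 2)

def counts(rows):
    """rows: one (length, zeros) pair per nonempty row of S(lambda).
    Returns (o_s, e_s, o_r, e_r) as defined in the text."""
    o_s = e_s = o_r = e_r = 0
    for length, zeros in rows:
        blanks = length - zeros
        if zeros == 0:                 # row without 0's
            if blanks % 2 == 1:
                o_r += 1
            else:
                e_r += 1
        elif blanks % 2 == 1:          # some 0's, odd number of blanks
            o_s += 1
        elif blanks > 0:               # some 0's, nonzero even number of blanks
            e_s += 1
        # rows completely filled with 0's contribute to none of the counts
    return o_s, e_s, o_r, e_r
```

For the configuration of $(4,3)/3$ in which the second row is entirely filled with $0$'s, `counts([(4, 0), (3, 3)])` gives $(0, 0, 0, 1)$, so $\kappa = \max(0, 1+1) = 2$, in agreement with ${\rm srank}((4,3)/3)=2$ above.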
For a fixed configuration $\mathcal{C}$, each row is one of the following eight possible types: - an even row containing an even number of $0$’s, denoted $(e,e)$, - an odd row containing an even number of $0$’s, denoted $(e,o)$, - an odd row containing an odd number of $0$’s, denoted $(o,e)$, - an even row containing an odd number of $0$’s, denoted $(o,o)$, - an even row without $0$’s, denoted $(\emptyset,e)$, - an odd row without $0$’s, denoted $(\emptyset,o)$, - an even row filled with $0$’s, denoted $(e, \emptyset)$, - an odd row filled with $0$’s, denoted $(o, \emptyset)$. Given two rows with respective types $s$ and $s'$ for some configuration $\mathcal{C}$, if we can obtain a new configuration $\mathcal{C}'$ by exchanging the locations of $0$’s in these two rows such that their new types are $t$ and $t'$ respectively, then we denote this by $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop s'}}\right] \rightarrow \left[\tiny{{{t} \atop {t'}}}\right]\right)$. Let $o_r,e_r,o_s,e_s$ be defined as above corresponding to configuration $\mathcal{C}$, and let $o_r',e_r',o_s',e_s'$ be those of $\mathcal{C}'$. In the following we show how the quantity $\kappa(\mathcal{C})$ changes when the locations of $0$’s in $\mathcal{C}$ are exchanged. \[varyzero1-1\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop s'}}\right] \rightarrow \left[\tiny{{s \atop s'}}\right]\right)$ or $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{s \atop s'}}\right] \rightarrow \left[\tiny{{{s'} \atop s}}\right]\right)$, i.e., the types of the two involved rows are preserved or exchanged, where $s,s'$ are any two possible types, then $\kappa({\mathcal{C}'})= \kappa({\mathcal{C}})$. \[varyzero1-6\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt} {(\emptyset,o)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,e)} \atop\rule{0pt}{10pt} {(e,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$. 
In this case we have $$o_s^\prime=o_s+1, \quad e_s^\prime=e_s-1, \quad o_r^\prime=o_r-1,\quad e_r^\prime=e_r+1.$$ Note that $o_r+e_r=\ell(\lambda)-\ell(\mu)$. Now there are two cases to consider. **Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\ 2)$. - If $o_r\leq e_r$, then $o_r^\prime\leq e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+e_r,\\ \kappa(\mathcal{C}')&=o_s+1+2(e_s-1)+e_r^\prime=o_s+2e_s+e_r=\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\geq e_r+2$, then $o_r^\prime=o_r-1\geq e_r+1=e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+o_r,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r-2<\kappa(\mathcal{C}).\end{aligned}$$ **Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\ 2)$. - If $o_r\leq e_r+1$, then $o_r^\prime<e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+e_r+1,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+e_r^\prime+1=o_s+2e_s+e_r+1=\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\geq e_r+3$, then $o_r^\prime=o_r-1\geq e_r+2>e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+o_r,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r-2<\kappa(\mathcal{C}).\end{aligned}$$ Therefore, the inequality $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$ holds under the assumption. \[varyzero2-6\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(\emptyset,e)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,o)} \atop\rule{0pt}{10pt} {(o,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$. In this case we have $$o_s^\prime=o_s+1, \quad e_s^\prime=e_s-1, \quad o_r^\prime=o_r+1,\quad e_r^\prime=e_r-1.$$ Now there are two possibilities. **Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\ 2)$. 
- If $o_r\leq e_r-2$, then $o_r^\prime\leq e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+e_r,\\ \kappa(\mathcal{C}')&=o_s+1+2(e_s-1)+e_r^\prime=o_s+2e_s+e_r-2<\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\geq e_r$, then $o_r^\prime=o_r+1> e_r-1=e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+o_r,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$ **Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\ 2)$. - If $o_r\leq e_r-3$, then $o_r^\prime<e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+e_r+1,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+e_r^\prime+1=o_s+2e_s+e_r-1<\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\geq e_r-1$, then $o_r^\prime=o_r+1> e_r-1=e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C})&=o_s+2e_s+o_r,\\ \kappa(\mathcal{C}')&=o_s+2e_s-1+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$ In both cases we have $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$, as required. \[varyzero1-4\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt} {(o,e)}}}\right] \rightarrow \left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(e,o)}}}\right]\right)$, then $\kappa({\mathcal{C}'})< \kappa({\mathcal{C}})$. In this case, we have $$o_s^\prime=o_s+2, \quad e_s^\prime=e_s-2, \quad o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore, $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\ \mathrm{mod}\ 2))=\kappa(\mathcal{C})-2.$$ The desired inequality immediately follows. \[varyzero3-5\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,o)} \atop\rule{0pt}{10pt} {(\emptyset,e)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,o)} \atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$. 
Under this transformation we have $$o_s^\prime=o_s-1, \quad e_s^\prime=e_s, \quad o_r^\prime=o_r+1,\quad e_r^\prime=e_r-1.$$ Since $o_r+e_r=\ell(\lambda)-\ell(\mu)$ is invariant, there are two cases. **Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\ 2)$. - If $o_r\geq e_r$, then $o_r^\prime\geq e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime =o_s-1+2e_s+o_r+1 =\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\leq e_r-2$, then $o_r^\prime=o_r+1\leq e_r-1=e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s-1+2e_s+e_r^\prime=o_s+2e_s+e_r-2<\kappa(\mathcal{C}).\end{aligned}$$ **Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\ 2)$. - If $o_r\geq e_r+1$, then $o_r^\prime=o_r+1\geq e_r+2>e_r^\prime+1$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime=o_s+2e_s+o_r=\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\leq e_r-1$, then $o_r^\prime=o_r+1\leq e_r=e_r^\prime+1$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+e_r^\prime+1=o_s+2e_s+e_r-1<\kappa(\mathcal{C}).\end{aligned}$$ Hence the proof is complete. \[varyzero5-8\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(\emptyset,o)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,e)} \atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$. In this case we have $$o_s^\prime=o_s-1, \quad e_s^\prime=e_s, \quad o_r^\prime=o_r-1,\quad e_r^\prime=e_r+1.$$ There are two possibilities: **Case I.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 0\ (\mathrm{mod}\ 2)$. 
- If $o_r\geq e_r+2$, then $o_r^\prime=o_r-1\geq e_r+1=e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime =o_s-1+2e_s+o_r-1<\kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\leq e_r$, then $o_r^\prime=o_r-1\leq e_r-1<e_r^\prime$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s-1+2e_s+e_r^\prime=o_s+2e_s+e_r=\kappa(\mathcal{C}).\end{aligned}$$ **Case II.** The skew partition $\lambda/\mu$ satisfies that $\ell(\lambda)-\ell(\mu) \equiv 1\ (\mathrm{mod}\ 2)$. - If $o_r\geq e_r+3$, then $o_r^\prime=o_r-1\geq e_r+2=e_r^\prime+1$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+o_r^\prime=o_s+2e_s+o_r-2< \kappa(\mathcal{C}).\end{aligned}$$ - If $o_r\leq e_r+1$, then $o_r^\prime=o_r-1\leq e_r<e_r^\prime+1$ and $$\begin{aligned} \kappa(\mathcal{C}')=o_s^\prime+2e_s^\prime+e_r^\prime+1=o_s+2e_s+e_r+1=\kappa(\mathcal{C}).\end{aligned}$$ Therefore, in both cases we have $\kappa({\mathcal{C}'})\leq \kappa({\mathcal{C}})$. \[varyzero2-3\] If $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(e,o)} \atop\rule{0pt}{10pt} {(o,o)}}}\right] \rightarrow \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right)$ or $\mathcal{C}'=\mathcal{C}\left( \left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(e,o)}}}\right] \rightarrow \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right)$, then $\kappa({\mathcal{C}'})= \kappa({\mathcal{C}})$. In each case we have $$o_s^\prime=o_s-2, \quad e_s^\prime=e_s+1, \quad o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\ \mathrm{mod}\ 2))=\kappa(\mathcal{C}),$$ as desired. 
\[varyzero1-7\] If $\mathcal{C}'$ is one of the following possible cases: $$\begin{array}{ccc} \mathcal{C}\left( \left[\tiny{{{(e,e)} \atop \rule{0pt}{10pt} {(e,e)}}}\right] \rightarrow \left[\tiny{{{(e,e)} \atop \rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{{(e,e)} \atop \rule{0pt}{10pt} {(o,o)}}}\right] \rightarrow \left[\tiny{{{(o,o)} \atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{{(e,o)} \atop \rule{0pt}{10pt} {(e,e)}}}\right] \rightarrow \left[\tiny{{{(e,o)} \atop \rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right),\\[10pt] \mathcal{C}\left( \left[\tiny{{{(e,e)} \atop\rule{0pt}{10pt} {(\emptyset,e)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,e)} \atop\rule{0pt}{10pt} {(e,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{{(o,o)} \atop \rule{0pt}{10pt} {(o,e)}}}\right] \rightarrow \left[\tiny{{{(o,o)} \atop \rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{ {(o,e)} \atop\rule{0pt}{10pt} {(e,o)}}}\right] \rightarrow \left[\tiny{{{(e,o)} \atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right),\\[10pt] \mathcal{C}\left( \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(o,e)}}}\right] \rightarrow \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right), & \mathcal{C}\left( \left[\tiny{{{(o,e)} \atop\rule{0pt}{10pt} {(\emptyset,o)}}}\right] \rightarrow \left[\tiny{{{(\emptyset,o)} \atop \rule{0pt}{10pt} {(o,\emptyset)}}}\right]\right),& \end{array}$$ then $\kappa({\mathcal{C}'})< \kappa({\mathcal{C}})$. In each case we have $$o_s^\prime=o_s, \quad e_s^\prime=e_s-1, \quad o_r^\prime=o_r,\quad e_r^\prime=e_r.$$ Therefore $$\kappa(\mathcal{C}')=o_s'+2e_s'+\max(o_r',e_r'+((e_r'+o_r')\ \mathrm{mod}\ 2))<\kappa(\mathcal{C}),$$ as required. Note that Lemmas \[varyzero1-1\]-\[varyzero1-7\] cover all possible transformations of exchanging the locations of $0$’s in two involved rows. 
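Each of the above lemmas reduces to an inequality for $\kappa$ under a fixed change of the four counts $(o_s,e_s,o_r,e_r)$, so the case analyses can be double-checked exhaustively on a finite range. The following Python sketch is offered only as such a check; the dictionary of deltas is our own notation.

```python
def kappa(o_s, e_s, o_r, e_r):
    """kappa(C) = o_s + 2*e_s + max(o_r, e_r + ((e_r + o_r) mod 2))."""
    return o_s + 2 * e_s + max(o_r, e_r + (e_r + o_r) % 2)

# Change of (o_s, e_s, o_r, e_r) effected by each exchange lemma.
DELTAS = {
    "1-6": (+1, -1, -1, +1),   # kappa decreases or stays equal
    "2-6": (+1, -1, +1, -1),   # kappa decreases or stays equal
    "1-4": (+2, -2,  0,  0),   # kappa drops by exactly 2
    "3-5": (-1,  0, +1, -1),   # kappa decreases or stays equal
    "5-8": (-1,  0, -1, +1),   # kappa decreases or stays equal
    "2-3": (-2, +1,  0,  0),   # kappa is unchanged
}

def check_lemmas(bound=6):
    """Exhaustively verify kappa(C') <= kappa(C) for every delta,
    over all admissible nonnegative counts below the bound."""
    for dos, des, dor, der in DELTAS.values():
        for o_s in range(max(0, -dos), bound):
            for e_s in range(max(0, -des), bound):
                for o_r in range(max(0, -dor), bound):
                    for e_r in range(max(0, -der), bound):
                        before = kappa(o_s, e_s, o_r, e_r)
                        after = kappa(o_s + dos, e_s + des,
                                      o_r + dor, e_r + der)
                        if after > before:
                            return False
    return True
```

The two exact statements are immediate from the linear part of $\kappa$: the delta $(+2,-2,0,0)$ changes $o_s+2e_s$ by $-2$ and fixes the maximum term, while $(-2,+1,0,0)$ changes it by $0$.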
Lemmas \[varyzero1-6\]-\[varyzero1-4\] imply that, to minimize the number of bars, we should place the $0$’s in the skew shifted diagram so that as many rows as possible have their leading squares filled with $0$’s followed by an odd number of blank squares. Meanwhile, from Lemmas \[varyzero3-5\]-\[varyzero1-7\] we know that the number of rows fully filled with $0$’s should be as large as possible. Based on these observations, we have the following algorithm to determine the location of $0$’s for a given skew partition $\lambda/\mu$, where both $\lambda$ and $\mu$ are strict partitions. Using this algorithm we will obtain a shifted diagram with some squares filled with $0$’s such that the corresponding quantity $\kappa(\mathcal{C})$ is minimized. This property allows us to determine the srank of $\lambda/\mu$. [**The Algorithm for Determining the Locations of $0$’s:**]{} - Let $\mathcal{C}_1=S(\lambda)$ be the initial configuration of $\lambda/\mu$ with all squares blank. Set $i=1$ and $J=\{1,\ldots,\ell(\lambda)\}$. - For $i\leq \ell(\mu)$, iterate the following procedure: - If $\mu_i=\lambda_j$ for some $j\in J$, then we fill the $j$-th row of $\mathcal{C}_i$ with $0$. - If $\mu_i\neq \lambda_j$ for any $j\in J$, then there are two possibilities. - $\lambda_j-\mu_i$ is odd for some $j\in J$ with $\lambda_j>\mu_i$. Then we take the largest such $j$ and fill the leftmost $\mu_i$ squares with $0$ in the $j$-th row of $\mathcal{C}_i$. - $\lambda_j-\mu_i$ is even for every $j\in J$ with $\lambda_j>\mu_i$. Then we take the largest such $j$ and fill the leftmost $\mu_i$ squares with $0$ in the $j$-th row of $\mathcal{C}_i$. Denote the new configuration by $\mathcal{C}_{i+1}$. Set $J=J\backslash \{j\}$. - Set $\mathcal{C}^{*}=\mathcal{C}_{i}$, and we get the desired configuration. It should be emphasized that although the above algorithm does not necessarily generate a bar tableau, it is sufficient for the computation of the srank of a skew partition. 
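To make the procedure concrete, here is a direct Python transcription of the algorithm, followed by evaluation of the statistic $\kappa$ on the resulting configuration $\mathcal{C}^*$. The encoding of partitions as tuples of parts and the helper name are ours; this sketch is meant only as an illustration.

```python
def srank_skew(lam, mu):
    """Run the 0-placement algorithm on S(lam) for the parts of mu
    (both strict, parts in decreasing order), then return kappa(C*)."""
    zeros = [0] * len(lam)              # zeros[j] = number of 0's in row j
    J = set(range(len(lam)))            # rows still available
    for m in mu:
        exact = [j for j in J if lam[j] == m]
        if exact:
            j = exact[0]
            zeros[j] = lam[j]           # fill the whole row with 0's
        else:
            odd = [j for j in J if lam[j] > m and (lam[j] - m) % 2 == 1]
            cand = odd or [j for j in J if lam[j] > m]
            j = max(cand)               # the largest such row index j
            zeros[j] = m                # leftmost m squares get 0's
        J.discard(j)
    # compute (o_s, e_s, o_r, e_r) for the configuration C*
    o_s = e_s = o_r = e_r = 0
    for length, z in zip(lam, zeros):
        blanks = length - z
        if z == 0:
            if blanks % 2:
                o_r += 1
            else:
                e_r += 1
        elif blanks % 2:
            o_s += 1
        elif blanks > 0:
            e_s += 1
    return o_s + 2 * e_s + max(o_r, e_r + (e_r + o_r) % 2)
```

On the earlier example, `srank_skew((4, 3), (3,))` returns $2$, matching ${\rm srank}((4,3)/3)=2$; with $\mu$ empty the value agrees with the non-skew formula $\max(o,e)$.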
Using the arguments in the proofs of Lemmas \[varyzero1-1\]-\[varyzero1-7\], we can derive the following crucial property of the configuration $\mathcal{C}^*$. The proof is omitted since it is tedious and straightforward. \[prop-min\] For any configuration ${\mathcal{C}}$ of $0$’s in the skew shifted diagram of $\lambda/\mu$, we have $\kappa({\mathcal{C}^*})\leq \kappa({\mathcal{C}})$. \[number of skew\] Given a skew partition $\lambda/\mu$, let $\mathcal{C}^*$ be the configuration of $0$’s obtained by applying the algorithm described above. Then $$\label{srank} {\rm srank}(\lambda/\mu)=\kappa({\mathcal{C}^*}).$$ Suppose that for the configuration ${\mathcal{C}^*}$ there are $o_r^*$ rows of odd size containing only blank squares, and $o_s^*$ rows with at least one square filled with $0$ and an odd number of blank squares. Let $e_r^*$ and $e_s^*$ denote the corresponding numbers of rows with “odd” replaced by “even” (nonzero even, in the case of $e_s^*$). Therefore, $$\kappa(\mathcal{C}^*)=o_s^*+2e_s^*+\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2)).$$ Since for each configuration $\mathcal{C}$ the number of bars in a minimal bar tableau is greater than or equal to $\kappa({\mathcal{C}})$, by Proposition \[prop-min\], it suffices to confirm the existence of a skew bar tableau, say $T$, with $\kappa({\mathcal{C}^*})$ bars. Note that it is possible that the configuration ${\mathcal{C}^*}$ is not admissible. The key idea of our proof is to move the $0$’s in the diagram such that the resulting configuration ${\mathcal{C}'}$ is admissible and $\kappa({\mathcal{C}'})=\kappa({\mathcal{C}^*})$. To achieve this goal, we will use the numbers $\{1,2,\ldots,\kappa({\mathcal{C}^*})\}$ to fill up the blank squares of $\mathcal{C}^*$ guided by the rule that bars of Type $2$ or Type $3$ occur before bars of Type $1$. Let us consider the rows without $0$’s; there are two possibilities: (A) $o_r^*\geq e_r^*$, (B) $o_r^*<e_r^*$. 
In Case (A) we choose a row of even size and a row of odd size, and fill these two rows with $\kappa({\mathcal{C}^*})$ to generate a bar of Type $3$. Then we continue to choose a row of even size and a row of odd size, and fill these two rows with $\kappa({\mathcal{C}^*})-1$. Repeat this procedure until all even rows are filled up. Finally, we fill the remaining rows of odd size with $\kappa({\mathcal{C}^*})-e_r^*, \kappa({\mathcal{C}^*})-e_r^*-1, \ldots, \kappa({\mathcal{C}^*})-o_r^*+1$ to generate bars of Type $2$. In Case (B) we choose the row with the $i$-th smallest even size and the row with the $i$-th smallest odd size and fill their squares with the number $\kappa({\mathcal{C}^*})-i+1$ for $i=1,\ldots,o_r^*$. In this way, we obtain $o_r^*$ bars of Type $3$. Now consider the remaining rows of even size without $0$’s. There are two subcases. - The remaining diagram, obtained by removing the previous $o_r^*$ bars of Type $3$, does not contain any row with only one square. Under this assumption, it is possible to fill all squares of a row of even size except the leftmost one with the number $\kappa({\mathcal{C}^*})-o_r^*$. This operation results in a bar of Type $1$. After removing this bar from the diagram, we may combine the leftmost square of the current row with another row of even size, if it exists, to generate a bar of Type $3$. Repeating this procedure until there are no more rows of even size, we obtain a sequence of bars of Type $1$ and Type $3$. Evidently, there is a bar of Type $2$ with only one square. To summarize, we have $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2))$ bars. - The remaining diagram contains a row consisting of a unique square, which is filled with $0$. In this case, we move this $0$ into the leftmost square of a row of even size, see Figure \[case2-2\]. Denote this new configuration by $\mathcal{C}^{\prime}$; from Lemma \[varyzero5-8\] we see that $\kappa({\mathcal{C}^*})=\kappa({\mathcal{C}^{\prime}})$. 
If we start with ${\mathcal{C}'}$ instead of ${\mathcal{C}^*}$, by a similar construction, we get $\max(o_r',e_r'+((e_r'+o_r')\ \mathrm{mod}\ 2))$ bars, occupying the rows without $0$’s in the diagram. (300,100) (40,0)[(1,0)[20]{}]{}(40,20)[(1,0)[20]{}]{} (40,0)[(0,1)[20]{}]{}(60,0)[(0,1)[20]{}]{} (50,25)[$\vdots$]{}(20,40)[(1,0)[80]{}]{} (20,60)[(1,0)[80]{}]{} (20,40)(20,0)[5]{}[(0,1)[20]{}]{} (50,65)[$\vdots$]{} (0,80)(0,20)[2]{}[(1,0)[120]{}]{} (0,80)(20,0)[7]{}[(0,1)[20]{}]{}(48,6)[$0$]{} (130,50)[(1,0)[40]{}]{} (220,0)[(1,0)[20]{}]{}(220,20)[(1,0)[20]{}]{} (220,0)[(0,1)[20]{}]{}(240,0)[(0,1)[20]{}]{} (230,25)[$\vdots$]{}(200,40)[(1,0)[80]{}]{} (200,60)[(1,0)[80]{}]{} (200,40)(20,0)[5]{}[(0,1)[20]{}]{} (230,65)[$\vdots$]{} (180,80)(0,20)[2]{}[(1,0)[120]{}]{} (180,80)(20,0)[7]{}[(0,1)[20]{}]{}(188,86)[$0$]{} Without loss of generality, we may assume that for the configuration ${\mathcal{C}^*}$ the rows without $0$’s in the diagram have been occupied by the bars labeled with the $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2))$ largest numbers $\kappa({\mathcal{C}^*}), \kappa({\mathcal{C}^*})-1, \ldots$, in decreasing order. By removing these bars and reordering the remaining rows, we may get a shifted diagram with which we can continue the above procedure to construct a bar tableau. At this point, it is necessary to show that it is possible to use $o_s^*+2e_s^*$ bars to fill this diagram. In doing so, we process the rows from bottom to top. If the bottom row has an odd number of blank squares, then we simply assign the symbol $o_s^*+2e_s^*$ to these squares to produce a bar of Type $1$. If the bottom row is completely filled with $0$’s, then we continue to deal with the row above the bottom row. Otherwise, we fill the rightmost square of the bottom row with $o_s^*+2e_s^*$ and the remaining squares with $o_s^*+2e_s^*-1$. Suppose that we have filled $i$ rows from the bottom and all the involved bars have been removed from the diagram. 
Then we consider the $(i+1)$-th row from the bottom. Let $t$ denote the largest number not greater than $o_s^*+2e_s^*$ which has not been used before. If all squares in the $(i+1)$-th row are filled with $0$’s, then we continue to deal with the $(i+2)$-th row. If the number of blank squares in the $(i+1)$-th row is odd, then we fill these squares with $t$. If the number of blank squares in the $(i+1)$-th row is even, then we are left with two cases: - The rows of the diagram obtained by removing the rightmost square of the $(i+1)$-th row have distinct lengths. In this case, we fill the rightmost square with $t$ and the remaining blank squares of the $(i+1)$-th row with $t-1$. - The removal of the rightmost square of the $(i+1)$-th row does not result in a bar tableau. Suppose that the $(i+1)$-th row has $m$ squares in total. It can only happen that the row underneath the $(i+1)$-th row has $m-1$ squares and all these squares are filled with $0$’s. By interchanging the location of $0$’s in these two rows, we get a new configuration $\mathcal{C}^{\prime}$, see Figure \[case2’\]. From Lemma \[varyzero2-3\] we deduce that $\kappa({\mathcal{C}^*})=\kappa({\mathcal{C}^{\prime}})$. So we can transform ${\mathcal{C}^*}$ to ${\mathcal{C}'}$ and continue to fill up the $(i+1)$-th row. (340,40) (20,0)[(1,0)[120]{}]{} (0,20)(0,20)[2]{}[(1,0)[140]{}]{} (20,0)(20,0)[7]{}[(0,1)[40]{}]{} (0,20)[(0,1)[20]{}]{} (28,6)(20,0)[6]{}[$0$]{}(8,26)(20,0)[3]{}[$0$]{} (150,20)[(1,0)[40]{}]{} (220,0)[(1,0)[120]{}]{} (200,20)(0,20)[2]{}[(1,0)[140]{}]{} (220,0)(20,0)[7]{}[(0,1)[40]{}]{} (200,20)[(0,1)[20]{}]{} (228,6)(20,0)[3]{}[$0$]{}(208,26)(20,0)[6]{}[$0$]{} Finally, we arrive at a shifted diagram whose rows are all filled up. Clearly, for those rows containing at least one $0$ there are $o_s^*+2e_s^*$ bars that are generated in the construction, and for those rows containing no $0$’s there are $\max(o_r^*,e_r^*+((e_r^*+o_r^*)\ \mathrm{mod}\ 2))$ bars that are generated. 
It has been shown that if, during the procedure of filling the diagram with nonnegative numbers, the configuration ${\mathcal{C}^*}$ is transformed into another configuration ${\mathcal{C}^{\prime}}$, then $\kappa({\mathcal{C}^\prime})$ remains equal to $\kappa({\mathcal{C}^*})$. Hence the above procedure leads to a skew bar tableau of shape $\lambda/\mu$ with $\kappa({\mathcal{C}^*})$ bars. This completes the proof. [**Acknowledgments.**]{} This work was supported by the 973 Project, the PCSIRT Project of the Ministry of Education, the Ministry of Science and Technology, and the National Science Foundation of China. [19]{} P. Clifford, Algebraic and combinatorial properties of minimal border strip tableaux, Ph.D. Thesis, M.I.T., 2003. P. Clifford, Minimal bar tableaux, *Ann. Combin.* **9** (2005), 281–291. P. Clifford and R. P. Stanley, Bottom Schur functions, *Electron. J. Combin.* **11** (2004), Research Paper 67, 16 pp. A. M. Hamel, Pfaffians and determinants for Schur Q-functions, *J. Combin. Theory Ser. A* **75** (1996), 328–340. P. Hoffman and J. F. Humphreys, Projective Representations of the Symmetric Groups, Oxford University Press, Oxford, 1992. J. F. Humphreys, Blocks of projective representations of the symmetric groups, *J. London Math. Soc.* **33** (1986), 441–452. T. J$\rm{\acute{o}}$zefiak, Characters of projective representations of symmetric groups, *Exposition. Math.* **7** (1989), 193–247. T. J$\rm{\acute{o}}$zefiak and P. Pragacz, A determinantal formula for skew Schur $Q$-functions, *J. London Math. Soc.* **43** (1991), 76–90. I. G. Macdonald, Symmetric Functions and Hall Polynomials, 2nd Edition, Oxford University Press, Oxford, 1995. A. O. Morris, The spin representation of the symmetric group, *Proc. London Math. Soc.* **12** (1962), 55–76. A. O. Morris, The spin representation of the symmetric group. *Canad. J. Math.* **17** (1965), 543–549. A. O. Morris, The projective characters of the symmetric group—an alternative proof, *J. London Math. 
Soc.* **19** (1979), 57–58. M. L. Nazarov, An orthogonal basis in irreducible projective representations of the symmetric group, *Funct. Anal. Appl.* **22** (1988), 66–68. M. Nazarov and V. Tarasov, On irreducibility of tensor products of Yangian modules associated with skew Young diagrams, *Duke Math. J.* **112** (2002), 343–378. B. E. Sagan, Shifted tableaux, Schur $Q$-functions, and a conjecture of R. Stanley, *J. Combin. Theory Ser. A* **45** (1987), 62–103. I. Schur, Über die Darstellung der symmetrischen und der alternierenden Gruppe durch gebrochene lineare Substitutionen, *J. Reine Angew. Math.* **139** (1911), 155–250. R. P. Stanly, The rank and minimal border strip decompositions of a skew partition, *J. Combin. Theory Ser. A* **100** (2002), 349–375. J. R. Stembridge, On symmetric functions and the spin characters of $S_n$, Topics in Algebra, Part 2 (Warsaw, 1988), 433–453, Banach Center Publ., 26, Part 2, PWN, Warsaw, 1990. J. R. Stembridge, Shifted tableaux and the projective representations of symmetric groups, *Adv. Math.* **74** (1989), 87–134. J. R. Stembridge, Nonintersecting paths, Pfaffians and plane partitions, *Adv. Math.* **83** (1990), 96–131. J. R. Stembridge, The SF Package for Maple, http://www.math.lsa.umich.edu/\~jrs/maple.html \#SF. D. Worley, A theory of shifted Young tableaux, Ph.D. Thesis, Massachusetts Inst. Tech., Cambridge, Mass., 1984.
--- abstract: 'We give a detailed analysis of the proportion of elements in the symmetric group on $n$ points whose order divides $m$, for $n$ sufficiently large and $m\geq n$ with $m=O(n)$.' address: | School of Mathematics and Statistics,\ University of Western Australia,\ Nedlands, WA 6907\ Australia. author: - 'Alice C. Niemeyer' - 'Cheryl E. Praeger' date: '31 March 2006.' title: On Permutations of Order Dividing a Given Integer --- Introduction ============ The study of orders of elements in finite symmetric groups goes back at least to the work of Landau [@Landau09 p. 222] who proved that the maximum order of an element of the symmetric group $S_n$ on $n$ points is $e^{(1+o(1))(n\log n)^{1/2}}$. Erdős and Turán took a probabilistic approach in their seminal work in the area, proving in [@ErdosTuran65; @ErdosTuran67] that, for a uniformly distributed random element $g\in S_n$, the random variable $\log|g|$ is normally distributed with mean $(1/2) \log^2n$ and standard deviation $\frac{1}{\sqrt{3}} \log^{3/2}(n)$. Thus most permutations in $S_n$ have order considerably larger than $O(n)$. Nevertheless, permutations of order $O(n)$, that is, of order at most $cn$ for some constant $c$, have received some attention in the literature. Let $P(n,m)$ denote the proportion of permutations $g\in S_n$ which satisfy $g^m = 1$, that is to say, $|g|$ divides $m$. In 1952 Chowla, Herstein and Scott [@Chowlaetal52] found a generating function and some recurrence relations for $P(n,m)$ for $m$ fixed, and asked for its asymptotic behaviour for large $n$. Several years later, Moser and Wyman [@MoserWyman55; @MoserWyman56] derived an asymptotic for $P(n,m)$, for a fixed prime number $m$, expressing it as a contour integral. Then in 1986, Wilf [@Wilf86] obtained explicitly the limiting value of $P(n,m)$ for an arbitrary fixed value of $m$ as $n\rightarrow\infty$, see also the paper [@Volynets] of Volynets. 
Other authors have considered equations $g^m=h$, for a fixed integer $m$ and $h\in S_n$, see [@BouwerChernoff85; @GaoZha; @MineevPavlov76a; @MineevPavlov76b]. However in many applications, for example in [@Bealsetal03], the parameters $n$ and $m$ are linearly related, so that $m$ is unbounded as $n$ increases. For the special case where $m=n$, Warlimont [@Warlimont78] showed in 1978 that most elements $g\in S_n$ satisfying $g^n=1$ are $n$-cycles, namely he proved that $P(n,n)$, for $n$ sufficiently large, satisfies $$\frac{1}{n} + \frac{2c}{n^2} \le P(n,n) \le \frac{1}{n} + \frac{2c}{n^2} + O\left(\frac{1}{n^{3-o(1)}}\right)$$ where $c =1$ if $n$ is even and $c=0$ if $n$ is odd. Note that the proportion of $n$-cycles in $S_n$ is $1/n$ and, if $n$ is even, the proportion of elements that are a product of two cycles of length $n/2$ is $2/n^2$. Warlimont’s result proves in particular that most permutations satisfying $g^n=1$ are $n$-cycles. More precisely it implies that the conditional probability that a random element $g\in S_n$ is an $n$-cycle, given that $g^n =1$, lies between $1-2c n^{-1} - O(n^{-2+o(1)})$ and $1-2c n^{-1} + O(n^{-2})$. The main results of this paper, Theorems \[leadingterms\] and \[bounds\], generalise Warlimont’s result, giving a detailed analysis of $P(n,m)$ for large $n$, where $m=O(n)$ and $m\geq n$. For this range of values of $n$ and $m$, we have $rn\leq m<(r+1)n$ for some positive integer $r$, and we analyse $P(n,m)$ for $m$ in this range, for a fixed value of $r$ and $n\rightarrow\infty$. It turns out that the kinds of elements that make the largest contribution to $P(n,m)$ depend heavily on the arithmetic nature of $m$, for example, on whether $m$ is divisible by $n$ or by $r+1$. We separate out several cases in the statement of our results. Theorem \[leadingterms\] deals with two cases for which we give asymptotic expressions for $P(n,m)$. 
The first of these reduces in the case $m=n$ to Warlimont’s theorem [@Warlimont78] (modulo a small discrepancy in the error term). For other values of $m$ lying strictly between $rn$ and $(r+1)n$ we obtain in Theorem \[bounds\] only an upper bound for $P(n,m)$, since the exact value depends on both the arithmetic nature and the size of $m$ (see also Remark \[remark:leadinterms\]). \[leadingterms\] Let $n$ and $r$ be positive integers. Then for a fixed value of $r$ and sufficiently large $n$, the following hold. 1. $\displaystyle{ P(n,rn)=\frac{1}{n}+\frac{c(r)}{n^2} +O\left(\frac{1}{n^{2.5-o(1)}}\right) }$ where $c(r)=\sum (1+\frac{i+j}{2r})$ and the sum is over all pairs $(i,j)$ such that $1\leq i,j\leq r^2, ij =r^2,$ and both $r+i, r+j$ divide $rn$. In particular $c(1)=0$ if $n$ is odd, and $2$ if $n$ is even. 2. If $r=t!-1$ and $m=t!(n-t)=(r+1)n-t\cdot t!$, then $$P(n,m)=\frac{1}{n}+\frac{t+c'(r)}{n^2}+O\left(\frac{1}{n^{2.5-o(1)}} \right)$$ where $c'(r)=\sum(1+\frac{i+j-2}{2(r+1)})$ and the sum is over all pairs $(i,j)$ such that $1< i,j\leq (r+1)^2, (i-1)(j-1) =(r+1)^2,$ and both $r+i, r+j$ divide $m$. \[bounds\] Let $n,m,r$ be positive integers such that $rn< m<(r+1)n$, and ${{\delta}}$ a real number such that $0<{{\delta}}\leq 1/4$. Then for a fixed value of $r$ and sufficiently large $n$, $$P(n,m)\leq \frac{\alpha.(r+1)}{m}+\frac{k(r)} {n^2}+ O\left(\frac{1}{n^{2.5-2{{\delta}}}}\right)$$where $k(r) = \frac{4(r+3)^4}{r^2}$ and $$\alpha=\left\{\begin{array}{ll} 1&\mbox{if $r+1$ divides $m$ and $n-\frac{m}{r+1} < \frac{m}{2(r+1)(r+2)-1}$}\\ 0&\mbox{otherwise.} \end{array}\right.$$ \[remark:leadinterms\] \(a) In Theorem \[leadingterms\](a), the leading term $1/n$ is the proportion of $n$-cycles, while the proportion of permutations containing an $(n-t)$-cycle is $\frac{1}{n-t} = \frac{1}{n} + \frac{t}{n^2} + O(\frac{1}{n^3})$, which contributes to the first two terms in Theorem \[leadingterms\](b). 
The terms $\frac{c(r)}{n^2}$ and $\frac{c'(r)}{n^2}$ correspond to permutations in $S_n$ that have two long cycles, and these have lengths $\frac{m} {r+i}$ and $\frac{m}{r+j}$, for some $(i,j)$ satisfying the conditions in Theorem \[leadingterms\] (a) or (b) respectively, (where $m=rn$ in part (a)). \(b) In Theorem \[bounds\], if $r+1$ divides $m$ and $n-m/(r+1)<\frac{m}{2(r+1)(r+2)-1}$, then the term $(r+1)/m$ comes from elements containing a cycle of length $m/(r+1)$. The term $\frac{k(r)}{n^2}$ corresponds to permutations with exactly two ‘large’ cycles. More details are given in Remark \[rem:general\]. Our interest in $P(n,m)$ arose from algorithmic applications concerning finite symmetric groups. For example, $n$-cycles in $S_n$ satisfy the equation $g^n=1$, while elements whose cycle structure consists of a 2-cycle and a single additional cycle of odd length $n-t$, where $t = 2$ or $3$, satisfy the equation $g^{2(n-t)} =1$. For an element $g$ of the latter type we can construct a transposition by forming the power $g^{n-t}$. In many cases the group $S_n$ is not given as a permutation group in its natural representation, and, while it is possible to test whether an element $g$ satisfies one of these equations, it is often impossible to determine its cycle structure with certainty. It is therefore important to have lower bounds on the conditional probability that a random element $g$ has a desired cycle structure, given that it satisfies an appropriate equation. Using Theorem \[leadingterms\], we obtained the following estimates of various conditional probabilities. \[cdnlprobs1\] Let $r, n$ be positive integers and let $g$ be a uniformly distributed random element of $S_n$. Then for a fixed value of $r$ and sufficiently large $n$, the following hold, where $c(r)$ and $c'(r)$ are as in Theorem $\ref{leadingterms}$. 1. 
The conditional probability $P$ that $g$ is an $n$-cycle, given that $|g|$ divides $rn$, satisfies $$\begin{aligned} 1-\frac{c(r)}{n}-O\left(\frac{1} {n^{1.5-o(1)}}\right)&\leq& P \leq 1-\frac{c(r)}{n}+O\left(\frac{1} {n^{2}}\right).\\\end{aligned}$$ 2. If $r=t!-1$, then the conditional probability $P$ that $g$ contains an $(n-t)$-cycle, given that $|g|$ divides $t!(n-t)$, satisfies $$\begin{aligned} 1-\frac{c'(r)}{n}-O\left(\frac{1} {n^{1.5-o(1)}}\right)&\leq& P \leq 1-\frac{c'(r)}{n}+O\left(\frac{1} {n^{2}}\right).\\\end{aligned}$$ We note that Theorem \[leadingterms\] improves the upper bound of $(1+o(1))/n$ obtained in [@Bealsetal03 Theorem 3.7], while Corollary \[cdnlprobs1\] improves the corresponding lower bound of $1-o(1)$ of [@Bealsetal03 Theorem 1.3(a)]. These results have been developed and refined further in [@NiemeyerPraeger05b] to derive explicit ‘non-asymptotic’ bounds that hold for all $n$ and can be applied directly to improve the recognition algorithms for $S_n$ and $A_n$ in [@Bealsetal03]. [**Commentary on our approach**]{} Warlimont’s proof in [@Warlimont78] of an upper bound for $P(n,n)$ and the proof of [@Bealsetal03 Theorem 3.7] by Beals and Seress of an upper bound for $P(n,m)$ for certain values of $m$, rely on dividing the elements of $S_n$ into disjoint unions of smaller sets. Warlimont divides the elements according to how many ‘large’ cycles a permutation contains. Fix a real number $s$ such that $1/2 < s < 1$. We say that a cycle of a permutation in $S_n$ is *$s$-small* if its length is strictly less than $n^s$, and is *$s$-large* otherwise. Beals and Seress divide the elements according to the number of cycles in which three specified points lie. Both strategies are sufficient to prove Warlimont’s result or the slightly more general results of [@Bealsetal03 Theorem 3.7]. However, neither is sufficient to prove the general results in this paper. 
In particular, Warlimont’s approach breaks down when trying to estimate the proportion of elements with no or only one large cycle, which is perhaps why no progress has been made since his paper [@Warlimont78] towards answering Chowla, Herstein and Scott’s original question about the asymptotic behaviour of $P(n,m)$ for large $n$. One of the key ideas that allowed us to generalise Warlimont’s work is the insight that the number of permutations which contain no $s$-large cycles can be estimated by considering their behaviour on three specified points. Another important strategy is our careful analysis of elements containing only one large cycle by separating out divisors of $m$ which are very close to $n$. We regard Theorem \[lem:props\] below as the main outcome of the first stage of our analysis. It is used in the proof of Theorem \[leadingterms\]. The statement of Theorem \[lem:props\] involves the number $d(m)$ of positive divisors of $m$, and the fact that $d(m)=m^{o(1)}$, see Notation \[notation\] (c). It estimates the proportion $P_0(n,m)$ of elements of $S_n$ of order dividing $m$ and having no $s$-large cycles. \[lem:props\] Let $n,m$ be positive integers such that $m\geq n$, and let $s$ be a positive real number such that $1/2<s<1$. Then, with $P_0(n,m)$ as defined above, there is a constant $c$ such that $$P_0(n,m)<\frac{c d(m)m^{2s}}{n^3}=O\left(\frac{m^{2s+o(1)}}{n^3}\right).$$ Theorem \[lem:props\] is proved in Section \[sec:proportions\] and the other results are proved in Section \[sec:stheo\]. Proof of Theorem \[lem:props\] {#sec:proportions} ============================== In this section we introduce some notation that will be used throughout the paper, and we prove Theorem \[lem:props\]. Note that the order $|g|$ of a permutation $g \in S_n$ divides $m$ if and only if the length of each cycle of $g$ divides $m$. Thus $P(n,m)$ is the proportion of elements in $S_n$ all of whose cycle lengths divide $m$. 
As indicated in the introduction, we estimate $P(n,m)$ by partitioning this proportion in various ways. Sometimes the partition is according to the number of large cycle lengths, and at other times it is defined in terms of the cycles containing certain points. We specify these partitions, and give some other notation, below. \[notation\] The numbers $n,m$ are positive integers, and the symmetric group $S_n$ acts naturally on the set $\Omega=\{1,2,\dots,n\}$. 1. $s$ is a real number such that $1/2 < s < 1$. A divisor $d$ of $m$ is said to be $s$-*large* or $s$-*small* if $d \geq m^{s}$ or $d < m^s$, respectively; $D_\ell$ and $D_s$ denote the sets of all $s$-large and $s$-small divisors $d$ of $m$, respectively, such that $d \le n$. 2. For $g\in S_n$ with order dividing $m$, a $g$-cycle of length $d$ is called $s$-*large* or $s$-*small* according as $d$ is an $s$-large or $s$-small divisor of $m$. 3. $d(m)$ denotes the number of positive divisors of $m$ and $\delta$ and $c_\delta$ are positive real numbers such that $\delta < s$ and $d(m) \le c_\delta m^{\delta}$ for all $m \in {\bf{N}}$. 4. The following functions of $n$ and $m$ denote the proportions of elements $g\in S_n$ of order dividing $m$ and satisfying the additional properties given in the last column of the table below. 
--------------------- --------------------------------------------- $P_0(n,m)$ all $g$-cycles are $s$-small ${P_0^{(1)}}(n,m)$ all $g$-cycles are $s$-small and $1,2,3$ lie in the same $g$-cycle, ${P_0^{(2)}}(n,m)$ all $g$-cycles are $s$-small and $1,2,3$ lie in exactly two $g$-cycles ${P_0^{(3)}}(n,m)$ all $g$-cycles are $s$-small and $1,2,3$ lie in three different $g$-cycles $P_1(n,m)$ $g$ contains exactly one $s$-large cycle $P_2(n,m)$ $g$ contains exactly two $s$-large cycles $P_3(n,m)$ $g$ contains exactly three $s$-large cycles ${P_{\geq 4}}(n,m)$ $g$ contains at least four $s$-large cycles --------------------- --------------------------------------------- With respect to part (c) we note, see [@NivenZuckermanetal91 pp. 395-396], that for each $\delta > 0$ there exists a constant $c_\delta > 0$ such that $d(m) \le c_\delta m^\delta$ for all $m \in {\bf{N}}.$ This means that the parameter $\delta$ can be any positive real number and in particular that $d(m) = m^{o(1)}.$ Note that $$\label{eq-pi} P_0(n,m) = {P_0^{(1)}}(n,m) + {P_0^{(2)}}(n,m) + {P_0^{(3)}}(n,m)$$ and $$\label{eq-qi} P(n,m) = P_0(n,m) + P_1(n,m) + P_2(n,m) + P_3(n,m)+{P_{\geq 4}}(n,m).$$ We begin by deriving recursive expressions for the $P_0^{(i)}(n,m)$. \[lem:theps\] Using Notation $\ref{notation}$, the following hold, where we take $P_0(0,m) = 1.$ 1. $\displaystyle{{P_0^{(1)}}(n,m) = \frac{(n-3)!}{n!} \sum_{d \in D_s,\ d\ge 3}{(d-1)(d-2)}P_0(n-d,m),}$ 2. $\displaystyle{ {P_0^{(2)}}(n,m) = \frac{3(n-3)!}{n!}\sum_{\stackrel{d_1, d_2 \in D_s }{2\le d_2,\ d_1+d_2\le n}} (d_2-1)P_0(n-d_1-d_2,m)}$, 3. $\displaystyle{ {P_0^{(3)}}(n,m) = \frac{(n-3)!}{n!} \sum_{\stackrel{d_1,d_2,d_3\in D_s }{d_1+d_2+d_3 \le n}} P_0(n-d_1-d_2 -d_3,m)}$. 
We first compute ${P_0^{(1)}}(n,m)$, the proportion of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which the points $1, 2, 3$ are contained in one $g$-cycle, $C$ say, of length $d$ with $d \in D_s$ and $d\geq 3.$ We can choose the remainder of the support set of $C$ in $\binom{n-3}{d-3}$ ways and then the cycle $C$ in $(d-1)!$ ways. The rest of the permutation $g$ can be chosen in $P_0(n-d,m)(n-d)!$ ways. Thus, for a given $d$, the number of such elements is $(n-3)!(d-1)(d-2)P_0(n-d,m)$. We obtain the proportion ${P_0^{(1)}}(n,m)$ by summing over all $d\in D_s$ with $d\geq3$, and then dividing by $n!$, so part (a) is proved. Next we determine the proportion ${P_0^{(2)}}(n,m)$ of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which one of the points $1, 2, 3$ is contained in a $g$-cycle $C_1$, and the other two of these points are contained in a different $g$-cycle $C_2$. Let $d_1$ and $d_2$ denote the lengths of the cycles $C_1$ and $C_2$, respectively, so $d_1, d_2\in D_s$ and $d_2 \ge 2.$ Firstly we choose the support set of $C_1$ in $\binom{n-3}{d_1-1}$ ways and the cycle $C_1$ in $(d_1-1)!$ ways. Secondly we choose the support set of $C_2$ in $\binom{n-d_1 -2}{d_2-2}$ ways and the cycle $C_2$ in $(d_2-1)!$ ways. Finally, the rest of the permutation $g$ is chosen in $P_0(n-d_1 -d_2,m)(n-d_1-d_2)!$ ways. Thus, for a given pair $d_1, d_2$, the number of these elements is $(n-3)!(d_2-1)P_0(n-d_1-d_2,m)$. Since there are three choices for $C_1\cap\{ 1, 2, 3\}$, we have $$\begin{aligned} {P_0^{(2)}}(n,m) & = & \frac{3(n-3)!}{n!}\sum_{\stackrel{d_1, d_2 \in D_s}{2\le d_2,\ d_1+d_2 \le n}} (d_2-1) P_0(n-d_1-d_2,m). 
\\ \end{aligned}$$ Finally we consider the proportion ${P_0^{(3)}}(n,m)$ of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which each one of the points $1, 2, 3$ is contained in a separate $g$-cycle, say $C_i$ contains $i$ and $C_i$ has length $d_i \in D_s$. We can choose, in order, the support set of $C_1$ in $\binom{n-3}{d_1-1}$ ways and the cycle $C_1$ in $(d_1-1)!$ ways, the support set of $C_2$ in $\binom{n-d_1 -2}{d_2-1}$ ways and the cycle $C_2$ in $(d_2-1)!$ ways, the support set of $C_3$ in $\binom{n-d_1 -d_2 -1}{d_3-1}$ ways and the cycle $C_3$ in $(d_3-1)!$ ways, and the rest of the permutation in $P_0(n-d_1-d_2-d_3,m)(n-d_1-d_2-d_3)!$ ways. The expression for ${P_0^{(3)}}(n,m)$ in part (c) now follows. Next we derive expressions for the $P_i(n,m)$ and ${P_{\geq 4}}(n,m)$. \[lem:qi\] Using Notation $\ref{notation}$, and writing $P_0(0,m)=1$, 1. ${\displaystyle P_0(n,m) = \frac{1}{n}\sum_{d\in D_s} P_0(n-d, m),}$ 2. ${\displaystyle P_1(n,m) = \sum_{d\in D_\ell } \frac{1}{d} P_0(n-d, m)},$ 3. ${\displaystyle P_{2}(n,m) = \frac{1}{2} \sum_{d_1, d_2\in D_\ell } \frac{1}{d_1d_2} P_0(n-d_1-d_2, m)},$ where the sum is over all ordered pairs $(d_1, d_2)$ with $d_1 + d_2 \le n$. 4. ${\displaystyle P_3(n,m) = \frac{1}{6}\sum_{d_1, d_2, d_3 \in D_\ell} \frac{1}{d_1d_2d_3} P_0(n-d_1-d_2 - d_3, m)}$, where the sum is over all ordered triples $(d_1,d_2,d_3)$ with $d_1 + d_2 + d_3 \le n$. 5. ${\displaystyle {P_{\geq 4}}(n,m) \leq \frac{1}{24}\sum_{d_1, d_2, d_3,d_4 \in D_\ell} \frac{1}{d_1d_2d_3d_4} P(n-d_1-d_2 - d_3-d_4, m)}$, where the sum is over all ordered $4$-tuples $(d_1,d_2,d_3,d_4)$ with $d_1 + d_2 + d_3+d_4 \le n$. For each permutation in $S_n$ of order dividing $m$ and all cycles $s$-small, the point 1 lies in a cycle of length $d$, for some $d\in D_s$. For this value of $d$ there are $\binom{n-1} {d-1}(d-1)!$ choices of $d$-cycles containing 1, and $P_0(n-d,m)(n-d)!$ choices for the rest of the permutation. 
Summing over all $d\in D_s$ yields part (a). The proportion of permutations in $S_n$ of order dividing $m$ and having exactly one $s$-large cycle of length $d$ is $\binom{n}{d}(d-1)! P_0(n-d,m) (n-d)!/n!$. Summing over all $d\in D_\ell$ yields part (b). In order to find the proportion of elements in $S_n$ of order dividing $m$ and having exactly two $s$-large cycles we count triples $(C_1, C_2, g)$, where $C_1$ and $C_2$ are cycles of lengths $d_1$ and $d_2$ respectively, $d_1, d_2\in D_\ell$, $g\in S_n$ has order dividing $m$, $g$ contains $C_1$ and $C_2$ in its disjoint cycle representation, and all other $g$-cycles are $s$-small. For a given $d_1, d_2$, we have $\binom{n}{d_1}(d_1-1)!$ choices for $C_1$, then $\binom{n-d_1}{d_2}(d_2-1)!$ choices for $C_2$, and then the rest of the element $g$ containing $C_1$ and $C_2$ can be chosen in $P_0(n-d_1-d_2,m)(n-d_1-d_2)!$ ways. Thus the ordered pair $(d_1,d_2)$ contributes $\frac{n!}{d_1d_2}P_0(n-d_1-d_2,m)(n-d_1-d_2)!$ triples, and each element $g$ with the properties required for part (c) contributes exactly two of these triples. Hence, summing over ordered pairs $d_1, d_2\in D_\ell$ yields (c). Similar counts are used for parts (d) and (e). For $P_3(n,m), {P_{\geq 4}}(n,m)$ we count 4-tuples $(C_1, C_2,C_3, g)$ and $5$-tuples $(C_1,C_2,C_3,C_4,g)$ respectively, such that, for each $i$, $C_i$ is a cycle of length $d_i$ for some $d_i\in D_\ell$, $g\in S_n$ has order dividing $m$, and $g$ contains all the cycles $C_i$ in its disjoint cycle representation. The reason we have an inequality for ${P_{\geq 4}}(n,m)$ is that in this case each $g$ occurring has at least four $s$-large cycles and hence occurs in at least 24 of the 5-tuples, but possibly more. We complete this section by giving a proof of Theorem \[lem:props\]. The ideas for its proof were developed from arguments in Warlimont’s paper [@Warlimont78]. \[newPs\] Let $m\geq n\geq3$, and let $s, {{\delta}}$ be as in Notation [\[notation\]]{}. 
Then $$P_0(n,m) < \frac{(1 + 3c_\delta + c_\delta^2)d(m)m^{2s}}{n(n-1)(n-2)}< \frac{c'd(m)m^{2s}}{n^3}= O\left(\frac{m^{2s+\delta}}{n^3}\right)$$ where, if $n\geq6$, we may take $$c'=\left\{\begin{array}{ll} 2(1 + 3c_\delta + c_\delta^2)&\mbox{for any $m\geq n$}\\ 10&\mbox{if $m\geq c_\delta^{1/(s-\delta)}$.} \end{array}\right.$$ In particular Theorem [\[lem:props\]]{} is true. Moreover, if in addition $n\geq m^s+cn^a$ for some positive constants $a,c$ with $a\leq 1$, then $P_0(n,m)=O\left(\frac{m^{2s+2{{\delta}}}}{n^{1+3a}}\right)$. First assume only that $m\geq n\geq3$. Let $D_s$, and $P_0^{(i)}(n,m)$, for $i = 1, 2, 3$, be as in Notation \[notation\]. By (\[eq-pi\]), $P_0(n,m)$ is the sum of the $P_0^{(i)}(n,m)$. We first estimate ${P_0^{(1)}}(n,m).$ By Lemma \[lem:theps\] (a), and using the fact that $d<m^s$ for all $d\in D_s$, $${P_0^{(1)}}(n,m) \le\frac{(n-3)!}{n!} \sum_{\stackrel{d \in D_s}{d\ge 3}}{(d-1)(d-2)}< \frac{d(m) m^{2s}}{n(n-1)(n-2)}.$$ Similarly, by Lemma \[lem:theps\] (b), $$\begin{aligned} {P_0^{(2)}}(n,m) & < & \frac{3(n-3)!}{n!}\sum_{d_1, d_2 \in D_s} (d_2-1) \le \frac{3d(m)^2m^{s}}{n(n-1)(n-2)}\end{aligned}$$ and by Lemma \[lem:theps\] (c), $$\begin{aligned} {P_0^{(3)}}(n,m) &<& \frac{(n-3)!}{n!} \sum_{d_1,d_2,d_3\in D_s} 1 \le \frac{d(m)^3}{n(n-1)(n-2)}.\\\end{aligned}$$ Thus, using the fact noted in Notation \[notation\] that $d(m) \le c_\delta m^\delta$, $$\begin{aligned} P_0(n,m) & \le & \frac{d(m) \left( m^{2s} +3d(m)m^{s} + d(m)^2\right) }{n(n-1)(n-2)} \\ &\le&\frac{d(m)m^{2s}\left( 1 +3c_\delta m^{\delta-s} + (c_\delta m^{\delta-s})^2\right)}{ n(n-1)(n-2)}< \frac{c'd(m) m^{2s}}{n^3}.\end{aligned}$$ To estimate $c'$ note first that, for $n\geq6$, $n(n-1)(n-2)> n^3/2$. Thus if $n\geq6$ then, for any $m\geq n$ we may take $c'= 2(1 + 3c_\delta + c_\delta^2).$ If $m\geq c_\delta^{1/(s-\delta)}$, then $c_\delta m^{\delta-s}\leq 1$ and so we may take $c'=10$. Theorem \[lem:props\] now follows since $d(m)=m^{o(1)}$. 
Now assume that $n\geq m^s+cn^a$ for some positive constants $c$ and $a$. By Lemma \[lem:qi\], $$P_0(n,m)= \frac{1}{n}\sum_{d\in D_s}P_0(n-d, m).$$ For each $d\in D_s$ we have $m>n-d\geq n-m^s\geq cn^a$, and hence applying Theorem \[lem:props\] (which we have just proved), $$P_0(n-d,m) < \frac{c'd(m)m^{2s}}{(n-d)^3} \leq \frac{c'd(m) m^{2s}}{c^3 n^{3a}}.$$ Thus, $P_0(n,m) \leq \frac{d(m)}{n} \left(\frac{c'd(m)m^{2s}}{c^3n^{3a}} \right)\le \frac{c'c_\delta^2m^{2s + 2\delta}}{c^3n^{1+3a}}$. Proof of Theorem \[leadingterms\] {#sec:stheo} ================================= First we determine the ‘very large’ divisors of $m$ that are at most $n$. \[lem:divat\] Let $r, m$ and $n$ be positive integers such that $rn\le m < (r+1)n$. 1. If $d$ is a divisor of $m$ such that $d \le n$, then one of the following holds: 1. $d=n = \frac{m}{r}$, 2. $d = \frac{m}{r+1}$ so that $\frac{r}{r+1}n \le d < n$, 3. $d \le \frac{m}{r+2}<\frac{r+1}{r+2}n$. 2. Moreover, if $d_1, d_2$ are divisors of $m$ for which $$d_1\le d_2 \le \frac{m}{r+1}\quad \mbox{and}\quad n \ge d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)},$$ then $d_1=\frac{m}{c_1}, d_2= \frac{m}{c_2}$, where $c_1, c_2$ divide $m$, and satisfy $c_2 \le 2r+3$, and either $r+2\leq c_2 \le c_1 < 2(r+1)(r+2)$, or $c_2=r+1$, $c_1\geq r(r+1)$. As $d$ is a divisor of $m$ there is a positive integer $t$ such that $d = \frac{m}{t}$. Now $\frac{m}{t} \le n \le \frac{m}{r}$ and therefore $r \le t.$ If $r = t$ then $r$ divides $m$ and $d = \frac{m}{r} \le n$, and since also $rn \le m$ it follows that $d = \frac{m}{r}=n$ and (i) holds. If $t \ge r+2$ then (iii) holds. Finally, if $t=r+1$, then $d = \frac{m}{r+1}$ and $\frac{r}{r+1}n \le \frac{m}{r+1} < n$ and hence (ii) holds. Now we prove the last assertion. Suppose that $d_1, d_2$ are divisors of $m$ which are at most $ \frac{m}{r+1}$, and such that $d_1\leq d_2$ and $n\geq d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)}$. 
Then, as $d_1, d_2$ divide $m$, there are integers $c_1, c_2$ such that $d_1 = m/c_1$ and $d_2 = m/c_2.$ Since $d_i \le m/(r+1)$ we have $c_i \ge r+1$ for $i = 1,2$, and since $d_1\le d_2$ we have $c_1\ge c_2$. Now $m/r \ge n \ge d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)}$, and hence $1/r \ge 1/c_1 + 1/c_2 > \frac{2r+3}{2(r+1)(r+2)}$. If $c_2 \ge 2(r+2)$ then, as $c_1\ge c_2$, we would have $1/c_1 + 1/c_2 \le 1/(r+2)$, which is not the case. Thus $r+1 \le c_2 \le 2r+3.$ If $c_2\geq r+2$, then $$\frac{1}{c_1}> \frac{2r+3}{2(r+1)(r+2)} - \frac{1}{c_2} \ge \frac{2r+3}{2(r+1)(r+2)} - \frac{1}{r+2} = \frac{1}{2(r+1)(r+2)}$$ and hence $c_1 < 2(r+1)(r+2)$ as in the statement. On the other hand, if $c_2=r+1$, then $$\frac{1}{c_1}\leq \frac{n}{m}-\frac{1}{c_2}\leq \frac{1}{r}-\frac{1}{r+1}=\frac{1}{r(r+1)}$$ so $c_1\geq r(r+1)$. The next result gives our first estimate of an upper bound for the proportion $P(n,m)$ of elements in $S_n$ of order dividing $m$. Recall our observation that the parameter $\delta$ in Notation \[notation\](c) can be any positive real number; in Proposition \[prop:general\] we will restrict to $\delta \le s-\frac{1}{2}.$ Note that the requirement $rn\leq m<(r+1)n$ implies that $\frac{n}{r+1}\leq n-\frac{m}{r+1}\leq \frac{m}{r(r+1)}$; the first case of Definition \[def:kr\] (b) below requires an upper bound of approximately half this quantity. \[def:kr\] Let $r,\, m,\, n$ be positive integers such that $rn\le m < (r+1)n$. Let $1/2<s\leq 3/4$ and $0<{{\delta}}\leq s-\frac{1}{2}$. - Let $\alpha = \begin{cases} 1 & \mbox{if\ } m=rn,\\ 0 & \mbox{otherwise.} \end{cases}$ - Let $\alpha' = \begin{cases} 1 & \mbox{if\ } (r+1) \mbox{\ divides\ } m \ \mbox{and\ }n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}, \\ 0 & \mbox{otherwise.} \end{cases}$ - Let $t(r,m,n)$ denote the number of divisors $d$ of $m$ with $\frac{m}{2r+3} \leq d\leq\frac{m}{r+1}$ such that there exists a divisor $d_0$ of $m$ satisfying - $d+d_0\leq n$ and - $\frac{m}{2(r+1)(r+2)}< d_0\leq d$. 
- Let $k(r,m,n)=t(r,m,n)\frac{2(r+1)(r+2)(2r+3)}{r^2}.$ \[prop:general\] Let $r,\, m,\, n, s$ and $\delta$ be as in Definition [\[def:kr\]]{}. Then, for a fixed value of $r$ and sufficiently large $n$, $$P(n,m) \le \frac{\alpha}{n}+\frac{\alpha'.(r+1)}{m}+\frac{k(r,m,n)}{n^2}+ O\left(\frac{1}{n^{1+2s-2{{\delta}}}} \right),$$ where $\alpha, \alpha', t(r, m, n)$ and $k(r, m, n)$ are as in Definition $\ref{def:kr}.$ Moreover, $t(r,m,n) \le r+3$ and $k(r,m,n) \le \frac{4(r+3)^4}{r^2} $. \[rem:general\] \(a) The term $\frac{1}{n}$, which occurs if and only if $m=rn$, corresponds to the $n$-cycles in $S_n$, and is the exact proportion of these elements. We refine the estimate for $P(n,rn)$ in Theorem \[rn\] below. \(b) The term $\frac{r+1}{m}$, which occurs only if $r+1$ divides $m$ and $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)}$, corresponds to permutations with order dividing $m$ and having either one or two $s$-large cycles, with one (the larger in the case of two cycles) of length $\frac{m}{r+1}$. The proportion of elements of $S_n$ containing a cycle of length $\frac{m}{r+1}$ is $\frac{r+1}{m}$, and if there exists a positive integer $d\leq n-\frac{m}{r+1}$ such that $d$ does not divide $m$, then some of these elements have a $d$-cycle and hence do not have order dividing $m$. Thus $\frac{r+1}{m}$ may be an over-estimate for the proportion of elements in $S_n$ (where $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)}$) having order dividing $m$, having exactly one $s$-large cycle of length $\frac{m}{r+1}$, and possibly one additional $s$-large cycle of length dividing $m$. However it is difficult to make a more precise estimate for this term that holds for all sufficiently large $m,n$. In Theorem \[rn\] we treat some special cases where this term either does not arise, or can be determined precisely. \(c) The term $\frac{k(r,m,n)}{n^2}$ arises as follows from permutations that have exactly two $s$-large cycles of lengths dividing $m$. 
For each of the $t(r,m,n)$ divisors $d$ of $m$ as in Definition \[def:kr\](c), let $d_0(d)$ be the largest of the divisors $d_0$ satisfying Definition \[def:kr\](c)(i),(ii). Note that $d_0(d)$ depends on $d$. Then $k(r,m,n)/n^2$ is an upper bound for the proportion of permutations of order dividing $m$ and having two $s$-large cycles of lengths $d$ and $d_0(d)$, for some $d$ satisfying $\frac{m}{2r+3} \leq d\leq\frac{m}{r+1}$. As in (b) this term may be an over-estimate, not only for the reason given there, but also because lower bounds for the cycle lengths $d, d_0(d)$ were used to define $k(r,m,n)$. Indeed in the case $m=rn$ we are able to obtain the exact value of the coefficient of the $\frac{1}{n^2}$ summand. We divide the estimation of $P(n,m)$ into five subcases. Recall that, by (\[eq-qi\]), $P(n,m)$ is the sum of ${P_{\geq 4}}(n,m)$ and the $P_i(n,m)$, for $i=0,1,2,3$, where these are as defined in Notation \[notation\]. We will use the recursive formulae for ${P_{\geq 4}}(n,m)$ and the $P_i(n,m)$ in Lemma \[lem:qi\], together with the expressions for $P_0(n,m)$ in Theorem \[lem:props\] and Lemma \[newPs\], to estimate these five quantities. Summing these estimates will give, by (\[eq-qi\]), our estimate for $P(n,m)$. We also use the information about divisors of $m$ in Lemma \[lem:divat\]. First we deal with $P_0(n,m)$. Since $r$ is fixed, it follows that, for sufficiently large $n$ (and hence sufficiently large $m$), we have $m^s \leq \frac{m}{r+2}$, which is less than $\frac{(r+1)n}{r+2}=n-\frac{n}{r+2}$. Thus $n>m^s+\frac{n}{r+2}$, and applying Lemma \[newPs\] with $a=1, c=\frac{1}{r+2}$, it follows that $$P_0(n,m)=O\left(\frac{m^{2s+2{{\delta}}}}{n^4}\right)=O\left(\frac{1}{n^{4-2s- 2{{\delta}}}}\right)\leq O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$$ since $4-2s-2{{\delta}}\geq 1+2s-2{{\delta}}$ when $s\leq 3/4$. Next we estimate $P_3(n,m)$ and ${P_{\geq 4}}(n,m)$. 
By Lemma \[lem:qi\], the latter satisfies ${P_{\geq 4}}(n,m)\leq \frac{1}{24}\sum\frac{1}{d_1d_2d_3d_4}$, where the summation is over all ordered 4-tuples of $s$-large divisors of $m$ whose sum is at most $n$. Thus ${P_{\geq 4}}(n,m)\leq \frac{1}{24}\,\frac{d(m)^4}{m^{4s}}= O\left(\frac{1}{n^{4s-4{{\delta}}}}\right)$. Also $$P_3(n,m)= \frac{1}{6}\sum \frac{1}{d_1d_2d_3}P_0(n-d_1-d_2-d_3,m),$$ where the summation is over all ordered triples of $s$-large divisors of $m$ whose sum is at most $n$. For such a triple $(d_1,d_2,d_3)$, if each $d_i\leq\frac{m} {4(r+1)}$, then $n-\sum d_i\geq n-\frac{3m}{4(r+1)}>\frac{n}{4}$, and so by Lemma \[newPs\], $P_0(n-\sum d_i,m)=O\left(\frac{m^{2s+{{\delta}}}}{n^{3}} \right)$. Thus the contribution of triples of this type to $P_3(n,m)$ is at most $O\left(\frac{d(m)^3m^{2s+{{\delta}}}}{m^{3s}n^3} \right)=O\left(\frac{1}{n^{3+s-4{{\delta}}}}\right)$. For each of the remaining triples, the maximum $d_i$ is greater than $\frac{m}{4(r+1)}$ and in particular there is a bounded number of choices for the maximum $d_i$. Thus the contribution of the remaining triples to $P_3(n,m)$ is at most $O\left(\frac{d(m)^2}{m^{1+2s}} \right)=O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. It follows that $$P_3(n,m)+{P_{\geq 4}}(n,m)=O\left(\frac{1}{n^{x_3}}\right),$$ where $x_3=\min\{4s-4{{\delta}},3+s-4{{\delta}},1+2s-2{{\delta}}\}=1+2s-2{{\delta}}$ (using the fact that ${{\delta}}\leq s-\frac{1}{2}\leq \frac{1}{4}$). Now we estimate $P_2(n,m)$. By Lemma \[lem:qi\], $$P_{2}(n,m)= \frac{1}{2}\sum \frac{1}{d_1d_2}P_0(n-d_1-d_2,m),$$ where the summation is over all ordered pairs of $s$-large divisors of $m$ whose sum is at most $n$. We divide these pairs $(d_1,d_2)$ into two subsets. The first subset consists of those for which $n- d_1-d_2\geq n^\nu$, where $\nu=(1+2s+{{\delta}})/3$. Note that $\nu<1$ since $\nu\leq s -\frac{1}{6}<1$ (because ${{\delta}}\leq s-\frac{1}{2}$ and $s\leq \frac{3}{4}$). 
For a pair $(d_1,d_2)$ such that $n- d_1-d_2\geq n^\nu$, by Lemma \[newPs\], $P_0(n-d_1-d_2,m)=O\left(\frac{m^{2s+{{\delta}}}}{n^{3\nu}} \right)$. Thus the total contribution to $P_{2}(n,m)$ from pairs of this type is at most $O\left(\frac{d(m)^2m^{2s+{{\delta}}}}{m^{2s}n^{3\nu}} \right)=O\left(\frac{1}{n^{3\nu-3{{\delta}}}}\right)=O\left(\frac{1}{n^{1+2s-2{{\delta}}}} \right)$. Now consider pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$. Since each $d_i<n\leq m/r$, it follows that each $d_i\leq m/(r+1)$. Since $\nu<1$, for sufficiently large $n$ (and hence sufficiently large $m$) we have $n^\nu\leq \left(\frac{m}{r} \right)^\nu<\frac{m}{2(r+1)(r+2)}$. Thus, for each of the pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$, we have $d_1+d_2>n-n^\nu>\frac{m}{r+1}- \frac{m}{2(r+1)(r+2)}=\frac{m(2r+3)}{2(r+1)(r+2)}$, and hence one of $(d_1,d_2)$, $(d_2,d_1)$ (or both if $d_1=d_2$) satisfies the conditions of Lemma \[lem:divat\] (b). Thus, by Lemma \[lem:divat\] (b), it follows that if $d_1 \le d_2$, then either $(d_0,d):=(d_1, d_2)$ satisfies the conditions of Definition \[def:kr\](c), or $d_2=\frac{m}{r+1}$ and $d_1\leq \frac{m}{2(r+1)(r+2)}$. Let $P_2'(n,m)$ denote the contribution to $P_2(n,m)$ from all the pairs $(d_1,d_2)$ where $\{d_1,d_2\}=\{ \frac{m}{r+1},d_0\}$ and $d_0 \leq \frac{m}{2(r+1)(r+2)}$. For the other pairs, we note that there are $t(r,m,n) \le r+3$ choices for the larger divisor $d$. Consider a fixed $d\leq \frac{m}{r+1}$, say $d = \frac{m}{c}.$ Then each divisor $d_0$ of $m$, such that $\frac{m}{2(r+1)(r+2)} < d_0 \le d$ and $d + d_0 \le n$, is equal to $\frac{m}{c_0}$ for some $c_0$ such that $c \le c_0 < 2(r+1)(r+2)$. 
Let $d_0(d) = \frac{m}{c_0}$ be the largest of these divisors $d_0.$ By Lemma \[lem:divat\](b), the combined contribution to $P_2(n,m)$ from the ordered pairs $(d,d_0(d))$ and $(d_0(d),d)$ is (since $d$ and $d_0(d)$ may be equal) at most $$\frac{1}{dd_0(d)} < \frac{2r+3}{m} \cdot \frac{2(r+1)(r+2)}{m} = \frac{2(r+1)(r+2)(2r+3)}{m^2}.$$ (Note that $\frac{1}{dd_0(d)} \ge \frac{(r+1)^2}{m^2} > \frac{1}{n^2}$.) If $d_0=\frac{m}{c'}$ is any other divisor of this type and $d_0 < d_0(d)$, then $c_0+1 \le c' < 2(r+1)(r+2)$, and so $n-d-d_0=(n-d-d_0(d))+d_0(d)-d_0$ is at least $$d_0(d)-d_0=\frac{m}{c_0} - \frac{m}{c'} \ge\frac{m}{c_0} - \frac{m}{c_0+1}= \frac{m}{c_0(c_0+1)} > \frac{m}{4(r+1)^2(r+2)^2}.$$ By Lemma \[newPs\], the contribution to $P_2(n,m)$ from the pairs $(d,d_0)$ and $(d_0,d)$ is $O( \frac{1}{m^2}\cdot \frac{m^{2s+\delta}}{m^3}) = O(\frac{1}{n^{5-2s-\delta}})$. Since there are $t(r,m,n) \le r+3$ choices for $d$, and a bounded number of divisors $d_0$ for a given $d$, the contribution to $P_2(n,m)$ from all the pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$ is at most $$P_2'(n,m) + t(r,m,n) \frac{2(r+1)(r+2)(2r+3)}{n^2r^2}+ O\left(\frac{1}{n^{5-2s-{{\delta}}}} \right).$$ Thus $$\begin{aligned} P_2(n,m)&\le& P_2'(n,m) + \frac{2t(r,m,n)(r+1)(r+2)(2r+3)}{n^2r^2}+ O\left(\frac{1}{n^{x_2}}\right) \\ &=& P_2'(n,m) +\frac{k(r,m,n)}{n^2} + O\left(\frac{1}{n^{x_2}}\right)\end{aligned}$$ with $x_2=\min\{1+2s-2{{\delta}},5-2s-{{\delta}}\}=1+2s-2{{\delta}}$. Note that $$k(r,m,n)\leq (r+3) \frac{2(r+1)(r+2)(2r+3)}{r^2}=4r^2+30r+80+\frac{90}{r}+\frac{36}{r^2}$$ which is less than $\frac{4(r+3)^4}{r^2}$. Finally we estimate $P_1(n,m)+P'_2(n,m)$. By Lemma \[lem:qi\], $P_1(n,m)= \sum \frac{1}{d}P_0(n-d,m)$, where the summation is over all $s$-large divisors $d$ of $m$ such that $d\leq n$, and we take $P_0(0,m)=1$. Note that $d\leq n\leq \frac{m}{r}$, so each divisor $d=\frac{m}{c}$ for some $c\geq r$. 
In the case where $m=rn$, that is, the case where $n$ divides $m$ (and only in this case), we have a contribution to $P_1(n,m)$ of $\frac{1}{n}$ due to $n$-cycles. If $d<n$ then $d=\frac{m}{c}$ with $c\geq r+1$. Next we consider all divisors $d$ of $m$ such that $d\leq \frac{m}{r+2}$. For each of these divisors, $n-d\geq n - \frac{m}{r+2}\ge n-\frac{(r+1)n}{r+2} =\frac{n}{r+2}$. Thus by Lemma \[newPs\], $P_0(n-d,m) = O\left(\frac{m^{2s + \delta}}{n^{3}}\right) = O\left(\frac{1}{n^{3-2s-\delta}}\right)$. The number of $d$ satisfying $d\geq \frac{m}{2(r+1)}$ is bounded in terms of $r$ (which is fixed), and hence the contribution to $P_1(n,m)$ from all the divisors $d$ satisfying $\frac{m}{2(r+1)}\leq d\leq \frac{m}{r+2}$ is at most $O\left(\frac{1}{m}\,\frac{1}{n^{3-2s-\delta}}\right)=O\left( \frac{1}{n^{4-2s-\delta}}\right)$. On the other hand, if $m^s\leq d <\frac{m}{2(r+1)}$, then $n-d>n - \frac{(r+1)n}{2(r+1)} =\frac{n}{2}$. Now since $r$ is fixed and $s<1$, for sufficiently large $n$, we have $m^s<\frac{n} {4}$, and so $n-d> m^s +\frac{n}{4}$. Then, by Lemma \[newPs\] (applied with $a=1$ and $c=\frac{1}{4}$), $P_0(n-d,m)= O\left(\frac{m^{2s + 2\delta}}{(n-d)^{4}}\right) = O\left(\frac{1}{n^{4-2s-2\delta}}\right)$, and the contribution to $P_1(n,m)$ from all $s$-large divisors $d< \frac{m}{2(r+1)}$ is at most $\frac{d(m)}{m^s}O\left(\frac{1}{n^{4-2s-2\delta}}\right)= O\left(\frac{1}{n^{4-s-3\delta}}\right)$. Thus, noting that $\min\{4-2s-{{\delta}}, 4-s-3{{\delta}}\}\geq 1+2s-2{{\delta}}$, the contribution to $P_1(n,m)$ from all $s$-large divisors $d$ of $m$ such that $d\leq\frac{m}{r+2}$ is $O\left(\frac{1}{n^{1+2s-2\delta}}\right)$. By Lemma \[lem:divat\], the only divisor not yet considered is $d=\frac{m} {r+1}$ and this case of course arises only when $r+1$ divides $m$. Suppose then that $r+1$ divides $m$. We must estimate the contribution to $P_1(n,m)+P'_2(n,m)$ from elements containing a cycle of length $d=\frac{m}{r+1}$. 
The contribution to $P_1(n,m)+P'_2(n,m)$ due to the divisor $d=\frac{m}{r+1}$ is $\frac{r+1}{m}P_0(n-\frac{m}{r+1},m)+\frac{r+1}{m}\sum_{d_0}\frac{1}{d_0} P_0(n-\frac{m}{r+1}-d_0,m)$, where the summation is over all $s$-large $d_0\leq \frac{m}{2(r+1)(r+2)}$. Suppose first that $n-\frac{m}{r+1}\geq \frac{m}{2(r+1)(r+2)-1}$, so that for each $d_0$, $n-\frac{m}{r+1}-d_0>\frac{m}{4(r+1)^2(r+2)^2}$. Then, by Lemma \[newPs\], the contribution to $P_1(n,m)+P'_2(n,m)$ is at most $$O\left(\frac{1}{m}\cdot\frac{m^{2s+{{\delta}}}}{m^{3}}\right) +d(m)\, O\left(\frac{1}{m^{1+s}}\cdot\frac{m^{2s+{{\delta}}}}{m^{3}}\right) =O\left(\frac{1}{n^{4-2s-{{\delta}}}}\right)$$ and this is $ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$ since $4-2s-{{\delta}}\geq 1+2s-2{{\delta}}$. Finally suppose that $n-\frac{m}{r+1} < \frac{m}{2(r+1)(r+2)-1}$. In this case we estimate the contribution to $P_1(n,m)+P'_2(n,m)$ from $d=\frac{m}{r+1}$ by the proportion $\frac{1}{d}=\frac{r+1}{m}$ of elements of $S_n$ containing a $d$-cycle (recognising that this is usually an over-estimate). Putting these estimates together we have $$P_1(n,m)+P'_2(n,m)\leq\frac{\alpha}{n}+\frac{\alpha'(r+1)}{m}+ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right),$$ where $\alpha=1$ if $m=rn$ and is $0$ otherwise, and $\alpha'=1$ if $r+1$ divides $m$ and $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}$, and is 0 otherwise. The result now follows using (\[eq-qi\]) and the estimates we have obtained for each of the summands. It is sometimes useful to separate out the results of Proposition \[prop:general\] according to the values of $m,n$. We do this in the theorem below, and also obtain in parts (a) and (b) exact asymptotic expressions for $P(n,rn)$ and $P(n,t!(n-t))$ where $r, t$ are bounded and $n$ is sufficiently large. For this it is convenient to define two sets of integer pairs.
\[T\][For positive integers $r$ and $m$, define the following sets of integer pairs: $$\mathcal{T}(r)=\{(i,j)\,|\, 1\leq i,j\leq r^2, ij =r^2,\ \mbox{and both}\ r+i, r+j\ \mbox{divide}\ m\}$$ and $\mathcal{T}'(r)=\{(i,j)\,|\, 1< i,j\leq (r+1)^2, (i-1)(j-1) =(r+1)^2,$ and both $r+i, r+j\ \mbox{divide}\ m\}. $ ]{} \[rn\] Let $n,m,r$ be positive integers such that $rn\leq m<(r+1)n$. Let $1/2<s\leq 3/4$ and $0<{{\delta}}\leq s-1/2$. Then, the following hold for $r$ fixed and sufficiently large $n$ (where the sets $\mathcal{T}(r)$ and $\mathcal{T}'(r)$ are as in Definition [\[T\]]{}). 1. If $m=rn$, then ${\displaystyle P(n,m)=\frac{1}{n}+\frac{c(r)}{n^2} +O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)}$, where\ ${\displaystyle c(r)=\sum_{(i,j)\in\mathcal{T}(r)}(1+\frac{i+j}{2r}).} $ In particular $c(1)=0$ if $n$ is odd, and $2$ if $n$ is even. 2. If $r=t!-1$ and $m=t!(n-t)=(r+1)n-t\cdot t!$, then\ ${\displaystyle P(n,m)=\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}+O\left(\frac{1}{n^{1+2s-2{{\delta}}}} \right)},$ where\ ${\displaystyle c'(r)=\sum_{(i,j)\in\mathcal{T}'(r)}(1+\frac{i+j-2}{2(r+1)})}$. 3. If $rn<m$, then ${\displaystyle P(n,m)\leq \frac{\alpha'(r+1)}{m}+\frac{k(r,m,n)} {n^2}+ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)}$, where $\alpha'$ and $k(r,m,n)$ are as in Definition [\[def:kr\]]{}. Part (c) follows immediately from Proposition \[prop:general\]. Next we prove part (a). Suppose that $m=rn$. If $r+1$ divides $m$ then we have $n-\frac{m}{r+1}= \frac{m}{r(r+1)}>\frac{m}{2(r+1)(r+2)-1}$. It follows from Proposition \[prop:general\] that $P(n,m)\leq\frac{1}{n}+\frac{k(r,m,n)} {n^2}+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. To complete the proof we refine the argument given in the proof of Proposition \[prop:general\] for $P_2(n,m)$ which gave rise to the term $\frac{k(r,m,n)}{n^2}$.
The elements contributing to this term were those with exactly two $s$-large cycles, where one of these cycles had length $d=\frac{m}{r+i}$ for some $i$ such that $1\leq i\leq r+3$, and the other had length $d_0(d)=\frac{m}{r+j}$ for some $j$ such that $r+i\leq r+j < 2(r+1)(r+2)$ and $d + d_0(d) \le n.$ Moreover, for a given value of $d$, the value of $d_0(d)$ was the largest integer with these properties. Since we now assume that $m=rn$ we have $$d+d_0(d)=\frac{m(2r+i+j)}{(r+i)(r+j)}\leq n=\frac{m}{r}$$ that is, $r(2r+i+j)\leq(r+i)(r+j)$, which is equivalent to $r^2\leq ij$. If $d+d_0(d)$ is strictly less than $n$, that is to say, if $r^2<ij$, and thus $ij-r^2\geq1$, then $$n-d-d_0(d)=n-\frac{rn(2r+i+j)}{(r+i)(r+j)}=\frac{n(ij-r^2)}{(r+i)(r+j)}\geq \frac{n}{(r+i)(r+j)},$$ and since $i\leq r+3$ and $r+j<2(r+1)(r+2)$ we have $\frac{n}{(r+i)(r+j)} \geq \frac{n}{2(r+1)(r+2)(2r+3)}$. It now follows from Lemma \[newPs\] that the contribution to $P_2(n,m)$ from all ordered pairs $(d,d_0(d))$ and $(d_0(d),d)$ with $d,d_0(d)$ as above and $n>d+d_0(d)$ is $O\left( \frac{1}{n^2}\,\frac{m^{2s+{{\delta}}}}{n^3}\right)=O\left(\frac{1}{n^{5-2s-{{\delta}}}} \right)\leq O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Thus when $m=rn$, the only contributions to the $O\left(\frac{1}{n^2}\right)$ term come from pairs $(\frac{m}{r+i},\frac{m}{r+j})$ such that $r^2=ij$ and $1\leq i,j\leq r^2$. (Note that we no longer assume $i\leq j$.) These are precisely the pairs $(i,j)\in\mathcal{T}(r)$. For such a pair $(\frac{m}{r+i},\frac{m}{r+j})$, the contribution to $P_2(n,m)$ is $$\frac{1}{2}\cdot\frac{r+i}{m}\cdot\frac{r+j}{m}= \frac{r^2+r(i+j)+ij}{2n^2r^2}=\frac{1}{n^2}(1+\frac{i+j}{2r})$$ (since $ij=r^2$). Thus $P(n,m)\leq\frac{1}{n}+\frac{c(r)}{n^2} +O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Moreover, for each $(i,j)\in\mathcal{T}(r)$, each permutation in $S_n$ having exactly two cycles of lengths $\frac{m}{r+i}$ and $\frac{m}{r+j}$ is a permutation of order dividing $m$. 
Thus $P(n,rn)\geq \frac{1}{n}+\frac{c(r)}{n^2}$, and the main assertion of part (a) is proved. We note that if $r=1$, then the only possible pair in $\mathcal{T}(1)$ is $(1,1)$, and for this pair to lie in the set we require that $r+1=2$ divides $m=n$. Thus $c(1)$ is 0 if $n$ is odd, and is 2 if $n$ is even. Finally we prove part (b) where we have $r=t!-1$ and $m=t!(n-t)$. Then $rn=(t!-1)n=m+t\cdot t!-n$ which is less than $m$ if $n>t\cdot t!$. Also $(r+1)n=t!\,n>m$. Thus, for sufficiently large $n$, we have $rn<m<(r+1)n$. Moreover, $r+1$ divides $m$ and $n-\frac{m}{r+1}=n-(n-t)=t$, which for sufficiently large $n$ is less than $\frac{n-t}{3t!}<\frac{m}{2(r+1)(r+2)-1}$. It now follows from part (c) that $P(n,t!(n-t))\leq \frac{1}{n-t}+\frac{k(r,m,n)}{n^2}+O\left(\frac{1} {n^{1+2s-2{{\delta}}}}\right)$. Our next task is to improve the coefficient of the $O(\frac{1}{n^2})$ term using a similar argument to the proof of part (a). The elements contributing to this term have exactly two $s$-large cycles of lengths $d=\frac{m}{r+i}$ and $d_0(d)=\frac{m}{r+j}$, with $r+i,r+j\leq (r+1)(r+2)$ and $$d+d_0(d)=\frac{m(2r+i+j)}{(r+i)(r+j)}\leq n=\frac{m}{r+1}+t.$$ This is equivalent to $(r+1)(2r+i+j)\leq(r+i)(r+j)+\frac{t(r+1)(r+i)(r+j)}{m}$, and hence, for sufficiently large $n$ (and hence sufficiently large $m$), $(r+1)(2r+i+j)\leq (r+i)(r+j)$. This is equivalent to $(i-1)(j-1)\geq (r+1)^2$. If $(i-1)(j-1)> (r+1)^2$, then $$\begin{aligned} n-d-d_0(d)&=&(t+\frac{m}{r+1}) - \frac{m(2r+i+j)}{(r+i)(r+j)}\\ &=&t+\frac{m((i-1)(j-1)-(r+1)^2)}{(r+1)(r+i)(r+j)}\\ &>&\frac{rn}{(r+1)^3(r+2)^2}.\end{aligned}$$ As for part (a), the contribution to $P_2(n,m)$ from all pairs $(\frac{m}{r+i},\frac{m}{r+j})$ with $(i-1)(j-1)> (r+1)^2$ is $O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Thus the only contributions to the $O\left(\frac{1}{n^2}\right)$ term come from pairs $(d,d_0(d))=(\frac{m}{r+i},\frac{m}{r+j})$ such that $(r+1)^2=(i-1)(j-1)$ and $1\leq i,j\leq (r+1)^2$.
These are precisely the pairs $(i,j)\in\mathcal{T}'(r)$. For each of these pairs we have $r^2+2r=ij-i-j$ and the contribution to $P_2(n,m)$ is $$\begin{aligned} \frac{1}{2dd_0(d)}&=&\frac{(r+i)(r+j)}{2m^2}= \frac{r^2+r(i+j)+ij}{2(r+1)^2(n-t)^2}\\ &=&\frac{(r+1)(2r+i+j)}{2(r+1)^2(n-t)^2}= \frac{1}{(n-t)^2}\left(1+\frac{i+j-2}{2(r+1)}\right).\end{aligned}$$ Thus $P(n,m)\leq\frac{1}{n-t}+\frac{c'(r)}{n^2} +O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. On the other hand, each permutation in $S_n$ that contains an $(n-t)$-cycle has order dividing $t!(n-t)=m$, and the proportion of these elements is $\frac{1}{n-t}$. Also, for each $(i,j)\in\mathcal{T}'(r)$, each permutation in $S_n$ having exactly two cycles of lengths $\frac{m}{r+i}$ and $\frac{m}{r+j}$, and inducing any permutation on the remaining $n-\frac{m}{r+i}-\frac{m}{r+j}=t$ points, is a permutation of order dividing $m=t!(n-t)$, and the proportion of all such elements is $\frac{c'(r)}{(n-t)^2}$. Thus $P(n,m)\geq \frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}$, and the assertion of part (b) is proved. It is a simple matter now to prove Theorems \[leadingterms\] and \[bounds\]. The first theorem follows from Theorem \[rn\] (a) and (b) on setting $s=3/4$ and allowing $\delta \rightarrow 0$. Note that $\frac{1}{n-t} = \frac{1}{n} + \frac{t}{n^2} + O(\frac{1}{n^3})$ and $\frac{1}{(n-t)^2} = \frac{1}{n^2} + O(\frac{1}{n^3})$. For the second theorem, again we set $s=3/4$ in Theorem \[rn\](c). By Proposition \[prop:general\] we have $k(r,m,n) \le \frac{4(r+3)^4}{r^2}$. If we define $k(r) = \frac{4(r+3)^4}{r^2}$ the result follows. Finally we derive the conditional probabilities in Corollary \[cdnlprobs1\]. Let $r,\, n$ be positive integers with $r$ fixed and $n$ ‘sufficiently large’, and let $g$ be a uniformly distributed random element of $S_n$. 
First set $m = rn.$ Let $A$ denote the event that $g$ is an $n$-cycle, and let $B$ denote the event that $g$ has order dividing $m$, so that the probability ${{\rm{Prob}}}(B)$ is $P(n,m)$. Then, by elementary probability theory, we have $$\begin{aligned} {{\rm{Prob}}}( A \mid B) &= &\frac{{{\rm{Prob}}}( A \cap B)} {{{\rm{Prob}}}(B)} = \frac{{{\rm{Prob}}}( A )} {{{\rm{Prob}}}(B)} = \frac{\frac{1}{n}}{P(n,m)}. \\\end{aligned}$$ By Theorem \[leadingterms\], $\frac{1}{n}+\frac{c(r)}{n^2}<P(n,m)=\frac{1}{n}+\frac{c(r)}{n^2}+O\left(\frac{1} {n^{2.5-o(1)}}\right)$, and hence $$\begin{aligned} 1-\frac{c(r)}{n}-O\left(\frac{1} {n^{1.5-o(1)}}\right)&\leq& {{\rm{Prob}}}(A \mid B) \leq 1-\frac{c(r)}{n}+O\left(\frac{1} {n^{2}}\right).\\\end{aligned}$$ Now suppose that $r=t!-1$ for some integer $t\geq2$, and let $A$ denote the event that $g$ contains an $(n-t)$-cycle, so that ${{\rm{Prob}}}(A)=\frac{1}{n-t}$. Then, with $B$ as above for the integer $m:=t!(n-t)$, we have $$\begin{aligned} {{\rm{Prob}}}( A \mid B) &= &\frac{{{\rm{Prob}}}( A \cap B)} {{{\rm{Prob}}}(B)} = \frac{{{\rm{Prob}}}( A )} {{{\rm{Prob}}}(B)} = \frac{\frac{1}{n-t}}{P(n,m)}. \\\end{aligned}$$ By Theorem \[rn\](b), $\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}<P(n,m)=\frac{1}{n-t}+ \frac{c'(r)}{(n-t)^2}+O\left(\frac{1} {n^{2.5-o(1)}}\right)$, and hence $$\begin{aligned} 1-\frac{c'(r)}{n}-O\left(\frac{1} {n^{1.5-o(1)}}\right)&\leq& {{\rm{Prob}}}(A \mid B) \leq 1-\frac{c'(r)}{n}+O\left(\frac{1} {n^{2}}\right).\end{aligned}$$ This research was supported by ARC Discovery Grants DP0209706 and DP0557587. The authors thank the referee for carefully reading the submitted version and advice on the paper.
--- abstract: 'This paper introduces a novel feature detector based only on information embedded inside a CNN trained on standard tasks (e.g. classification). While previous works already show that the features of a trained CNN are suitable descriptors, we show here how to extract the feature locations from the network to build a detector. This information is computed from the gradient of the feature map with respect to the input image. This provides a saliency map with local maxima on relevant keypoint locations. Contrary to recent CNN-based detectors, this method requires neither supervised training nor finetuning. We evaluate how repeatable and how ‘matchable’ the detected keypoints are with the repeatability and matching scores. Matchability is measured with a simple descriptor introduced for the sake of the evaluation. This novel detector reaches similar performances on the standard evaluation HPatches dataset, as well as comparable robustness against illumination and viewpoint changes on Webcam and photo-tourism images. These results show that a CNN trained on a standard task embeds feature location information that is as relevant as when the CNN is specifically trained for feature detection.' author: - Assia Benbihi - Matthieu Geist - 'C[é]{}dric Pradalier' bibliography: - '../egbib.bib' title: 'ELF: Embedded Localisation of Features in pre-trained CNN' --- Introduction ============ ![(1-6) Embedded Detector: Given a CNN trained on a standard vision task (classification), we backpropagate the feature map back to the image space to compute a saliency map. It is thresholded to keep only the most informative signal and keypoints are the local maxima. (7-8): simple-descriptor.[]{data-label="fig:pipeline"}](img.png){width="\linewidth"} Feature extraction, description and matching is a recurrent problem in vision tasks such as Structure from Motion (SfM), visual SLAM, scene recognition and image retrieval. 
The extraction consists in detecting image keypoints; the matching then pairs keypoints with the closest descriptors. Even though hand-crafted solutions, such as SIFT [@lowe2004distinctive], prove to be successful, recent breakthroughs on local feature detection and description rely on supervised deep-learning methods [@detone18superpoint; @ono2018lf; @yi2016lift]. They detect keypoints on saliency maps learned by a Convolutional Neural Network (CNN), then compute descriptors using another CNN or a separate branch of it. They all require strong supervision and complex training procedures: [@yi2016lift] requires ground-truth matching keypoints to initiate the training, [@ono2018lf] needs the ground-truth camera pose and depth maps of the images, [@detone18superpoint] circumvents the need for ground-truth data by using synthetic data but requires a heavy domain adaptation to transfer the training to realistic images. All these methods require a significant learning effort. In this paper, we show that a trained network already embeds enough information to build a State-of-the-Art (SoA) detector and descriptor. The proposed method for local feature detection needs only a CNN trained on a standard task, such as ImageNet [@deng2009imagenet] classification, and no further training. The detector, dubbed ELF, relies on the features learned by such a CNN and extracts their locations from the feature map gradients. Previous work already highlights that trained CNN features are relevant descriptors [@fischer2014descriptor] and recent works [@balntas2016learning; @han2015matchnet; @simo2015discriminative] specifically train CNNs to produce features suitable for keypoint description. However, no existing approach uses a pre-trained CNN for feature detection. ELF computes the gradient of a trained CNN feature map *w.r.t* the image: this outputs a saliency map with local maxima on keypoint positions.
Trained detectors learn this saliency map with a CNN whereas we extract it with gradient computations. This approach is inspired by [@simonyan2013deep] which observes that the gradient of classification scores *w.r.t* the image is similar to the image saliency map. ELF differs in that it takes the gradient of feature maps and not the classification score contrary to existing work exploiting CNN gradients [@selvaraju2017grad; @smilkov2017smoothgrad; @springenberg2015striving; @sundararajan2017axiomatic]. These previous works aim at visualising the learning signal for classification specifically whereas ELF extracts the feature locations. The extracted saliency map is then thresholded to keep only the most relevant locations and standard Non-Maxima Suppression (NMS) extracts the final keypoints (Figure \[fig:heatmap\_coco\]). ![ Saliency maps thresholding to keep only the most informative location. Top: original image. (Left-Right: Webcam [@verdie2015tilde], HPatches [@balntas2017hpatches], COCO[@lin2014microsoft]) Middle: blurred saliency maps. Bottom: saliency map after threshold. (Better seen on a computer.) []{data-label="fig:heatmap_coco"}](fig3_heatmap.png){width="\linewidth"} ELF relies only on six parameters: 2$\times$2 Gaussian blur parameters for the automatic threshold level estimation and for the saliency map denoising; and two parameters for the (NMS) window and the border to ignore. Detection only requires one forward and one backward passes and takes $\sim$0.2s per image on a simple Quadro M2200, which makes it suitable for real-time applications. ELF is compared to individual detectors with standard *repeatability* [@mikolajczyk2005comparison] but results show that this metric is not discriminative enough. Most of the existing detectors can extract keypoints repeated across images with similar repeatability scores. 
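The *repeatability* score referred to above can be summarised as follows: keypoints from one image are warped into the other by the ground-truth homography, and one counts the fraction that land within $\epsilon$ pixels of a detected keypoint. A simplified numpy sketch (the function names, the one-way matching and the $\epsilon$ convention are ours; the reference protocol [@mikolajczyk2005comparison] also handles shared-view filtering):

```python
import numpy as np

def warp_points(kp, H):
    """Apply a 3x3 homography H to an (N, 2) array of (x, y) keypoints."""
    pts = np.hstack([kp, np.ones((len(kp), 1))])
    w = pts @ H.T
    return w[:, :2] / w[:, 2:3]

def repeatability(kp1, kp2, H, eps=5.0):
    """Fraction of keypoints repeated within eps pixels under homography H
    (kp1 detected in image 1, kp2 in image 2, H maps image 1 to image 2)."""
    proj = warp_points(kp1, H)                 # kp1 expressed in image 2
    d = np.linalg.norm(proj[:, None, :] - kp2[None, :, :], axis=2)
    matched = (d.min(axis=1) <= eps).sum()     # kp1 points with a close partner
    return matched / min(len(kp1), len(kp2))
```

With identical detections and the identity homography this returns 1.0; detections that project far from any keypoint in the other image contribute nothing.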
Also, this metric does not express how ‘useful’ the detected keypoints are: if we sample all pixels as keypoints, we reach 100% of *rep.* but the matching may not be perfect if many areas look alike. Therefore, the detected keypoints are also evaluated on how ‘matchable’ they are with the *matching score* [@mikolajczyk2005comparison]. This metric requires describing the keypoints, so we define a simple descriptor: it is based on the interpolation of a CNN feature map on the detected keypoints, as in [@detone18superpoint]. This avoids biasing the performance by choosing an existing competitive descriptor. Experiments show that even this simple descriptor reaches competitive results, which supports the observation of [@fischer2014descriptor] on the relevance of CNN features as descriptors. More details are provided in Section 4.1. ELF is tested on five architectures: three classification networks trained on ImageNet classification: AlexNet, VGG and Xception [@krizhevsky2012imagenet; @simonyan2014very; @chollet17xception], as well as SuperPoint [@detone18superpoint] and LF-Net [@ono2018lf] descriptor networks. Although outside the scope of this paper, this comparison provides preliminary results on the influence of the network architecture, task and training data on ELF’s performance. Metrics are computed on HPatches [@balntas2017hpatches] for generic performance. We derive two auxiliary datasets from HPatches to study scale and rotation robustness. Light and 3D viewpoint robustness analyses are run on the Strecha and Webcam datasets [@strecha2008benchmarking; @verdie2015tilde]. These extensive experiments show that ELF is on par with other sparse detectors, which suggests that the feature representation and location information learnt by a CNN to complete a vision task is as relevant as when the CNN is specifically trained for feature detection.
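The simple descriptor just mentioned can be sketched as bilinear interpolation of a feature map at the (sub-pixel) keypoint locations, followed by L2 normalisation, in the spirit of [@choy2016universal; @detone18superpoint]. A hedged numpy illustration (shapes and names are ours):

```python
import numpy as np

def interpolate_descriptors(F, kps):
    """Bilinearly interpolate feature map F of shape (H, W, C) at float (x, y)
    keypoints. Returns an (N, C) array of L2-normalised descriptors."""
    H, W, _ = F.shape
    out = []
    for x, y in kps:
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
        ax, ay = x - x0, y - y0
        # weighted sum of the four neighbouring feature vectors
        d = ((1 - ax) * (1 - ay) * F[y0, x0] + ax * (1 - ay) * F[y0, x1]
             + (1 - ax) * ay * F[y1, x0] + ax * ay * F[y1, x1])
        out.append(d / (np.linalg.norm(d) + 1e-12))
    return np.array(out)
```

Keypoints detected in image coordinates would first be rescaled to feature-map coordinates; that bookkeeping is omitted here.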
We additionally test ELF’s robustness on 3D reconstruction from images in the context of the CVPR 2019 Image Matching challenge [@cvpr19challenge]. Once again, ELF is on par with other sparse methods even though denser methods, e.g. [@detone18superpoint], are more appropriate for such a task. Our contributions are the following: - We show that a CNN trained on a standard vision task embeds feature location in the feature gradients. This information is as relevant for feature detection as when a CNN is specifically trained for it. - We define a systematic method for local feature detection. Extensive experiments show that ELF is on par with other SoA deep trained detectors. They also update the previous result from [@fischer2014descriptor]: self-taught CNN features provide SoA descriptors in spite of recent improvements in CNN descriptors [@choy2016universal]. - We release the python-based evaluation code to ease future comparison together with ELF code[^1]. The introduced robustness datasets are also made public [^2]. Related work ============ Early methods rely on hand-crafted detection and description: SIFT [@lowe2004distinctive] detects 3D spatial-scale keypoints on differences of Gaussians and describes them with a 3D Histogram Of Gradients (HOG). SURF [@bay2006surf] uses integral images to speed up the previous detection and uses a sum of Haar wavelet responses for description. KAZE [@alcantarilla2012kaze] extends the previous multi-scale approach by detecting features in non-linear scale spaces instead of the classic Gaussian ones. ORB [@rublee2011orb] combines the FAST [@rosten2006machine] detection, the BRIEF [@calonder2010brief] description, and improves them to make the pipeline scale and rotation invariant. The MSER-based detector hand-crafts desired invariance properties for keypoints, and designs a fast algorithm to detect them [@matas2004robust].
Even though these hand-crafted methods have proven to be successful and to reach state-of-the-art performance for some applications, recent research focuses on learning-based methods. One of the first learned detectors is TILDE [@verdie2015tilde], trained under drastic changes of light and weather on the Webcam dataset. They use supervision to learn saliency maps whose maxima are keypoint locations. Ground-truth saliency maps are generated with ‘good keypoints’: they use SIFT and filter out keypoints that are not repeated in more than 100 images. One drawback of this method is the need for supervision that relies on another detector. However, there is no universal explicit definition of what a good keypoint is. This lack of specification inspires Quad-Networks [@savinov2017quad] to adopt an unsupervised approach: they train a neural network to rank keypoints according to their robustness to random hand-crafted transformations. They keep the top/bottom quantile of the ranking as keypoints. ELF is similar in that it does not require supervision but differs in that it does not need to further train the CNN. Other learned detectors are trained within full detection/description pipelines such as LIFT [@yi2016lift], SuperPoint [@detone18superpoint] and LF-Net [@ono2018lf]. LIFT’s contribution lies in their original training method of three CNNs. The detector CNN learns a saliency map where the most salient points are keypoints. They then crop patches around these keypoints, compute their orientations and descriptors with two other CNNs. They first train the descriptor with patches around ground-truth matching points with contrastive loss, then the orientation CNN together with the descriptor and finally with the detector. One drawback of this method is the need for ground-truth matching keypoints to initiate the training.
In [@detone18superpoint], the problem is avoided by pre-training the detector on a synthetic geometric dataset made of polygons on which they detect mostly corners. The detector is then finetuned during the descriptor training on image pairs from COCO [@lin2014microsoft] with synthetic homographies and the correspondence contrastive loss introduced in [@choy2016universal]. LF-Net relies on another type of supervision: it uses ground-truth camera poses and image depth maps that are easier to compute with laser or standard SfM than ground-truth matching keypoints. Its training pipeline builds on LIFT and employs the projective camera model to project detected keypoints from one image to the other. These keypoint pairs form the ground-truth matching points to train the network. ELF differs in that the CNN model is already trained on a standard task. It then extracts the relevant information embedded inside the network for local feature detection, which requires neither training nor supervision. The detection method of this paper is mainly inspired by the initial observation in [@simonyan2013deep]: given a CNN trained for classification, the gradient of a class score *w.r.t* the image is the saliency map of the class object in the input image. A line of work aims at visualizing the CNN representation by inverting it into the image space through optimization [@mahendran2015understanding; @gatys2016image]. Our work differs in that we backpropagate the feature map itself and not a feature loss. Subsequent works use these saliency maps to better understand the CNN training process and justify the CNN outputs. Efforts mostly focus on the gradient definitions [@smilkov2017smoothgrad; @springenberg2015striving; @sundararajan2017axiomatic; @zeiler2014visualizing]. They differ in the way they handle the backpropagation of the non-linear units such as Relu.
Grad-CAM [@selvaraju2017grad] introduces a variant where they fuse several gradients of the classification score *w.r.t* feature maps and not the image space. Instead, ELF computes the gradient of the feature map, and not a classification score, *w.r.t* the image. Also, we run simple backpropagation, which differs in the non-linearity handling: the whole signal is backpropagated regardless of whether the feature maps or the gradients are positive. Finally, as far as we know, this is the first work to exploit the localisation information present in these gradients for feature detection. The simple descriptor introduced for the sake of the matchability evaluation is taken from UCN [@choy2016universal]. Given a feature map and the keypoints to describe, it interpolates the feature map at the keypoint locations. Using a trained CNN for feature description is one of the early applications of CNN [@fischer2014descriptor]. Later research has focused on specifically training CNNs to generate features suitable for keypoint matching either with patch-based approaches, among which [@simo2015discriminative; @melekhov2016siamese; @han2015matchnet; @zagoruyko2015learning], or image-based approaches [@taira2018inloc; @choy2016universal]. We choose the description method from UCN [@choy2016universal], also used by SuperPoint, for its complexity is only $O(1)$ compared to patch-based approaches that are $O(N)$ with $N$ the number of keypoints. We favor UCN over InLoc [@taira2018inloc] as it is simpler to compute. The motivation here is only to get a simple descriptor easy to integrate with all detectors for fair comparison of the *detector* matching performances. So we overlook the description performance. Method ====== This section defines ELF, a detection method valid for any trained CNN. Keypoints are local maxima of a saliency map computed as the feature gradient *w.r.t* the image.
We use the data-adaptive Kapur method [@kapur1985new] to automatically threshold the saliency map and keep only the most salient locations, then run NMS for local maxima detection.

![(Bigger version Figure \[fig:big\_saliency\_coco\].) Saliency maps computed from the feature map gradient $\left| {}^tF^l(\mathbf{I}) \cdot \frac{\partial F^l}{\partial \mathbf{I}} \right|$. Image contrast is enhanced for better visualisation. Top row: gradients of VGG $pool_2$ and $pool_3$ show a loss of resolution from $pool_2$ to $pool_3$. Bottom: $(pool_i)_{i \in [1,2,5]}$ of VGG on Webcam, HPatches and Coco images. Low-level saliency maps activate accurately whereas higher-level saliency maps are blurred.[]{data-label="fig:saliency_coco"}](fig2_saliency_bis.png){width="\linewidth"}

Feature Specific Saliency
-------------------------

We generate a saliency map that activates on the most informative image region for a specific CNN feature level $l$. Let $\mathbf{I}$ be a vectorized image of dimension $D_I = H_I \cdot W_I \cdot C_I$. Let $F^l$ be a vectorized feature map of dimension $D_F= H_l \cdot W_l \cdot C_l$. The saliency map $S^l$, of dimension $D_I$, is $S^l(\mathbf{I})=\left| {}^tF^l(\mathbf{I}) \cdot \nabla_I F^l \right|$, with $\nabla_I F^l$ a $D_F \times D_I$ matrix. The saliency activates on the image regions that contribute the most to the feature representation $F^l(\mathbf{I})$. The term $\nabla_I F^l$ makes explicit the correlation between the feature space of $F^l$ and the image space in general. The multiplication by $F^l(\mathbf{I})$ applies this correlation to the features $F^l(\mathbf{I})$ specifically and generates a visualisation in image space, $S^l(\mathbf{I})$. From a geometrical point of view, this operation can be seen as the projection of the feature signal $F^l(\mathbf{I})$ into the image space through $\nabla_I F^l$. From a signal processing point of view, $F^l(\mathbf{I})$ is an input signal filtered through $\nabla_I F^l$ into the image space.
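To make the computation concrete, here is a minimal numpy sketch with toy dimensions and hypothetical names. It uses a fixed linear map $W$ as a stand-in for the CNN up to level $l$, so that $\nabla_I F^l = W$ exactly; in the real pipeline, the product $^tF^l(\mathbf{I}) \cdot \nabla_I F^l$ is obtained by backpropagating $F^l(\mathbf{I})$ through the network instead of forming the Jacobian explicitly.

```python
import numpy as np

def saliency(F, J):
    """S = |F^T . J|: project the feature vector F (size D_F) back to
    image space through the Jacobian J = dF/dI (shape D_F x D_I)."""
    return np.abs(F @ J)  # shape (D_I,), one non-negative score per pixel

# Toy 'feature extractor': F(I) = W @ I, so the Jacobian is W itself.
rng = np.random.default_rng(0)
D_I, D_F = 12, 4                      # vectorized image / feature sizes
W = rng.standard_normal((D_F, D_I))   # stand-in for the CNN up to level l
I = rng.standard_normal(D_I)          # vectorized input image
F = W @ I                             # feature representation F^l(I)

S = saliency(F, W)
```

In practice the same quantity is obtained in one backward pass, e.g. as the gradient of $\frac{1}{2}\|F^l(\mathbf{I})\|^2$ *w.r.t* $\mathbf{I}$.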
If $C_I>1$, $S^l$ is converted into a grayscale image by averaging it across channels.

Feature Map Selection
---------------------

We provide visual guidelines to choose the feature level $l$ so that $F^l$ still holds high-resolution localisation information while providing a useful high-level representation. CNN operations such as convolution and pooling increase the receptive field of feature maps while reducing their spatial dimensions. This means that $F^{l}$ has less spatial resolution than $F^{l-1}$ and the backpropagated signal $S^l$ ends up more spread out than $S^{l-1}$. This is similar to an image being overly enlarged, and it can be observed in Figure \[fig:saliency\_coco\], which shows the gradients of the VGG feature maps. On the top row, $pool_2$’s gradient (left) better captures the location details of the dome whereas $pool_3$’s gradient (right) is more spread out. On the bottom rows, the images lose their resolution as we go higher in the network. Another consequence of this resolution loss is that small features are not embedded in $F^l$ if $l$ is too high. This would reduce the space of potential keypoints to only large features, which would hinder the method. This observation motivates us to favor low-level feature maps for feature detection. We choose the final $F^l$ by taking the highest $l$ that provides accurate localisation. This is visually observable as a sparse, high-intensity signal, in contrast to the blurry aspect of higher layers.

Automatic Data-Adaptive Thresholding
------------------------------------

The threshold is automatic and adapts to the saliency map distribution to keep only the most informative regions. Figure \[fig:heatmap\_coco\] shows saliency maps before and after thresholding using Kapur’s method [@kapur1985new], which we briefly recall below. It chooses the threshold that maximizes the information between the image background and foreground, *i.e.* the pixel distributions below and above the threshold.
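Kapur's criterion can be sketched in a few lines of numpy (the histogram below is a hypothetical toy; the actual pipeline first applies the Gaussian blurs described next):

```python
import numpy as np

def entropy(q):
    """Shannon entropy of a discrete distribution (0 log 0 := 0)."""
    q = q[q > 0]
    return -np.sum(q * np.log(q))

def kapur_threshold(hist):
    """Gray level s maximizing the summed entropies of the background
    (levels < s) and foreground (levels >= s) distributions."""
    p = hist / hist.sum()
    best_s, best_h = 1, -np.inf
    for s in range(1, len(p)):
        pa, pb = p[:s].sum(), p[s:].sum()
        if pa == 0 or pb == 0:
            continue
        h = entropy(p[:s] / pa) + entropy(p[s:] / pb)
        if h > best_h:
            best_h, best_s = h, s
    return best_s

# Bimodal toy histogram: dark mode (levels 0-1), bright mode (levels 8-9).
hist = np.array([120., 60, 0, 0, 0, 0, 0, 0, 40, 80])
s = kapur_threshold(hist)   # threshold separating the two modes
```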
This method is especially relevant in this case as it aims at maintaining as much information as possible on the distribution above the threshold. This distribution describes the set of local maxima among which we choose our keypoints. More formally, for an image $\mathbf{I}$ of $N$ pixels with $n$ sorted gray levels and corresponding histogram $(f_i)_{i \leq n}$, $p_i=\frac{f_i}{N}$ is the empirical probability that a pixel holds the gray level $i$. Let $s \leq n$ be a threshold level and $A,B$ the empirical background and foreground distributions: $A = \left( \frac{p_i}{\sum_{i<s}p_i}\right)_{i<s}$ and $B = \left(\frac{p_i}{\sum_{i \geq s}p_i}\right)_{i \geq s}$. The level $s$ is chosen to maximize the information between $A$ and $B$, and the threshold value is set to $f_s$. For better results, we blur the image with a Gaussian of parameters $(\mu_{thr}, \sigma_{thr})$ before computing the threshold level. Once the threshold is set, we denoise the image with a second Gaussian blur of parameters $(\mu_{noise}, \sigma_{noise})$ and run standard NMS (the same as for SuperPoint), where we iteratively select decreasing global maxima while ensuring that their nearest-neighbor distance is higher than the window $w_{\textrm{NMS}} \in \mathbb{N}$. We also ignore the $b_{\textrm{NMS}} \in \mathbb{N}$ pixels around the image border.

Simple descriptor
-----------------

As mentioned in the introduction, the repeatability score does not discriminate among detectors anymore, so detectors are also evaluated on how ‘matchable’ their detected keypoints are with the matching score. To do so, the ELF detector is completed with a simple descriptor inspired by SuperPoint’s: we interpolate a CNN feature map at the detected keypoint locations. The use of this simple descriptor over existing competitive ones avoids unfairly boosting ELF’s performance. Although simple, experiments show that this descriptor completes ELF into a competitive feature detection/description method.
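A minimal sketch of this interpolation-based description (plain numpy, hypothetical shapes; in the actual pipeline the feature map comes from VGG and image keypoints are first rescaled to the feature map's resolution):

```python
import numpy as np

def interpolate_descriptors(fmap, kps):
    """Bilinearly interpolate a feature map (H, W, C) at sub-pixel
    keypoint locations (x, y), then L2-normalize each descriptor."""
    H, W, C = fmap.shape
    descs = np.empty((len(kps), C))
    for k, (x, y) in enumerate(kps):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
        ax, ay = x - x0, y - y0
        d = ((1 - ay) * ((1 - ax) * fmap[y0, x0] + ax * fmap[y0, x1])
             + ay * ((1 - ax) * fmap[y1, x0] + ax * fmap[y1, x1]))
        descs[k] = d / (np.linalg.norm(d) + 1e-8)
    return descs

# Toy feature map; keypoints are given in feature-map coordinates
# (in practice, image keypoints are rescaled by the pooling factor).
rng = np.random.default_rng(1)
fmap = rng.standard_normal((8, 8, 16))
descs = interpolate_descriptors(fmap, [(2.0, 3.0), (4.5, 1.25)])
```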
The feature map used for description may be different from the one used for detection. High-level feature maps have a wider receptive field and hence take more context into account for the description of a pixel location. This leads to more informative descriptors, which motivates us to favor higher-level maps. However, we are also constrained by the loss of resolution previously described: if the feature map level is too high, the interpolated descriptors become too similar to one another. For example, the VGG $pool_4$ layer produces more discriminative descriptors than $pool_5$ even though $pool_5$ embeds information more relevant for classification. Empirically, we observe that there exists a layer level $l'$ above which the description performance stops increasing and eventually decreases. This is measured through the matching score metric introduced in [@mikolajczyk2005comparison]. The final choice of the feature map is made by testing some layers $l'>l$ and selecting the lowest one before the descriptor performance stagnates. The compared detectors are evaluated with both their original descriptor and this simple one. We detail the motivation behind this choice: detectors may be biased to sample keypoints that their respective descriptor can describe ‘well’ [@yi2016lift], so it is fair to compute the matching score with the original detector/descriptor pairs. However, a detector can sample ‘useless’ points (e.g. sky pixels for 3D reconstruction) that its descriptor can nonetheless characterise ‘well’. In this case, the descriptor ‘hides’ the detector’s flaws. This motivates the integration of a common independent descriptor with all detectors to evaluate them. Both approaches are run since each is as fair as the other.

Experiments
===========

This section describes the evaluation metrics and datasets, as well as the method’s tuning.
Our method is compared to detectors with available public code: the fully hand-crafted SIFT [@lowe2004distinctive], SURF [@bay2006surf], ORB [@rublee2011orb], KAZE [@alcantarilla2012kaze], the learning-based LIFT [@yi2016lift], SuperPoint [@detone18superpoint], LF-Net [@ono2018lf], and the individual detectors TILDE [@verdie2015tilde] and MSER [@matas2004robust].

Metrics
-------

We follow the standard validation guidelines [@mikolajczyk2005comparison], which evaluate the detection performance with *repeatability (rep)*. It measures the percentage of keypoints common to both images. We also compute the *matching score (ms)* as an additional *detector* metric. It captures the percentage of keypoint pairs that are nearest neighbours in both image space and descriptor space, i.e. the ratio of keypoints correctly matched. For completeness, the mathematical definitions of the metrics are provided in the Appendix and their implementation in the soon-to-be released code. One way to reach perfect *rep* is to sample all the pixels, or to sample them with a frequency higher than the distance threshold $\epsilon_{kp}$ of the metric. One way to prevent the first flaw is to limit the number of keypoints, but this does not counter the second. Since detectors are always used together with descriptors, another way to think about detector evaluation is: *’a good keypoint is one that can be discriminatively described and matched’*. One could think that such a metric can be corrupted by the descriptor, but we ensure that a detector flaw cannot be hidden by a high-performing descriptor with two guidelines. First, one experiment evaluates all detectors with one fixed descriptor (the simple one defined in 3.4). Second, *ms* can never be higher than *rep*, so a detector with a poor *rep* leads to a poor *ms*. Here the number of detected keypoints is limited to 500 for all methods.
As done in [@detone18superpoint; @ono2018lf], we replace the overlap score of [@mikolajczyk2005comparison] with a 5-pixel distance threshold to compute correspondences. Following [@yi2016lift], we also modify the matching score definition of [@mikolajczyk2005comparison] to run a greedy bipartite-graph matching on all descriptors, and not just the descriptor pairs whose distance is below an arbitrary threshold. We do so to be able to compare all state-of-the-art methods even when their descriptor dimension and range vary significantly. (More details in the Appendix.)

Datasets
--------

All images are resized to 480$\times$640 pixels and the image pair transformations are rectified accordingly.

![Left-Right: HPatches: planar viewpoint. Webcam: light. HPatches: rotation. HPatches: scale. Strecha: 3D viewpoint.[]{data-label="fig:datasets"}](fig13.png){width="\linewidth"}

**General performances.** The HPatches dataset [@balntas2017hpatches] gathers a subset of standard evaluation images such as DTU and OxfordAffine [@aanaes2012interesting; @mikolajczyk2005performance]: it provides a total of 696 images, 6 images for each of 116 scenes, and the corresponding homographies between the images of a same scene. For 57 of these scenes, the main changes are photometric, and the remaining 59 show significant geometric deformations due to viewpoint changes on planar scenes.

**Illumination Robustness.** The Webcam dataset [@verdie2015tilde] gathers static outdoor scenes with drastic natural light changes, contrary to HPatches, which mostly holds artificial light changes in indoor scenes.

**Rotation and Scale Robustness.** We derive two datasets from HPatches. For each of the 116 scenes, we keep the first image and rotate it with angles from $0^{\circ}$ to $210^{\circ}$ with an interval of $40^{\circ}$. Four zoomed-in versions of the image are generated with scales $[1.25, 1.5, 1.75, 2]$.
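The ground-truth homographies of these two derived datasets have a closed form. The sketch below (numpy; it assumes rotation and zoom about the image center, with angles $0^{\circ}, 40^{\circ}, \ldots, 200^{\circ}$) builds them as $3\times3$ matrices:

```python
import numpy as np

def rotation_homography(deg, w, h):
    """3x3 homography rotating the image by `deg` about its center."""
    t = np.deg2rad(deg)
    c, s = np.cos(t), np.sin(t)
    cx, cy = (w - 1) / 2, (h - 1) / 2
    T = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1.0]])
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
    Tinv = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1.0]])
    return T @ R @ Tinv

def scale_homography(scale, w, h):
    """3x3 homography zooming in by `scale` about the image center."""
    cx, cy = (w - 1) / 2, (h - 1) / 2
    return np.array([[scale, 0, cx * (1 - scale)],
                     [0, scale, cy * (1 - scale)],
                     [0, 0, 1.0]])

Hs_rot = [rotation_homography(a, 640, 480) for a in range(0, 211, 40)]
Hs_scale = [scale_homography(s, 640, 480) for s in (1.25, 1.5, 1.75, 2)]
```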
We release these two datasets together with their ground-truth homographies for future comparisons.

**3D Viewpoint Robustness.** We use three Strecha scenes [@strecha2008benchmarking] with increasing viewpoint changes: *Fountain, Castle entry, Herzjesu-P8*. The viewpoint changes proposed by HPatches are limited to planar scenes, which does not reflect the complexity of 3D structures. Since the ground-truth depths are not available anymore, we use COLMAP [@schonberger2016structure] 3D reconstruction to obtain ground-truth scaleless depth. We release the obtained depth maps and camera poses together with the evaluation code. ELF robustness is additionally tested in the CVPR19 Image Matching Challenge [@cvpr19challenge] (see the results section).

Baselines
---------

We describe the rationale behind the evaluation. The tests run on a QuadroM2200 with Tensorflow 1.4, Cuda8, Cudnn6 and Opencv3.4. We use the OpenCV implementation of SIFT, SURF, ORB, KAZE, MSER with the default parameters, and the authors’ code for TILDE, LIFT, SuperPoint, LF-Net with the provided models and parameters. When comparing detectors in the feature matching pipeline, we measure their matching score with both their original descriptor and ELF’s simple descriptor. For MSER and TILDE, we use the VGG simple descriptor.

**Architecture influence.** ELF is tested on five networks: three classification ones trained on ImageNet (AlexNet, VGG, Xception [@krizhevsky2012imagenet; @simonyan2014very; @chollet17xception]) as well as the trained descriptor networks of SuperPoint and LF-Net. We call each variant by its network’s name prefixed with ELF (e.g. ELF-VGG). The paper compares the influence of i) the architecture for a fixed task (ELF-AlexNet [@krizhevsky2012imagenet] *vs.* ELF-VGG [@simonyan2014very] *vs.* ELF-Xception [@chollet17xception]), ii) the task (ELF-VGG *vs.* ELF-SuperPoint (SP) descriptor), iii) the training dataset (ELF-LFNet on phototourism *vs.* ELF-SP on MS-COCO).
This study is being refined with more independent comparisons of tasks, datasets and architectures, soon available in a journal extension. We use the authors’ code and pre-trained models, which we convert to Tensorflow [@abadi2016tensorflow] except for LF-Net. We search the blurring parameters $(\mu_{thr}, \sigma_{thr})$, $(\mu_{noise}, \sigma_{noise})$ in the range $ [\![3,21]\!]^2$ and the NMS parameters $(w_{NMS}, b_{NMS})$ in $[\![4,13]\!]^2$.

**Individual components comparison.** Individual detectors are compared on the matchability of their detections, described with the simple VGG-pool3 descriptor. This way, the *ms* only depends on the detection performance since the description is fixed for all detectors. The comparison between ELF and recent deep methods raises the question of whether triplet-like losses are relevant to train CNN descriptors. Indeed, these losses constrain the CNN features directly so that matching keypoints are near each other in descriptor space. Simpler losses, such as the cross-entropy for classification, only constrain the CNN output on the task while leaving the representation up to the CNN. The ELF-VGG detector is also integrated with existing descriptors. This evaluates how the CNN’s self-learned feature localisation compares with hand-crafted and learned ones.

**Gradient Baseline.** Visually, the feature gradient map is reminiscent of the image gradients computed with the Sobel or Laplacian operators. We run two variants of our pipeline where we replace the feature gradient with them. This aims at showing whether CNN feature gradients embed more information than image intensity gradients.

Results
=======

Experiments show that ELF compares with the state-of-the-art on HPatches and demonstrates robustness properties similar to recent learned methods.
It generates saliency maps visually akin to a Laplacian on very structured images (HPatches) but proves to be more robust on outdoor scenes with natural conditions (Webcam). When integrated with existing feature descriptors, ELF boosts their matching score. Even integrating ELF’s simple descriptor improves it, with the exception of SuperPoint, for which results are equivalent. This sheds new light on the representations learnt by CNNs and suggests that deep description methods may underexploit the information embedded in their trained networks. Another suggestion may be that the current metrics are not relevant anymore for deep learning methods. Indeed, all of them can detect repeatable keypoints with more or less the same performance. Even though the matchability of the points (*ms*) is a bit more discriminative, neither metric expresses how ‘useful’ the keypoints are for the end-goal task. One way to do so is to evaluate an end-goal task (*e.g.* Structure-from-Motion). However, for the evaluation to be rigorous, all the other steps should be fixed for all papers. Recently, the Image Matching CVPR19 workshop proposed such an evaluation, but it is not fully automatic yet. These results also challenge whether current descriptor-training losses are a strong enough signal to constrain CNN features better than a simple cross-entropy. The tabular version of the following results is provided in the Appendix. The graph results are better seen in color on a computer screen. Unless mentioned otherwise, we compute repeatability for each detector, and the matching score of detectors with their respective descriptors, when they have one. We use the ELF-VGG-$pool_4$ descriptor for TILDE, MSER, ELF-VGG, ELF-SuperPoint, and ELF-LFNet. We use AlexNet and Xception feature maps to build their respective simple descriptors. The meta-parameters for each variant are provided in the Appendix.
**General performances.** Figure \[fig:hpatch\_gle\_perf\] (top) shows that the *rep* variance is low across detectors whereas *ms* is more discriminative, hence the validation method (Section 4.1). On HPatches, SuperPoint (SP) reaches the best *rep*-*ms* \[68.6, 57.1\], closely followed by ELF (e.g. ELF-VGG: \[63.8, 51.8\]) and TILDE \[66.0, 46.7\]. In general, we observe that learning-based methods all outperform hand-crafted ones. Still, LF-Net and LIFT curiously underperform on HPatches: one reason may be that the data they are trained on differs too much from this dataset. LIFT is trained on outdoor images only and LF-Net on either indoor or outdoor datasets, whereas HPatches is made of a mix of both. We compute metrics for both LF-Net models and report the highest one (indoor). Even though LF-Net and LIFT fall behind the top learned methods, they still outperform hand-crafted ones, which suggests that their frameworks learn feature-specific information that hand-crafted methods cannot capture. This supports the recent direction towards trained detectors and descriptors.

**Light Robustness.** Again, *ms* is a better discriminant on Webcam than *rep* (Figure \[fig:hpatch\_gle\_perf\] bottom). ELF-VGG reaches the top *rep*-*ms* \[53.2, 43.7\], closely followed by TILDE \[52.5, 34.7\], which was the previous state-of-the-art detector. Overall, there is a performance degradation ($\sim$20%) from HPatches to Webcam. HPatches holds images with standard features such as corners that state-of-the-art methods are built to recognise, either by design or by supervision. There are fewer such features in the Webcam dataset because the natural lighting blurs them. Also, there are strong intensity variations that these models do not handle well. One reason may be that the learning-based methods never saw such lighting variations in their training set.
However, this assumption is challenged by the observation that even SuperPoint, which is trained on Coco images, outperforms LIFT and LF-Net, which are trained on outdoor images. Another justification can be that what matters most is the pixel distribution the network is trained on, rather than the image content. The top methods are the classifier-based ELF variants and SuperPoint: the former are trained on the huge ImageNet dataset and benefit from heavy data augmentation; SuperPoint also employs a considerable data augmentation strategy to train its network. Thus these networks may cover a much wider pixel distribution, which would explain their robustness to pixel distribution changes such as light modifications.

**Architecture influence.** ELF is tested on three classification networks as well as the descriptor networks of SuperPoint and LF-Net (Figure \[fig:hpatch\_gle\_perf\], bars under ‘ELF’). For a fixed training task (classification) on a fixed dataset (ImageNet), VGG, AlexNet and Xception are compared. As could be expected, the network architecture has a critical impact on the detection, and ELF-VGG outperforms the other variants. The *rep* gap can be explained by the fact that AlexNet is made of wider convolutions than VGG, which induces a higher loss of resolution when computing the gradient. As for *ms*, the richer representation space of VGG may help build more informative features, which provide a stronger signal to backpropagate. This could also justify why ELF-VGG outperforms ELF-Xception, which has fewer parameters. Another explanation is that ELF-Xception’s gradient maps seem smoother. Salient locations are then less emphasized, which makes the keypoint detection harder. One could point to the depth-wise convolutions to explain this visual aspect, but we could not find an experimental way to verify it. Surprisingly, ELF-LFNet outperforms the original LF-Net on both HPatches and Webcam, and the ELF-SuperPoint variant reaches results similar to the original.

![HPatches scale.
Left-Right: rep, ms.[]{data-label="fig:robust_scale"}](fig7_scale.png){width="\linewidth"}

**Scale Robustness.** ELF-VGG is compared with state-of-the-art detectors and their respective descriptors (Figure \[fig:robust\_scale\]). Repeatability is mostly stable for all methods: SIFT and SuperPoint are the most invariant, whereas ELF follows the same variations as LIFT and LF-Net. Once again, *ms* better assesses the detectors’ performance: SuperPoint is the most robust to scale changes, followed by LIFT and SIFT. ELF and LF-Net lose 50% of their matching score as the scale increases. It is surprising to observe that LIFT is more scale-robust than LF-Net when the latter’s global performance is higher. A reasonable explanation is that LIFT detects keypoints at 21 scales of the same image whereas LF-Net only runs its detector CNN on 5 scales. Nonetheless, ELF outperforms LF-Net without any manual multi-scale processing.

![HPatches rotation. Left-Right: rep, ms.[]{data-label="fig:robust_rotation"}](fig7_angle.png){width="\linewidth"}

**Rotation Robustness.** Even though *rep* shows little variation (Figure \[fig:robust\_rotation\]), all learned methods’ *ms* crashes, while only SIFT survives the rotation changes. This can be explained by the explicit rotation estimation step of SIFT. However, LIFT and LF-Net also run such a computation. This suggests that either SIFT’s hand-crafted orientation estimation is more accurate, or that HOG features are more rotation invariant than learned ones. LF-Net still performs better than LIFT: this may be because it learns the keypoint orientation from the keypoint’s feature representation rather than from the keypoint pixels, as done in LIFT. Not surprisingly, ELF’s simple descriptor is not rotation invariant, as the convolutions that make up the CNN are not. This also explains why SuperPoint crashes in a similar manner. These results suggest that the orientation learning step in LIFT and LF-Net is needed, but that its robustness could be improved.
![Robustness analysis: 3D viewpoint.[]{data-label="fig:robust_strecha"}](fig7_strecha.png){width="\linewidth"}

**3D Viewpoint Robustness.** While SIFT shows a clear advantage in pure-rotation robustness, it displays a degradation similar to the other methods on realistic rotation-and-translation over 3D structures. Figure \[fig:robust\_strecha\] shows that all methods degrade uniformly. One could argue that this small data sample is not representative enough to run such a robustness analysis. However, we think that these results rather suggest that all methods have the same robustness to 3D viewpoint changes. Even though the previous analyses allow us to rank the different feature matching pipelines, each has advantages over the others in certain situations: ELF or SuperPoint on general homography matches, or SIFT on rotation robustness. This is why this paper only aims at showing that ELF reaches the same performance and shares similar properties with existing methods, as there is no generic ranking criterion. The recent evaluation run by the CVPR19 Image Matching Challenge [@cvpr19challenge] supports these conclusions.

![Left-Middle-Right bars: original method, integration of ELF detection, integration of ELF description.[]{data-label="fig:ind_component"}](fig11.png){width="\linewidth"}

**Individual components performance.** First, all methods’ descriptors are replaced with the simple ELF-VGG-$pool_3$ one. We then compute their new *ms* and compare it to ELF-VGG on HPatches and Webcam (Figure \[fig:ind\_component\], stripes). The description is based on $pool_3$ instead of $pool_4$ here, as it produces better results for the other methods while preserving ours. ELF reaches a higher *ms* \[51.3\] for all methods except SuperPoint \[53.7\], for which it is comparable. This shows that ELF is as relevant as, if not more than, previous hand-crafted or learned detectors.
This naturally leads to the question: *’What kind of keypoints does ELF detect?’* There is currently no answer to this question, as it is complex to explicitly characterize the properties of the pixel areas around keypoints. Hence the open question *’What makes a good keypoint?’* mentioned at the beginning of the paper. Still, we observe that ELF activates mostly on high-intensity gradient areas, although not on all of them. One explanation is that, as the CNN is trained on a vision task, it learns to ignore image regions useless for that task. This kills the gradient signal in areas that may be unsuited for matching. Another surprising observation regards CNN descriptors: SuperPoint (SP) keypoints are described with the SP descriptor on the one hand, and with the simple ELF-VGG one on the other hand. Comparing the two resulting matching scores is one way to compare the SP and ELF descriptors. Results show that both approaches lead to similar *ms*. This result is surprising because SP specifically trains a description CNN so that its feature map is suitable for keypoint description [@choy2016universal]. In VGG training, there are no explicit constraints on the features from the cross-entropy loss. Still, both feature maps reach similar numerical description performance. This raises the question of whether contrastive-like losses, whose inputs are CNN features, can better constrain the CNN representation than simpler losses, such as the cross-entropy, whose inputs are classification logits. This also shows that there is more to CNNs than only the task they are trained on: they embed information that can prove useful for unrelated tasks. Although the simple descriptor was defined for evaluation purposes, these results demonstrate that it can be used as a description baseline for feature extraction. The integration of ELF detection with other methods’ descriptors (Figure \[fig:ind\_component\], circle) boosts the *ms*.
[@yi2016lift] previously suggested that there may be a correlation between the detector and the descriptor within the same method, i.e. the LIFT descriptor is trained to describe only the keypoints output by its detector. However, these results show that ELF can easily be integrated into existing pipelines and can even boost their performance.

**Gradient Baseline.** The saliency map used in ELF is replaced with simple Sobel or Laplacian gradient maps. The rest of the detection pipeline stays the same, and we compute their performance (Figure \[fig:gradient\_perf\] Left). They are completed with simple ELF descriptors from the VGG, AlexNet and Xception networks. These new hybrids are then compared to their respective ELF variants (Right). Results show that these simpler gradients can detect systematic keypoints with comparable *rep* on very structured images such as HPatches. However, the ELF detector better withstands light changes (Webcam). On HPatches, the Laplacian variant reaches a *ms* similar to ELF-VGG (55 *vs* 56) and outperforms ELF-AlexNet and ELF-Xception. These scores can be explained by the image structure: for heavily textured images, high-intensity gradient locations are relevant enough keypoints. However, on Webcam, all ELF detectors outperform the Laplacian and Sobel variants by roughly 100%. This shows that ELF is more robust than the Laplacian and Sobel operators. Also, the feature gradient is a sparse signal which is better suited for local maxima detection than the much smoother Laplacian (Figure \[fig:sobel\_visu\]).

![Feature gradient (right) provides a sparser signal than the Laplacian (middle) and is more selective of salient areas.[]{data-label="fig:sobel_visu"}](fig5_sobel_similar_ter.png){height="3cm"}

**Qualitative results.** Green lines show putative matches based only on nearest-neighbour matching of descriptors. More qualitative results are available in the video [^3].
![Green lines show putative matches of the simple descriptor before RANSAC-based homography estimation.[]{data-label="fig:matching_pic"}](fig6_matching_ter.png){width="\linewidth"}

**CVPR19 Image Matching Challenge [@cvpr19challenge].** This challenge evaluates detection/description methods on two standard tasks: 1) wide stereo matching and 2) structure from motion from small image sets. The *matching score* evaluates the first task, and the camera pose estimation is used for both tasks. Both applications are evaluated on the photo-tourism image collections of popular landmarks [@thomee59yfcc100m; @heinly2015reconstructing]. More details on the metric definitions are available on the challenge website [@cvpr19challenge].

*Wide stereo matching:* Task 1 matches image pairs across wide baselines. It is evaluated with the keypoint *ms* and the relative camera pose estimation between two images. The evaluators run COLMAP to reconstruct dense ‘ground-truth’ depth, which they use to translate keypoints from one image to another and compute the matching score. They use the RANSAC inliers to estimate the camera pose and measure performance with the “angular difference between the estimated and ground-truth vectors for both rotation and translation. To reduce this to one value, they use a variable threshold to determine each pose as correct or not, then compute the area under the curve up to the angular threshold. This value is thus the mean average precision up to x, or mAPx. They consider 5, 10, 15, 20, and 25 degrees” [@cvpr19challenge]. Submissions can contain up to 8000 keypoints, and we submitted entries to the sparse category, i.e. methods with up to 512 keypoints.

![*Wide stereo matching.* Left: matching score (%) of sparse methods (up to 512 keypoints) on photo-tourism.
Right: Evolution of the mAP of the camera pose for an increasing tolerance threshold (degrees).[]{data-label="fig:cvpr19_task1"}](fig14.png){width="\linewidth"}

Figure \[fig:cvpr19\_task1\] (left) shows the *ms* (%) of the submitted sparse methods. It compares ELF-VGG detection with DELF [@noh2017largescale] and SuperPoint, where ELF is completed with either the simple descriptor from pool3 or pool4, or with SIFT. The variants are dubbed ELF-256, ELF-512 and ELF-SIFT respectively. This allows us to sketch a simple comparison of descriptor performance between the simple descriptor and standard SIFT. As previously observed on HPatches and Webcam, ELF and SuperPoint reach similar scores on Photo-Tourism. ELF’s performance slightly increases from 25% to 26.4% when switching descriptors from VGG-pool3 to VGG-pool4. One explanation is that the feature space size is doubled from the former to the latter, which would allow the pool4 descriptors to be more discriminative. However, the 1.4% gain may not be worth the additional memory use. Overall, the results show that ELF can compare with the SoA on this additional dataset, which exhibits more illumination and viewpoint changes than HPatches and Webcam. This observation is reinforced by the camera pose evaluation (Figure \[fig:cvpr19\_task1\] right). SuperPoint shows a slight advantage over the others that increases from 1% to 5% across the error tolerance threshold, whereas ELF-256 exhibits a minor under-performance. Still, these results show that ELF compares with SoA performance even though it is not explicitly trained for detection/description.

![*SfM from small subsets*. Evolution of the mAP of the camera pose for an increasing tolerance threshold.[]{data-label="fig:cvpr19_task2"}](fig15.png){width="0.7\linewidth"}

*Structure-from-Motion from small subsets.* Task 2 “proposes to build SfM reconstructions from small (3, 5, 10, 25) subsets of images and use the poses obtained from the entire (much larger) set as ground truth” [@cvpr19challenge].
Figure \[fig:cvpr19\_task2\] shows that SuperPoint reaches performance twice as high as the next best method, ELF-SIFT. This suggests that when few images are available, SuperPoint performs better than other approaches. One explanation is that even in ’sparse-mode’, *i.e.* when the number of keypoints is restricted to 512, SuperPoint samples points more densely than the others ($\sim$383 *vs.* $\sim$210). Thus, SuperPoint provides more keypoints to triangulate, i.e. more 2D-3D correspondences to use when estimating the camera pose. This suggests that high keypoint density is a crucial characteristic of the detection method for Structure-from-Motion. In this regard, ELF still has room for improvement compared to SuperPoint. Conclusion ========== We have introduced ELF, a novel method to extract feature locations from pre-trained CNNs, with no further training. Extensive experiments show that it performs as well as state-of-the-art detectors. It can easily be integrated into existing matching pipelines and proves to boost their matching performance. Even when completed with a simple feature-map-based descriptor, it turns into a competitive feature matching pipeline. These results shed new light on the information embedded inside trained CNNs. This work also raises questions about the descriptor training of deep-learning approaches: whether their losses actually constrain the CNN to learn better features than the ones it would learn on its own to complete a vision task. Preliminary results show that the CNN architecture, the training task and the dataset have substantial impact on detector performance. A further analysis of these correlations is the object of future work.
![image](fig2_saliency_bis.png){width="\linewidth"} Metrics definition ================== We detail the repeatability and matching score definitions introduced in [@mikolajczyk2005comparison], together with our adaptations, using the following notations: let $(\mathbf{I}^1, \mathbf{I}^2)$ be a pair of images and $\mathcal{KP}^i = (kp_j^i)_{j<N_i}$ the set of $N_i$ keypoints in image $\mathbf{I}^i$. Both metrics lie in the range $[0,1]$ but we express them as percentages for readability. #### Repeatability Repeatability measures the percentage of keypoints common to both images. We first warp $\mathcal{KP}^1$ to $\mathbf{I}^2$ and denote the result $\mathcal{KP}^{1,w}$. A naive definition of repeatability is to count the number of pairs $(kp^{1,w}, kp^2) \in \mathcal{KP}^{1,w} \times \mathcal{KP}^2$ such that $\|kp^{1,w}-kp^2\|_2 < \epsilon_{kp}$, with $\epsilon_{kp}$ a distance threshold. As pointed out by [@verdie2015tilde], this definition overestimates detection performance for two reasons: a keypoint close to several projections can be counted several times, and, with a large enough number of keypoints, even simple random sampling can achieve high repeatability as the keypoint density becomes high. We instead use the definition implemented in VLBench [@lenc12vlbenchmarks]: we define a weighted graph $(V,E)$ whose edges are all the possible keypoint pairs between $\mathcal{KP}^{1,w}$ and $\mathcal{KP}^2$, weighted by the Euclidean distance between keypoints. $$\label{eq: graph_dfn} \begin{split} V &= \mathcal{KP}^{1,w} \cup \mathcal{KP}^2 \\ E &= \left\{ (kp^{1,w}, kp^2, \|kp^{1,w} - kp^2\|_2) : (kp^{1,w}, kp^2) \in \mathcal{KP}^{1,w} \times \mathcal{KP}^2 \right\} \end{split}$$ We run a greedy bipartite matching on the graph and count the matches with a distance less than $\epsilon_{kp}$.
Let $\mathcal{M}$ be the resulting set of matches: $$\label{rep_dfn} repeatability = \frac{|\mathcal{M}|}{\textrm{min}(|\mathcal{KP}^1|, |\mathcal{KP}^2|)}$$ We set the distance threshold $\epsilon_{kp}=5$ as is done in LIFT [@yi2016lift] and LF-Net [@ono2018lf]. #### Matching score The matching score definition introduced in [@mikolajczyk2005comparison] captures the percentage of keypoint pairs that are nearest neighbours both in image space and in descriptor space, and for which these two distances are below their respective thresholds $\epsilon_{kp}$ and $\epsilon_{d}$. Let $\mathcal{M}$ be defined as in the previous paragraph and $\mathcal{M}_d$ be the analog of $\mathcal{M}$ when the graph weights are descriptor distances instead of keypoint Euclidean distances. We delete all the pairs with a distance above the thresholds $\epsilon_{kp}$ and $\epsilon_d$ in $\mathcal{M}$ and $\mathcal{M}_d$ respectively. We then count the number of pairs which are both nearest neighbours in image space and descriptor space, i.e. the cardinality of the intersection of $\mathcal{M}$ and $\mathcal{M}_d$: $$\label{MS} matching \; score = \frac{|\mathcal{M} \cap \mathcal{M}_d|}{\textrm{min}(|\mathcal{KP}^1|, |\mathcal{KP}^2|)}$$ One drawback of this definition is that there is no unique descriptor distance threshold $\epsilon_d$ valid for all methods. For example, the SIFT descriptor as computed by OpenCV is a $[0,255]^{128}$ vector for better computational precision, the SuperPoint descriptor is a $[0,1]^{256}$ vector and the ORB descriptor is a 32-byte binary vector. Not only are the vectors not defined over the same normed space, but their ranges vary significantly. To avoid introducing human bias by setting a descriptor distance threshold $\epsilon_d$ for each method, we set $\epsilon_d = \infty$ and compute the matching score as in [@mikolajczyk2005comparison]. This means that we consider any descriptor match valid as long as it links corresponding keypoints, even when the descriptor distance is high.
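For concreteness, the greedy matching and the two metrics above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function names and the brute-force sort over all edges are ours, not those of the VLBench implementation.

```python
import numpy as np

def greedy_match(pts1, pts2):
    """Greedy bipartite matching on the distance graph: repeatedly take the
    closest remaining pair, using each point at most once.
    Returns (i, j, distance) triples."""
    d = np.linalg.norm(pts1[:, None, :] - pts2[None, :, :], axis=-1)
    matches, used1, used2 = [], set(), set()
    for i, j in zip(*np.unravel_index(np.argsort(d, axis=None), d.shape)):
        if i not in used1 and j not in used2:
            matches.append((i, j, d[i, j]))
            used1.add(i)
            used2.add(j)
    return matches

def repeatability(kp1_warped, kp2, eps_kp=5.0):
    # count matched pairs closer than eps_kp, normalized by the smaller set
    m = [t for t in greedy_match(kp1_warped, kp2) if t[2] < eps_kp]
    return len(m) / min(len(kp1_warped), len(kp2))

def matching_score(kp1_warped, kp2, desc1, desc2, eps_kp=5.0):
    # eps_d = inf: a descriptor match counts whenever it links matched keypoints
    m_kp = {(i, j) for i, j, d in greedy_match(kp1_warped, kp2) if d < eps_kp}
    m_d = {(i, j) for i, j, _ in greedy_match(desc1, desc2)}
    return len(m_kp & m_d) / min(len(kp1_warped), len(kp2))
```

With $\epsilon_d=\infty$ the descriptor threshold drops out of `matching_score`, matching the evaluation choice described above.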
Tabular results =============== ---------------- ------------------------ -------------------- ------------------------ -------------------- -- -- [@balntas2017hpatches] [@verdie2015tilde] [@balntas2017hpatches] [@verdie2015tilde] ELF-VGG 63.81 ELF-AlexNet 51.30 38.54 35.21 31.92 ELF-Xception 48.06 29.81 ELF-SuperPoint 59.7 46.29 44.32 18.11 ELF-LFNet 60.1 41.90 44.56 33.43 LF-Net 61.16 48.27 34.19 18.10 SuperPoint 46.35 32.44 LIFT 54.66 42.21 34.02 17.83 SURF 54.51 33.93 26.10 10.13 SIFT 51.19 28.25 24.58 8.30 ORB 53.44 31.56 14.76 1.28 KAZE 56.88 41.04 29.81 13.88 TILDE 52.53 46.71 34.67 MSER 47.82 52.23 21.08 6.14 ---------------- ------------------------ -------------------- ------------------------ -------------------- -- -- : Generic performances on HPatches [@balntas2017hpatches]. Robustness to light (Webcam [@verdie2015tilde]). (Fig. 5).[]{data-label="tab:whole_pipeline"} -- ----------- ----------- ----------- ----------- ----------- ----------- 34.19 **57.11** 34.02 24.58 26.10 14.76 **44.19** 53.71 **39.48** **27.03** **34.97** **20.04** 18.10 32.44 17.83 10.13 8.30 1.28 **30.71** **34.60** **26.84** **13.21** **21.43** **13.91** -- ----------- ----------- ----------- ----------- ----------- ----------- : Individual component performance (Fig. \[fig:ind\_component\]-stripes). Matching score for the integration of the VGG $pool_3$ simple-descriptor with other’s detection. Top: Original description. Bottom: Integration of simple-descriptor. HPatches: [@balntas2017hpatches]. Webcam: [@verdie2015tilde][]{data-label="tab:cross_res_des"} -- ----------- ----------- ----------- ----------- ----------- ----------- 34.19 **57.11** 34.02 24.58 26.10 14.76 **39.16** 54.44 **42.48** **50.63** **30.91** **36.96** 18.10 32.44 17.83 10.13 8.30 1.28 **26.70** **39.55** **30.82** **36.83** **19.14** **6.60** -- ----------- ----------- ----------- ----------- ----------- ----------- : Individual component performance (Fig. \[fig:ind\_component\]-circle). 
Matching score for the integration of ELF-VGG (on $pool_2$) with other’s descriptor. Top: Original detection. Bottom: Integration of ELF. HPatches: [@balntas2017hpatches]. Webcam: [@verdie2015tilde][]{data-label="tab:cross_res_det"} ---------------- ------------------------ -------------------- ------------------------ -------------------- -- -- [@balntas2017hpatches] [@verdie2015tilde] [@balntas2017hpatches] [@verdie2015tilde] Sobel-VGG 56.99 33.74 42.11 20.99 Lapl.-VGG **65.45** 33.74 **55.25** 22.79 VGG 63.81 **53.23** 51.84 **43.73** Sobel-AlexNet 56.44 33.74 30.57 15.42 Lapl.-AlexNet **65.93** 33.74 **40.92** 15.42 AlexNet 51.30 **38.54** 35.21 **31.92** Sobel-Xception 56.44 33.74 34.14 16.86 Lapl.-Xception **65.93** 33.74 **42.52** 16.86 Xception 48.06 **49.84** 29.81 **35.48** ---------------- ------------------------ -------------------- ------------------------ -------------------- -- -- : Gradient baseline on HPatches [@balntas2017hpatches] and Webcam [@verdie2015tilde] (Fig. \[fig:gradient\_perf\] ).[]{data-label="tab:cmp_sobel"} ELF Meta Parameters =================== This section specifies the meta parameters values for the ELF variants. For all methods, $(w_{NMS}, b_{NMS})=(10,10)$. - Denoise: $(\mu_{noise}, \sigma_{noise})$. - Threshold: $(\mu_{thr}, \sigma_{thr})$. - $F^l$: the feature map which gradient is used for detection. - simple-des: the feature map used for simple-description. Unless mentioned otherwise, the feature map is taken from the same network as the detection feature map $F^l$. Nets Denoise Threshold $F^l$ simple-desc ------------ --------- ----------- -------------- ------------- -- VGG (5,5) (5,4) pool2 pool4 Alexnet (5,5) (5,4) pool1 pool2 Xception (9,3) (5,4) block2-conv1 block4-pool SuperPoint (7,2) (17,6) conv1a VGG-pool3 LF-Net (5,5) (5,4) block2-BN VGG-pool3 : Generic performances on HPatches (Fig. \[fig:hpatch\_gle\_perf\]). 
(BN: Batch Norm)[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------------ --------- ----------- -------------- ------------- -- VGG (5,5) (5,4) pool2 pool4 Alexnet (5,5) (5,4) pool1 pool2 Xception (9,9) (5,4) block2-conv1 block4-pool SuperPoint (7,2) (17,6) conv1a VGG-pool3 LF-Net (5,5) (5,4) block2-conv VGG-pool3 : Robustness to light on Webcam (Fig. \[fig:hpatch\_gle\_perf\]).[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------ --------- ----------- ------- ------------- -- VGG (5,2) (17,6) pool2 pool4 : Robustness to scale on HPatches (Fig. \[fig:robust\_scale\]).[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------ --------- ----------- ------- ------------- -- VGG (5,2) (17,6) pool2 pool4 : Robustness to rotation on HPatches (Fig. \[fig:robust\_rotation\]).[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------ --------- ----------- ------- ------------- -- VGG (5,2) (17,6) pool2 pool4 : Robustness to 3D viewpoint on Strecha (Fig. \[fig:robust\_strecha\]).[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------ --------- ----------- ------- ------------- -- VGG (5,5) (5,5) pool2 pool3 : Individual component analysis (Fig. \[fig:ind\_component\])[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ----------- --------- ----------- ------- ------------- -- VGG (5,5) (5,4) pool2 pool4 Sobel (9,9) (5,4) - pool4 Laplacian (9,9) (5,4) - pool4 : Gradient baseline on HPatches and Webcam (Fig. \[fig:gradient\_perf\]).[]{data-label="tab:meta_params"} [^1]: ELF code:<https://github.com/ELF-det/elf> [^2]: Rotation and scale dataset: <https://bit.ly/31RAh1S> [^3]: <https://youtu.be/oxbG5162yDs>
--- abstract: 'Spermatozoa self-propel by propagating bending waves along an active elastic flagellum. The structure in the distal flagellum is likely incapable of actively bending, and as such is largely neglected. Through elastohydrodynamic modeling we show that an inactive distal region confers a significant propulsive advantage when compared with a fully active flagellum of the same length. The optimal inactive length, typically 2–5% (but up to 37% in extremes), depends on both wavenumber and viscous-elastic ratio. Potential implications in evolutionary biology and clinical assessment are discussed.' author: - 'Cara V. Neal' - 'Atticus L. Hall-McNair' - 'Meurig T. Gallagher' - 'Jackson Kirkman-Brown' - 'David J. Smith' date: October 2019 title: 'Doing more with less: the flagellar end piece enhances the propulsive effectiveness of spermatozoa' --- Spermatozoa, alongside their crucial role in sexual reproduction, are a principal motivating example of inertialess propulsion in the very low Reynolds number regime. The time-irreversible motion required for effective motility is achieved through the propagation of bending waves along the eukaryotic axoneme, which forms the active elastic internal core of the slender flagellum. While sperm morphology varies significantly between species [@austin1995evolution; @fawcett1975mammalian; @werner2008insect; @mafunda2017sperm; @nelson2010tardigrada; @anderson1975form], there are clear conserved features which can be seen in humans, most mammals, and also our evolutionary ancestors [@cummins1985mammalian]. 
In gross structural terms, sperm comprise (i) the head, which contains the genetic cargo; (ii) the midpiece of the flagellum, typically a few microns in length, containing the ATP-generating mitochondria; (iii) the principal piece of the flagellum, typically 40–50$\,\mu$m in length (although much longer in some species [@bjork2006intensity]), the core of which is a “9+2” axoneme, producing and propagating active bending waves through dynein-ATPase activity [@machin1958wave]; and (iv) the end piece, typically a few microns in length, which consists of singleton microtubules only [@zabeo2019axonemal]. Lacking the predominant “9+2” axonemal structure, the end piece appears unlikely to be a site of molecular motor activity. Since the end piece is unactuated, we will refer to it as ‘inactive’, noting however that this does not mean it is necessarily ineffective. Correspondingly, the actuated principal piece will be referred to as ‘active’. A detailed review of human sperm morphology can be found in [@gaffney2011mammalian; @lauga2009hydrodynamics]. While the end piece can be observed through transmission electron and atomic force microscopy [@fawcett1975mammalian; @ierardi2008afm], live imaging to determine its role in propelling the cell is currently challenging. Furthermore, because the end piece has been considered to play no role in propelling the cell, it has received relatively little attention. However, we know that the waveform has a significant impact on propulsive effectiveness, and moreover changes to the waveform have an important role in enabling cells to penetrate the highly viscous cervical mucus encountered in internal fertilization [@smith2009bend].
This leads us to ask: *does the presence of a mechanically inactive region at the end of the flagellum help or hinder the cell’s progressive motion?* The emergence of elastic waves on the flagellum can be described by a constitutively linear, geometrically nonlinear filament, with the addition of an active moment per unit length $m$, which models the internal sliding produced by dynein activity, and a hydrodynamic term $\bm{f}$ which describes the force per unit length exerted by the filament onto the fluid. Many sperm have approximately planar waveforms, especially upon approaching and collecting at surfaces [@gallagher2018casa; @woolley2003motility]. As such, their shape can be fully described by the angle made between the tangent and the head centreline, denoted $\theta$, as shown in Fig. \[fig:sperm-schematic\]. Following [@moreau2018asymptotic; @hall2019efficient] we parameterize the filament by arclength $s$, with $s=0$ corresponding to the head-flagellum joint and $s=L^{*}$ to the distal end of the flagellum, and apply force- and moment-free boundary conditions at $s=L^{*}$ to get $$E(s)\,\partial_s \theta(s,t) - \bm{e}_3\cdot\int_s^{L^{*}} \partial_{s'} \bm{X}(s',t) \times \left(\int_{s'}^{L^{*}} \bm{f}(s'',t) ds''\right) ds' - \int_s^{L^{*}} m(s',t)\,ds' =0, \label{eq:elasticity0}$$ with the elastic stiffness given, following [@gaffney2011mammalian], by $$E(s)= \begin{cases} (E_p^*-E_d^*)\left( \frac{s-s_d^*}{s_d^*}\right)^2 + E_d^* & s \leq s_d^*, \\ E_d^* & s>s_d^*, \end{cases}$$ where parameters $E_p^* = 8\times10^{-21}\,$Nm$^2$, $E_d^*=2.2\times10^{-21}\,$Nm$^2$ and $s_d^*=39\,\mu$m$=3.9\times10^{-5}\,$m have been chosen to model the tapering structure of mammalian sperm flagella and match to experimental stiffness measurements [@Lesich2004; @Lesich2008].
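As a quick illustration, the piecewise stiffness profile $E(s)$ above can be evaluated directly; a minimal sketch with the quoted parameter values in SI units (the function name is ours):

```python
def stiffness(s, E_p=8e-21, E_d=2.2e-21, s_d=39e-6):
    """Tapered flagellar stiffness E(s) in N m^2: decreases quadratically
    from E_p at the base (s = 0) to E_d at s = s_d, constant beyond."""
    if s <= s_d:
        return (E_p - E_d) * ((s - s_d) / s_d) ** 2 + E_d
    return E_d
```

At $s=0$ the quadratic term gives $E_p^*$, and the profile joins the constant distal value $E_d^*$ smoothly at $s=s_d^*$.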
The position vector $\bm{X}=\bm{X}(s,t)$ describes the flagellar waveform at time $t$, so that $\partial_s\bm{X}$ is the tangent vector, and $\bm{e}_3$ is a unit vector pointing perpendicular to the plane of beating. Integrating by parts leads to the elasticity integral equation $$E(s)\,\partial_s \theta(s,t) + \bm{e}_3\cdot\int_s^{L^{*}} (\bm{X}(s',t)-\bm{X}(s,t)) \times \bm{f}(s',t) \, ds' - \int_s^{L^{*}} m(s',t)\,ds' =0. \label{eq:elasticity}$$ The active moment density can be described to a first approximation by a sinusoidal traveling wave $m(s,t)=m_0^* \cos(k^*s-\omega^* t)$, where $k^*$ is wavenumber and $\omega^*$ is radian frequency. The inactive end piece can be modeled by taking the product with a Heaviside function, so that $m(s,t) = m_0^* \cos(k^*s-\omega^* t)H(\ell^* -s)$ where $0<\ell^*\leqslant L^*$ is the length of the active tail segment. At very low Reynolds number, neglecting non-Newtonian influences on the fluid, the hydrodynamics are described by the Stokes flow equations $$-\bm{\nabla}p + \mu^* \nabla^2\bm{u} = \bm{0}, \quad \bm{\nabla}\cdot \bm{u} = 0,$$ where $p=p(\bm{x},t)$ is pressure, $\bm{u}=\bm{u}(\bm{x},t)$ is velocity and $\mu^*$ is dynamic viscosity. These equations are augmented by the no-slip, no-penetration boundary condition ${\bm{u}(\bm{X}(s,t),t)=\partial_t\bm{X}(s,t)}$, i.e. the fluid in contact with the filament moves at the same velocity as the filament. A convenient and accurate numerical method to solve these equations for biological flow problems with deforming boundaries is based on the ‘regularized stokeslet’ [@cortez2001method; @cortez2005method], i.e. 
the solution to the exactly incompressible Stokes flow equations driven by a spatially-concentrated but smoothed force $$-\bm{\nabla}p + \mu^* \nabla^2\bm{u} + \psi_\varepsilon(\bm{x},\bm{y})\bm{e}_3 = 0, \quad \bm{\nabla}\cdot \bm{u} = 0,$$ where $\varepsilon\ll 1$ is a regularization parameter, $\bm{y}$ is the location of the force, $\bm{x}$ is the evaluation point and $\psi_\varepsilon$ is a smoothed approximation to a Dirac delta function. The choice $$\psi_\varepsilon(\bm{x},\bm{y})=15\varepsilon^4/r_\varepsilon^{7},$$ leads to the regularized stokeslet [@cortez2005method] $$S_{ij}^\varepsilon(\bm{x},\bm{y})=\frac{1}{8\pi\mu}\left(\frac{\delta_{ij}(r^2+2\varepsilon^2)+r_ir_j}{r_\varepsilon^3} \right) ,$$ where $r_i=x_i-y_i$, $r^2=r_i r_i$, $r_\varepsilon^2=r^2+\varepsilon^2$. The flow $u_j(\bm{x},t)$ produced by a filament $\bm{X}(s,t)$ exerting force per unit length $\bm{f}(s,t)$ is then given by the line integral $\int_0^{L^*} S_{jk}^\varepsilon(\bm{x},\bm{X}(s,t))f_k(s,t)\,ds$. The flow due to the surface of the sperm head $\partial H$, exerting force per unit area $\bm{\varphi}(\bm{Y},t)$ for $\bm{Y}\in\partial H$, is given by the surface integral $\iint_{\partial H} S_{jk}^\varepsilon(\bm{x},\bm{Y})\varphi_k(\bm{Y})\,dS_{\bm{Y}}$, yielding the boundary integral equation [@smith2009boundaryelement] for the hydrodynamics, namely $$u_j(\bm{x},t)=\int_0^{L^*} S^\varepsilon_{jk}(\bm{x},\bm{X}(s,t))f_k(s,t)\,ds+\iint_{\partial H} S_{jk}^\varepsilon(\bm{x},\bm{Y})\varphi_k(\bm{Y},t)\,dS_{\bm{Y}}. \label{eq:flow}$$ The position and shape of the cell can be described by the location $\bm{X}_0(t)$ of the head-flagellum join and the waveform $\theta(s,t)$, so that the flagellar curve is $$\bm{X}(s,t)=\bm{X}_0(t)+\int_0^s [\cos\theta(s',t),\sin\theta(s',t),0]^T \,ds'. 
\label{eq:geometry}$$ Differentiating with respect to time, the flagellar velocity is then given by $$\bm{u}(\bm{X}(s,t),t)=\dot{\bm{X}}_0(t)+\int_0^s\partial_t\theta(s',t)[-\sin\theta(s',t),\cos\theta(s',t),0]^T \,ds'. \label{eq:kinematic1}$$ Modeling the head as undergoing rigid body motion around the head-flagellum joint, the surface velocity of a point $\bm{Y}\in\partial H$ is given by $$\bm{u}(\bm{Y},t)=\dot{\bm{X}}_0(t)+\partial_t \theta(0,t)\,\bm{e}_3 \times (\bm{Y}-\bm{X}_0). \label{eq:kinematic2}$$ Eqs.  and couple with fluid mechanics (Eq. ), active elasticity (Eq. ), and total force and moment balance across the cell to yield a model for the unknowns $\theta(s,t)$, $\bm{X}_0(t)$, $\bm{f}(s,t)$ and $\bm{\varphi}(\bm{Y},t)$. Non-dimensionalising with lengthscale $L^*$, timescale $1/\omega^*$ and force scale $\mu^*\omega^* {L^*}^2$ yields the equation in scaled variables (dimensionless variables denoted with $\,\hat{}\,$ ) $$\partial_{\hat{s}} \theta(\hat{s},\hat{t})+ \bm{e}_3\cdot\mathcal{S}^4\int_{\hat{s}}^1 (\hat{\bm{X}}(\hat{s}',\hat{t})\,-\hat{\bm{X}}(\hat{s},\hat{t})) \times \hat{\bm{f}}(\hat{s}',\hat{t}) \,d\hat{s}' - \mathcal{M}\int_{\hat{s}}^1 \cos(\hat{k}\hat{s}'-\hat{t})H(\ell-\hat{s}') \,d\hat{s}' =0, \label{eq:elasticityND}$$ where $\mathcal{S}=L^*(\mu^*\omega^*/E^{L})^{1/4}$ is a dimensionless group comparing viscous and elastic forces (related, but not identical to, the commonly-used ‘sperm number’), $\mathcal{M}=m_0^*{L^*}^2/E^{L}$ is a dimensionless group comparing active and elastic forces, and $\ell=\ell^*/L^*$ is the dimensionless length of the active segment. Here $E^L$ is the stiffness at the distal tip of the flagellum ($\hat{s}=1$) and the dimensionless wavenumber is $\hat{k}=k^*L^*$. The problem is numerically discretised as described by Hall-McNair *et al*. [@hall2019efficient], accounting for non-local hydrodynamics via the method of regularized stokeslets [@cortez2005method]. 
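The regularized stokeslet kernel $S^\varepsilon_{ij}$ defined above is straightforward to evaluate numerically. A minimal NumPy sketch (with the viscosity set to 1 for illustration; the function name is ours):

```python
import numpy as np

def reg_stokeslet(x, y, eps, mu=1.0):
    """Regularized stokeslet S^eps_ij(x, y) for the blob
    psi_eps = 15 eps^4 / r_eps^7 (Cortez et al. 2005)."""
    r = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    r2 = float(r @ r)
    r_eps3 = (r2 + eps ** 2) ** 1.5
    # (delta_ij (r^2 + 2 eps^2) + r_i r_j) / (8 pi mu r_eps^3)
    return ((r2 + 2.0 * eps ** 2) * np.eye(3) + np.outer(r, r)) / (
        8.0 * np.pi * mu * r_eps3
    )
```

As $\varepsilon\to 0$ the kernel recovers the classical singular stokeslet $(\delta_{ij}/r + r_i r_j/r^3)/(8\pi\mu)$, while remaining bounded at $\bm{x}=\bm{y}$ for $\varepsilon>0$.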
This framework is modified to take into account the presence of the head via the nearest-neighbor discretisation of Gallagher & Smith [@gallagher2018meshfree]. The head-flagellum coupling is enforced via the dimensionless moment balance boundary condition $$\partial_{\hat{s}}\theta(0,\hat{t}\,)-\bm{e}_3\cdot\mathcal{S}^4\iint_{\partial \hat{H}}(\hat{\bm{Y}}(\hat{t}\,)-\hat{\bm{X}}_0(\hat{t}\,))\times\hat{\bm{\varphi}}(\hat{\bm{Y}},\hat{t}\,)\,dS_{\hat{\bm{Y}}}=0.$$ Note that for the remainder of this letter we will work with the dimensionless model but drop the $\,\hat{}\,$ notation used to represent dimensionless variables for clarity. The initial value problem for the flagellar trajectory, discretised waveform and force distributions is solved in MATLAB using the built-in solver $\mathtt{ode15s}$. At any point in time, the sperm cell’s position and shape can be reconstructed completely from $\bm{X}_0(t)$ and $\theta(s,t)$ through equation (\[eq:geometry\]).\ *Results.* The impact of the length of the inactive end piece on propulsion is quantified by the swimming speed and efficiency. Velocity along a line (VAL) is used as a measure of swimming speed, calculated via $$\text{VAL}^{(j)} = \|\bm{X}_0^{(j)}-\bm{X}_0^{(j-1)}\| / T ,$$ where $T=2\pi$ is the period of the driving wave and $\bm{X}_0^{(j)}$ represents the position of the head-flagellum joint after $j$ periods. Lighthill efficiency [@lighthill1975mathematical] is calculated as $$\eta^{(j)} = \left(\text{VAL}^{(j)}\right)^2 / \,\overline{W}^{(j)},$$ where $\overline{W}^{(j)} = \left< \int_{0}^{1}\bm{u} \cdot \bm{f}\, ds' \right>$ is the average work done by the cell over the $j$^th^ period. In the following, $ j $ is chosen sufficiently large so that the cell has established a regular beat before its statistics are calculated ($j=3$ is sufficient for what follows).
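Both swimming measures are simple to compute from the simulated trajectory; a minimal sketch (function names ours) is:

```python
import numpy as np

def velocity_along_line(X0_prev, X0_curr, T=2.0 * np.pi):
    """VAL over one beat period: net displacement of the head-flagellum
    joint divided by the dimensionless period T = 2*pi."""
    return np.linalg.norm(np.asarray(X0_curr) - np.asarray(X0_prev)) / T

def lighthill_efficiency(val, mean_work):
    """Lighthill efficiency eta = VAL^2 / (average work done per period)."""
    return val ** 2 / mean_work
```

In practice `X0_prev` and `X0_curr` would be the joint positions after $j-1$ and $j$ periods of the solved trajectory, and `mean_work` the period average of $\int_0^1 \bm{u}\cdot\bm{f}\,ds'$.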
![ The effect of the inactive end piece length on swimming speed and efficiency of propulsion at various wavenumbers, along with velocity-optimal waveforms and example data for human sperm. (column 1) Velocity along a line (VAL) versus active length $\ell$ for viscous-elastic parameter choices $\mathcal{S}=18,\,13.5,\,9$, and wavenumbers $k=3\pi,\,4\pi,\,5\pi$; (column 2) Lighthill efficiency versus active length $\ell$ for the same choices of $\mathcal{S}$ and $k$; (column 3) velocity-optimal cell waveforms for each $\mathcal{S}$ and $k$; (column 4) experimental data showing the instantaneous waveform of a human sperm in high viscosity medium, with centerline plotted in purple (tracked with FAST [@gallagher2019rapid]), scale bar denotes 5$\mu$m. []{data-label="fig:results-main"}](results_main_mtg.png){width="90.00000%"} The effects of varying the dimensionless active tail length on sperm swimming speed and efficiency for three choices of dimensionless wavenumber $k$ are shown in Fig. \[fig:results-main\]. Here $ \ell=1 $ corresponds to an entirely active flagellum and $ \ell = 0 $ to an entirely inactive flagellum. Values $ 0.5 \leqslant \ell \leqslant 1 $ are considered so that the resulting simulations produce cells that are likely to be biologically realistic. Higher wavenumbers are considered as they are typical of mammalian sperm flagella in higher viscosity media [@smith2009bend]. Results are calculated by taking $ m_0^* = 0.01 \,\mu^* \omega^* {L^{*}}^2 k / \mathcal{S} $ ($k$ dimensionless, $m_0^*$ dimensional) and hence $ \mathcal{M} = 0.01\, k \mathcal{S}^3 $, the effect of which is to produce waveforms of realistic amplitude across a range of values of $k$ and $\mathcal{S}$. Optimal active lengths for swimming speed, $\ell_{\text{VAL}}$, and efficiency, $\ell_{\eta}$, occur for each parameter pair $(\mathcal{S},k)$; crucially, in all cases considered in Fig. 
\[fig:results-main\], the optima are less than 1, indicating that by either measure some length of inactive flagellum is generally always better than a fully active flagellum. Values of $ \ell_{\text{VAL}} $ and $ \ell_{\eta} $ for the $ (\mathcal{S},k) $ parameter pairs considered in Fig. \[fig:results-main\] are given in Table \[table:lopt\]. Typically $ \ell_{\text{VAL}} \neq \ell_{\eta} $ for a given swimmer. For each metric, optimum active length remains approximately consistent regardless of the choice of $ \mathcal{S} $ when $ k=3\pi $ or $4\pi $. When $k=5\pi$, much higher variability in optimum active length is observed. In Fig. \[fig:results-colormaps\], the relationship between optimum active flagellum length and each of VAL and $ \eta $ is further investigated by simulating cells over a finer gradation in $ \mathcal{S} \in [9,18] $. When $ k=3\pi $ and $ 4\pi $, we again observe that a short inactive distal region is beneficial to the cell regardless of $ \mathcal{S} $. For $ k=5\pi $, there is a clear sensitivity of $ \ell_{\text{VAL}} $ to $ \mathcal{S} $, which is not observed between $ \ell_{\eta} $ and $ \mathcal{S} $. In all cases, the optimum values $ \ell_{\text{VAL}} $ and $ \ell_{\eta} $ are strictly less than 1. ![ Normalized VAL (top row) and normalized Lighthill efficiency (bottom row) values for varying $ 0.5 \leqslant \ell \leqslant 1 $ and $ 9 \leqslant \mathcal{S} \leqslant 18 $, for three values of dimensionless wavenumber $ k $. Values in each subplot are normalized with respect to either the maximum VAL or maximum $ \eta $ for each $ k $. []{data-label="fig:results-colormaps"}](results_colormaps.pdf){width="65.00000%"} The waveform and velocity field associated with flagella that are fully-active and optimally-inactive for propulsion are shown in Fig. \[fig:results-streamlines\]. 
The qualitative features of both the waveform and the velocity field are similar; however, the optimally-inactive flagellar waveform has reduced curvature and tangent angle in the distal region, and increased velocity in both ‘oblique’ regions (i.e. where $\theta\approx\pm\pi/4$).\ ![ Comparison of the normalized flow fields around the end of a simulated sperm for (a) a fully active flagellum, and (b) a flagellum featuring an inactive distal region of length $ 1-\ell_{\text{VAL}} $. The active part of the flagellum is drawn in red and the inactive region in black. Fluid velocity is scaled against the maximum velocity across both frames, with magnitude indicated by the colorbar and direction by the field lines. Here, $ k=4\pi $, $ \mathcal{S}=13.5 $ and $ \ell_{\text{VAL}} = 0.95$. []{data-label="fig:results-streamlines"}](results_streamlines.pdf){width="75.00000%"} *Discussion.* In simulations, we observe that spermatozoa which feature a short, inactive region at the end of their flagellum swim faster and more efficiently than those without. For $k=3\pi $ and $ k=4\pi $, cell motility is optimized when $ \approx 5\% $ of the distal flagellum length is inactive, regardless of $\mathcal{S}$. Experimental measurements of human sperm indicate an average combined length of the midpiece and principal piece of $ \approx 54\mu $m and an average end piece length of $ \approx 3\mu $m [@cummins1985mammalian], suggesting that the effects uncovered here are biologically important. Results for waveforms that are characteristic of those in higher viscosity fluids indicate that in some cases ($k=5\pi$) much longer inactive regions are optimal, up to $ \approx\SI{22}{\micro\meter}/\SI{57}{\micro\meter} $ or $\approx 37\%$ of the flagellum – substantially longer than the end piece observed in human spermatozoa.
Sperm move through a variety of fluids during migration, in particular encountering a step change in viscosity when penetrating the interface between semen and cervical mucus, and having to swim against physiological flows [@Tung2015]. Cells featuring an optimally-sized inactive end piece may form better candidates for fertilization, being able to swim faster and for longer when traversing the female reproductive tract [@holt2015sperm]. The basic mechanism by which the flagellar wave produces propulsion is through the interaction of segments of the filament moving obliquely through the fluid [@gray1955propulsion]. Analysis of the flow field (Fig. \[fig:results-streamlines\]) suggests that the lower curvature associated with the inactive end piece enhances the strength of the interaction between the obliquely moving region and the fluid. At high viscous-elastic ratio and wavenumber, a ‘normal’ flagellar waveform can be produced by a relatively large inactive region of around one wavelength (Fig. \[fig:results-main\]). This effect may have physiological relevance in respect of biochemical energy transport requirements from the mitochondria, which are situated in the midpiece. An inactive region of flagellum is not a feature unique to human gametes: its presence can also be observed in the sperm of other species [@fawcett1975mammalian], as well as in other microorganisms. In particular, the axonemal structures of the bi-flagellated alga *Chlamydomonas reinhardtii* are depleted at the distal tips [@jeanneret2016], suggesting the presence of an inactive region. The contribution to swimming speed and cell efficiency due to the inactive distal section in these cases remains unknown. By contrast, the tip of the “9+2” cilium is a more organized “crown” structure [@kuhn1978structure], which will interact differently with fluid than the flagellar end piece modeled here.
Understanding this distinction between cilia and flagella, as well as the role of the inactive region in other microorganisms, may provide further insight into underlying biological phenomena, such as chemotaxis [@ALVAREZ2014198] and synchronization [@guo2018bistability; @goldstein2016elastohydrodynamic]. Further work should investigate how this phenomenon changes when more detailed models of the flagellar ultrastructure are considered, taking into account the full “9+2” structure [@ishijima2019modulatory], sliding resistance associated with filament connections [@coy2017counterbend], and the interplay of these factors with biochemical signalling in the cell [@carichino2018]. The ability to qualitatively assess and model the inactive end piece of a human spermatozoon could have important clinical applications. In live imaging for diagnostic purposes, the end piece is often hard to resolve due to its depleted axonemal structure. Lacking more sophisticated imaging techniques, which are often expensive or impractical in a clinical environment, modeling of the end piece combined with flagellar tracking software, such as FAST [@gallagher2019rapid], could enable more accurate sperm analysis, and help improve cell selection in assisted reproductive technologies. Furthermore, knowledge of the function of an inactive distal region has wider applications across synthetic microbiology, particularly in the design and fabrication of artificial swimmers [@dreyfus2005microscopic] and flexible filament microbots used in targeted drug delivery [@montenegro2018microtransformers].\ *Summary and Conclusions.* In this letter, we have revealed the propulsive advantage conferred by an inactive distal region of a unipolar “pusher” actuated elastic flagellum, characteristic of mammalian sperm. The optimal inactive flagellum length depends on the balance between elastic stiffness and viscous resistance, and the wavenumber of actuation. 
The optimal inactive fraction mirrors that seen in human sperm ($\approx 3\,\mu$m$/57\,\mu$m, or $ \approx 5\% $). These findings have a range of potential applications. They motivate the development of new methodology for improving the analysis of flagellar imaging data; by model fitting the experimentally visible region it may be possible to resolve the difficult-to-image distal segment. Inclusion of an inactive region may be an interesting avenue to explore when improving the efficiency of artificial microswimmer design. Finally, important biological questions may now be posed, for example, does the presence of the inactive end piece confer an advantage to cells penetrating highly viscous cervical mucus?\ *Acknowledgments.* D.J.S. and M.T.G. acknowledge funding from the Engineering and Physical Sciences Research Council (EPSRC) Healthcare Technologies Award (EP/N021096/1). C.V.N. and A.L.H-M. acknowledge support from the EPSRC for funding via PhD scholarships (EP/N509590/1). J.C.K-B. acknowledges support from the National Institute for Health Research (NIHR) U.K. The authors also thank Hermes Gadêlha (University of Bristol, UK) and Thomas D. Montenegro-Johnson (University of Birmingham, UK) for stimulating discussions around elastohydrodynamics, and Gemma Cupples (University of Birmingham, UK) for the experimental still in Fig. \[fig:results-main\] (provided by a donor recruited at Birmingham Women’s and Children’s NHS Foundation Trust after giving informed consent). [10]{} C.R. Austin. Evolution of human gametes: spermatozoa. , 1995:1–19, 1995. D.W. Fawcett. The mammalian spermatozoon. , 44(2):394–436, 1975. M. Werner and L.W. Simmons. Insect sperm motility. , 83(2):191–208, 2008. P. S. Mafunda, L. Maree, A. Kotze, and G. van der Horst. Sperm structure and sperm motility of the African and rockhopper penguins with special reference to multiple axonemes of the flagellum. , 99:1–9, 2017. D. R. Nelson, R. Guidetti, and L. Rebecchi. Tardigrada. 
In [*Ecology and classification of North American freshwater invertebrates*]{}, pages 455–484. Elsevier, 2010. W. A. Anderson, P. Personne, and A. BA. The form and function of spermatozoa: a comparative view. , pages 3–13, 1975. J.M. Cummins and P.F. Woodall. On mammalian sperm dimensions. , 75(1):153–175, 1985. A. Bjork and S. Pitnick. Intensity of sexual selection along the anisogamy–isogamy continuum. , 441(7094):742, 2006. K.E. Machin. Wave propagation along flagella. , 35(4):796–806, 1958. D. Zabeo, J.T. Croft, and J.L. H[ö]{}[ö]{}g. Axonemal doublet microtubules can split into two complete singlets in human sperm flagellum tips. , 593(9):892–902, 2019. E.A. Gaffney, H. Gad[ê]{}lha, D.J. Smith, J.R. Blake, and J.C. Kirkman-Brown. Mammalian sperm motility: observation and theory. , 43:501–528, 2011. E. Lauga and T.R. Powers. The hydrodynamics of swimming microorganisms. , 72(9):096601, 2009. V. Ierardi, A. Niccolini, M. Alderighi, A. Gazzano, F. Martelli, and R. Solaro. characterization of rabbit spermatozoa. , 71(7):529–535, 2008. D.J. Smith, E.A. Gaffney, H. Gad[ê]{}lha, N. Kapur, and J.C. Kirkman-Brown. Bend propagation in the flagella of migrating human sperm, and its modulation by viscosity. , 66(4):220–236, 2009. M.T. Gallagher, D.J. Smith, and J.C. Kirkman-Brown. : tracking the past and plotting the future. , 30(6):867–874, 2018. D.M. Woolley. Motility of spermatozoa at surfaces. , 126(2):259–270, 2003. C. Moreau, L. Giraldi, and H. Gad[ê]{}lha. The asymptotic coarse-graining formulation of slender-rods, bio-filaments and flagella. , 15(144):20180235, 2018. A.L. Hall-McNair, T.D. Montenegro-Johnson, H. Gad[ê]{}lha, D.J. Smith, and M.T. Gallagher. Efficient [I]{}mplementation of [E]{}lastohydrodynamics via [I]{}ntegral [O]{}perators. , 2019. K. Lesich and C. Lindemann. Direct measurement of the passive stiffness of rat sperm and implications to the mechanism of the calcium response. , 59:169–79, 11 2004. K. Lesich, D. Pelle, and C. Lindemann. 
Insights into the [M]{}echanism of [ADP]{} [A]{}ction on [F]{}lagellar [M]{}otility [D]{}erived from [S]{}tudies on [B]{}ull [S]{}perm. , 95:472–82, 08 2008. R. Cortez. The method of regularized [S]{}tokeslets. , 23(4):1204–1225, 2001. R. Cortez, L. Fauci, and A. Medovikov. The method of regularized [S]{}tokeslets in three dimensions: analysis, validation, and application to helical swimming. , 17(3):031504, 2005. D. J. Smith. A boundary element regularized stokeslet method applied to cilia- and flagella-driven flow. , 465(2112):3605–3626, 2009. M. T. Gallagher and D. J. Smith. Meshfree and efficient modeling of swimming cells. , 3(5):053101, 2018. J. Lighthill. . SIAM, 1975. M.T. Gallagher, G. Cupples, E.H Ooi, J.C Kirkman-Brown, and D.J. Smith. Rapid sperm capture: high-throughput flagellar waveform analysis. , 34(7):1173–1185, 06 2019. C-k. Tung, F. Ardon, A. Roy, D. L. Koch, S. Suarez, and M. Wu. Emergence of upstream swimming via a hydrodynamic transition. , 114:108102, 2015. W. V. Holt and A. Fazeli. Do sperm possess a molecular passport? mechanistic insights into sperm selection in the female reproductive tract. , 21(6):491–501, 2015. J. Gray and G.J. Hancock. The propulsion of sea-urchin spermatozoa. , 32(4):802–814, 1955. R. Jeanneret, M. Contino, and M. Polin. A brief introduction to the model microswimmer [*Chlamydomonas reinhardtii*]{}. , 225, 06 2016. C. Kuhn III and W. Engleman. The structure of the tips of mammalian respiratory cilia. , 186(3):491–498, 1978. L. Alvarez, B.M. Friedrich, G. Gompper, and K. U.Benjamin. The computational sperm cell. , 24(3):198–207, 2014. H. Guo, L. Fauci, M. Shelley, and E. Kanso. Bistability in the synchronization of actuated microfilaments. , 836:304–323, 2018. R. E. Goldstein, E. Lauga, A. I. Pesci, and M.R.E Proctor. Elastohydrodynamic synchronization of adjacent beating flagella. , 1(7):073201, 2016. S. Ishijima. 
Modulatory mechanisms of sliding of nine outer doublet microtubules for generating planar and half-helical flagellar waves. , 25(6):320–328, 2019. R. Coy and H. Gad[ê]{}lha. The counterbend dynamics of cross-linked filament bundles and flagella. , 14(130):20170065, 2017. L. Carichino and S. D. Olson. Emergent three-dimensional sperm motility: coupling calcium dynamics and preferred curvature in a kirchhoff rod model. , 2018. R. Dreyfus, J. Baudry, M.L. Roper, M. Fermigier, H.A. Stone, and J. Bibette. Microscopic artificial swimmers. , 437(7060):862, 2005. T.D. Montenegro-Johnson. Microtransformers: [C]{}ontrolled microscale navigation with flexible robots. , 3(6):062201, 2018.
--- abstract: 'This paper proves the asymptotic normality of a statistic for detecting the existence of heteroscedasticity in linear regression models without assuming randomness of the covariates, when the sample size $n$ tends to infinity and the number of covariates $p$ is either fixed or tends to infinity. Moreover, our approach indicates that this asymptotic normality holds even without homoscedasticity.' address: - 'KLASMOE and School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R.C., 130024.' - 'School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, 637371' - 'KLASMOE and School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R.C., 130024.' author: - Zhidong Bai - Guangming Pan - Yanqing Yin (Corresponding author) title: 'Homoscedasticity tests for both low and high-dimensional fixed design regressions' --- Introduction ============ A brief review of homoscedasticity test --------------------------------------- Consider the classical multivariate linear regression model of $p$ covariates $$\begin{aligned} y_i={\mathbf}x_i{\mathbf}\beta+{\mathbf}\varepsilon_i,\ \ \ \ \ i=1,2,\cdots,n,\end{aligned}$$ where $y_i$ is the response variable, ${\mathbf}x_i=(x_{i,1},x_{i,2},\cdots,x_{i,p})$ is the $p$-dimensional covariate vector, $\beta={\left(}\beta_1,\beta_2,\cdots,\beta_p{\right)}'$ is the $p$-dimensional regression coefficient vector, and the $\varepsilon_i$ are independent random errors with zero mean and variance $\sigma_i^2$. In most applications of linear regression models, homoscedasticity is a very important assumption. 
Without it, the loss in efficiency from using ordinary least squares (OLS) may be substantial and, even worse, the biases in estimated standard errors may lead to invalid inferences. Thus, it is very important to examine homoscedasticity. Formally, we need to test the hypothesis $$\label{a1} H_0: \ \sigma_1^2=\sigma_2^2=\cdots=\sigma_n^2=\sigma^2,$$ where $\sigma^2$ is a positive constant. In the literature there is a large body of work on this hypothesis test when the dimension $p$ is fixed, and many popular tests have been proposed. For example, in economics, Breusch and Pagan [@breusch1979simple] and White [@white1980heteroskedasticity] proposed statistics to investigate the relationship between the estimated errors and the covariates, while in statistics, Dette and Munk [@dette1998estimating], Glejser [@glejser1969new], Harrison and McCabe [@harrison1979test], Cook and Weisberg [@cook1983diagnostics], and Azzalini and Bowman [@azzalini1993use] proposed nonparametric statistics to test the hypothesis. One may refer to Li and Yao [@li2015homoscedasticity] for more details in this regard. The development of computer science has made it possible to collect and deal with high-dimensional data. As a consequence, high-dimensional linear regression problems are becoming more and more common due to widely available covariates. Note that the above-mentioned tests are all developed under the low-dimensional framework, where the dimension $p$ is fixed and the sample size $n$ tends to infinity. In their paper, Li and Yao proposed two test statistics in the high-dimensional setting by using the regression residuals. The first statistic uses the idea of the likelihood ratio and the second uses the idea that “the departure of a sequence of numbers from a constant can be efficiently assessed by its coefficient of variation”, which is closely related to John’s idea [@john1971some]. 
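The coefficient-of-variation idea can be illustrated with a small numerical sketch (a toy example of our own, with arbitrary sample sizes, variances and seed; it is not taken from Li and Yao's paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def cv2(v):
    # squared coefficient of variation: Var(v) / E(v)^2,
    # a scale-free measure of the departure of v from a constant
    return v.var() / v.mean() ** 2

# squared errors under homoscedasticity (sigma_i = 1 for all i)
homo = rng.normal(size=5000) ** 2
# squared errors under heteroscedasticity (half sigma_i = 1, half sigma_i = 2)
sigma = np.repeat([1.0, 2.0], 2500)
hetero = (sigma * rng.normal(size=5000)) ** 2

# the departure from constancy is markedly larger under heteroscedasticity
print(cv2(homo), cv2(hetero))
```

For normal errors the squared coefficient of variation of the squared errors is close to $2$ in the homoscedastic case and close to $3.08$ in this heteroscedastic case, which is exactly the kind of separation the test exploits.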
Assuming that the covariates are distributed as ${\mathbf}N({\mathbf}0, {\mathbf}I_p)$ and that the errors are normally distributed, the “coefficient of variation” statistic turns out to be a function of the residuals. However, its asymptotic distribution misses a term, as indicated by the proof of Lemma 1 in [@li2015homoscedasticity], even in the random design. The aim of this paper is to establish a central limit theorem for the “coefficient of variation” statistic without assuming randomness of the covariates, by using the information in the projection matrix (the hat matrix). This ensures that the test works whether the design matrix is fixed or random. More importantly, we prove that the asymptotic normality of this statistic holds even without homoscedasticity, which assures a high power of the test. The structure of this paper is as follows. Section 2 gives our main theorem and some simulation results, as well as two real data analyses. Some calculations and the proof of the asymptotic normality are presented in Section 3. Main Theorem, Simulation Results and Real Data Analysis ======================================================= The Main Theorem ---------------- Suppose that the parameter vector $\beta$ is estimated by the OLS estimator $$\hat{ {\beta}}={\left(}{\mathbf}X'{\mathbf}X{\right)}^{-1}{\mathbf}X'{\mathbf}Y.$$ Denote the residuals by $$\hat{{\mathbf}\varepsilon}={\left(}\hat{{\mathbf}\varepsilon_1},\hat{{\mathbf}\varepsilon_2},\cdots,\hat{{\mathbf}\varepsilon_n}{\right)}'={\mathbf}Y-{\mathbf}X\hat \beta={\mathbf}P\varepsilon,$$ with ${\mathbf}P=(p_{ij})_{n\times n}={\mathbf}I_n-{\mathbf}X({\mathbf}X'{\mathbf}X)^{-1}{\mathbf}X'$ and $\varepsilon={\left(}\varepsilon_1,\varepsilon_2,\cdots,\varepsilon_n{\right)}'$. 
Let ${\mathbf}D$ be an $n\times n$ diagonal matrix with its $i$-th diagonal entry being $\sigma_i$, set ${\mathbf}A=(a_{ij})_{n\times n}={\mathbf}P{\mathbf}D$ and let $\xi={\left(}\xi_1,\xi_2,\cdots,\xi_n{\right)}'$ stand for an $n$-dimensional random vector whose i.i.d. entries have the common distribution of the standardized errors $\varepsilon_i/\sigma_i$. It follows that the distribution of $\hat \varepsilon$ is the same as that of ${\mathbf}A\xi$. In the following, we use ${{\rm Diag}}{\left(}{\mathbf}B{\right)}={\left(}b_{1,1},b_{2,2},\cdots,b_{n,n}{\right)}'$ to stand for the vector formed by the diagonal entries of ${\mathbf}B$ and ${{\rm Diag}}'{\left(}{\mathbf}B{\right)}$ for its transpose, use ${\mathbf}D_{{\mathbf}B}$ to stand for the diagonal matrix of ${\mathbf}B$, and use ${\mathbf}1$ to stand for the vector ${\left(}1,1,\cdots,1{\right)}'$. Consider the following statistic $$\label{a2} {\mathbf}T=\frac{\sum_{i=1}^n{\left(}\hat{\varepsilon_i}^2-\frac{1}{n}\sum_{i=1}^n\hat{\varepsilon_i}^2{\right)}^2}{\frac{1}{n}{\left(}\sum_{i=1}^n\hat{\varepsilon_i}^2{\right)}^2}.$$ We below use ${\mathbf}A {\circ}{\mathbf}B$ to denote the Hadamard product of two matrices ${\mathbf}A$ and ${\mathbf}B$ and use ${\mathbf}A ^{{\circ}k}$ to denote the Hadamard product of $k$ copies of ${\mathbf}A$. \[th1\] Under the conditions that the distribution of $\varepsilon_1$ is symmetric, ${{\rm E}}|\varepsilon_1|^8 < \infty$ and $p/n\to y\in [0,1)$ as $n\to \infty$, we have $$\frac{{\mathbf}T-a}{\sqrt b}\stackrel{d}{\longrightarrow}{\mathbf}N(0,1)$$ where $a$ and $b$ are determined by $n$, $p$ and ${\mathbf}A$. 
Under $H_0$, we further have $$a={\left(}\frac{n{\left(}3{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^2{\right)}}{{\left(}{\left(}n-p{\right)}^2+2{\left(}n-p{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P){\right)}}-1{\right)},\ b=\Delta'\Theta\Delta,$$ where $$\Delta'=(\frac{n}{{\left(}{\left(}n-p{\right)}^2+2{\left(}n-p{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P){\right)}},-\frac{n^2{\left(}3{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^2{\right)}}{{{\left(}{\left(}n-p{\right)}^2+2{\left(}n-p{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P){\right)}}^2})$$ and $$\Theta=\left( \begin{array}{cc} \Theta_{11} & \Theta_{12} \\ \Theta_{21} & \Theta_{22} \\ \end{array} \right),$$ where $$\begin{aligned} \Theta_{11}=&72{{\rm Diag}}'({\mathbf}P) {\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}{{\rm Diag}}({\mathbf}P)+24{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}^2\\\notag &+\nu_4{\left(}96{{\rm tr}}{\mathbf}P {\mathbf}{D_P} {\mathbf}P {\mathbf}P^{{\circ}3}+72{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^3+36{{\rm Diag}}'({\mathbf}P) {\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}^2{{\rm Diag}}({\mathbf}P) {\right)}\\\notag &+\nu^2_4{\left(}18{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^4+16{{\rm tr}}({\mathbf}P^{{\circ}3}{\mathbf}P)^2{\right)}\\\notag &+\nu_6{\left(}12{{\rm tr}}{\left(}{\left(}{\mathbf}P{\mathbf}D_{{\mathbf}P}{\mathbf}P {\right)}{\circ}{\left(}{\mathbf}P^{{\circ}2}{\mathbf}P^{{\circ}2}{\right)}{\right)}+16{{\rm tr}}{\mathbf}P {\mathbf}P^{{\circ}3}{\mathbf}P^{{\circ}3}{\right)}+\nu_8{\mathbf}1'({\mathbf}P^{{\circ}4}{\mathbf}P^{{\circ}4}){\mathbf}1,\end{aligned}$$ $$\begin{aligned} \Theta_{22}=\frac{8{\left(}n-p{\right)}^3+4\nu_4{\left(}n-p{\right)}^2{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)}{n^2},\end{aligned}$$ $$\begin{aligned} &\Theta_{12}=\Theta_{21}\\\notag =&\frac{{\left(}n-p{\right)}}{n}{\left(}24{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+16\nu_4{{\rm tr}}({\mathbf}P {\mathbf}P^{{\circ}3})+12\nu_4{{\rm tr}}{\left(}{\left(}{\mathbf}P{\mathbf}D_{{\mathbf}P}{\mathbf}P{\right)}{\circ}{\mathbf}P{\right)}+2\nu_6[{{\rm Diag}}({\mathbf}P)'({\mathbf}P^{{\circ}4}){\mathbf}1]{\right)},\end{aligned}$$ where $\nu_4=M_4-3$, $\nu_6=M_6-15 M_4+30$ and $\nu_8=M_8-28 M_6-35M_4^2+420M_4-630$ are the corresponding cumulants of the random variable $\varepsilon_1$. The existence of the 8-th moment is necessary because it determines the asymptotic variance of the statistic. The explicit expressions of $a$ and $b$ are given in Theorem \[th1\] under $H_0$. Under $H_1$ the explicit expressions of $a$ and $b$ are quite complicated; nevertheless, one may obtain them from (\[e1\])-(\[e2\]) and (\[t10\])-(\[ct12\]) below. In their paper, under the condition that the distribution of $\varepsilon$ is normal, Li and Yao also conducted simulations with non-Gaussian design matrices. Specifically, they investigated the test when the entries of the design matrices are drawn from the gamma distribution $G(2,2)$ and the uniform distribution $U(0,1)$, respectively. There is no significant difference in terms of size and power between these two non-normal designs and the normal design. This suggests that the proposed test is robust against the form of the distribution of the design matrix. According to our main theorem, however, this is not always the case. From our main theorem, one can see that when the error $\varepsilon$ is normally distributed, under $H_0$ and for given $p$ and $n$, the expectation of the statistic is determined solely by ${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)$. We conduct some simulations to investigate the influence of the distribution of the design matrix on this term when $n=1000$ and $p=200$. The simulation results are presented in Table \[table1\]. 
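The influence of the design distribution on ${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)$ is easy to explore numerically. A minimal sketch (illustrative only; the seed and the use of a thin QR factorization for the leverages are our own choices, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, 200

def tr_P_circ_P(X):
    # tr(P o P) = sum_i p_ii^2, where p_ii = 1 - h_ii and the h_ii are the
    # leverages (diagonal of the hat matrix X (X'X)^{-1} X'), via a thin QR
    Q, _ = np.linalg.qr(X)
    h = np.sum(Q ** 2, axis=1)
    return np.sum((1.0 - h) ** 2)

t_normal = tr_P_circ_P(rng.normal(size=(n, p)))
t_lognormal = tr_P_circ_P(np.exp(rng.normal(5.0, 3.0, size=(n, p))) / 100)

# (n - p)^2 / n = 640 is a lower bound (Cauchy-Schwarz), attained for
# balanced leverages; heavy-tailed designs push tr(P o P) above it
print(t_normal, t_lognormal)
```

For a Gaussian design the value sits near the lower bound $640$, while for the heavy-tailed $\exp(N(5,3))/100$ design it is markedly larger, in line with Table \[table1\].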
                                             $N(0,1)$   $G(2,2)$   $U(0,1)$   $F(1,2)$   $exp(N(5,3))/100$
------------------------------------------- ---------- ---------- ---------- ---------- -------------------
${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)$     640.3      640.7      640.2      712.5      708.3

: The value of ${{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)$ corresponding to different design distributions[]{data-label="table1"}

This suggests that even if the entries of the design matrix are drawn from some common distribution, the expectation of the statistic may deviate far from that of the normal case, which would cause a wrong test result. Moreover, even in the normal case, our result is more accurate since we do not use any approximate value in the mean of the statistic ${\mathbf}T$. Let us take an example to explain why this test works. For convenience, suppose that $\varepsilon_1$ is normally distributed. From the calculation in Section \[exp\] we know that the expectation of the statistic ${\mathbf}T$ defined in (\[a2\]) can be represented as $${{\rm E}}{\mathbf}T=\frac{3n\sum_{i=1}^np_{ii}^2\sigma_i^4}{(\sum_{i=1}^np_{ii}\sigma_i^2)^2}-1+o(1).$$ Now assume that $p_{ii}=\frac{n-p}{n}$ for all $i=1,\cdots,n$. Moreover, without loss of generality, suppose that $\sigma_1=\cdots=\sigma_n=1$ under $H_0$, so that we get ${{\rm E}}{\mathbf}T\to 2$ as $n \to \infty$. However, when $\sigma_1=\cdots=\sigma_{[n/2]}=1$ and $\sigma_{[n/2]+1}=\cdots=\sigma_{n}=2$, one may obtain $${{\rm E}}{\mathbf}T\to 3\cdot\frac{(1+16)/2}{((1+4)/2)^2}-1=3.08 \quad \text{as } n\to \infty.$$ Since ${\rm Var}({\mathbf}T)=O(n^{-1})$, this ensures a high power as long as $n$ is large enough. Some simulation results ----------------------- We next conduct some simulations to investigate the performance of our test statistic. First, we consider the case where the random errors are normally distributed. Table \[table2\] shows the empirical size compared with Li and Yao’s result in [@li2015homoscedasticity] under four different design distributions. 
We use $``{\rm CVT}"$ and $``{\rm FCVT}"$ to represent their test and our test, respectively. The entries of the design matrices are i.i.d. random samples generated from $N(0,1)$, $t(1)$ (the $t$ distribution with 1 degree of freedom), $F(3,2)$ (the $F$ distribution with parameters 3 and 2) and the log-normal distribution, respectively. The sample size $n$ is 512 and the dimension of covariates varies from 4 to 384. We also follow [@dette1998testing] and consider the following two models: Model 1 : $y_i={\mathbf}x_i{\mathbf}\beta+{\mathbf}\varepsilon_i(1+{\mathbf}x_i {\mathbf}h),\ \ \ \ \ i=1,2,\cdots,n$,\ where ${\mathbf}h=(1,{\mathbf}0_{(p-1)})$, Model 2 : $y_i={\mathbf}x_i{\mathbf}\beta+{\mathbf}\varepsilon_i(1+{\mathbf}x_i {\mathbf}h),\ \ \ \ \ i=1,2,\cdots,n $\ where ${\mathbf}h=({\mathbf}1_{(p/2)},{\mathbf}0_{(p/2)})$. Tables \[table3\] and \[table4\] show the empirical power compared with Li and Yao’s results under the four different regressor distributions mentioned above. Next, we consider the case where the random errors follow a two-point distribution. Specifically, we suppose $P(\varepsilon_1=-1)=P(\varepsilon_1=1)=1/2$. Since Li and Yao’s result is inapplicable in this situation, Table \[table5\] shows only the empirical size and the empirical power under Model 2 of our test under the four regressor distributions mentioned above. According to the simulation results, when $p/n\to y\in [0,1)$ as $n\to \infty$, our test always has good size and power under all regressor distributions. 
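As a sketch of how the statistic ${\mathbf}T$ in (\[a2\]) separates $H_0$ from Model 1 in such simulations (illustrative parameters and seed of our own choosing; this is not the code used to produce the tables):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 512, 64
X = rng.normal(size=(n, p))
P = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)  # projection matrix

def T_stat(eps):
    # since P X beta = 0, the residuals equal P eps and beta drops out
    r2 = (P @ eps) ** 2
    return np.sum((r2 - r2.mean()) ** 2) / (r2.sum() ** 2 / n)

T_null = T_stat(rng.normal(size=n))                 # H0: sigma_i = 1
h = np.zeros(p)
h[0] = 1.0                                          # Model 1 direction
T_alt = T_stat(rng.normal(size=n) * (1.0 + X @ h))  # Model 1 errors

# under H0 the statistic concentrates near 2; under Model 1 it is much larger
print(T_null, T_alt)
```

Rejecting for large standardized values of ${\mathbf}T$ then distinguishes the two regimes, which is why the empirical power in the tables is essentially 1 for moderate $p$.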
       N(0,1)            t(1)             $F(3,2)$         $e^{(N(5,3))}$
----- -------- -------- -------- -------- -------- -------- -------- --------
  p     FCVT     CVT      FCVT     CVT      FCVT     CVT      FCVT     CVT
  4    0.0582   0.0531   0.0600   0.0603   0.0594   0.0597   0.0590   0.0594
 16    0.0621   0.0567   0.0585   0.0805   0.0585   0.0824   0.0595   0.0803
 64    0.0574   0.0515   0.0605   0.2245   0.0586   0.2312   0.0578   0.2348
128    0.0597   0.0551   0.0597   0.5586   0.0568   0.5779   0.0590   0.5934
256    0.0551   0.0515   0.0620   0.9868   0.0576   0.9908   0.0595   0.9933
384    0.0580   0.0556   0.0595   1.0000   0.0600   1.0000   0.0600   1.0000

: empirical size under different distributions[]{data-label="table2"}

       N(0,1)            t(1)             $F(3,2)$         $e^{(N(5,3))}$
----- -------- -------- -------- -------- -------- -------- -------- --------
  p     FCVT     CVT      FCVT     CVT      FCVT     CVT      FCVT     CVT
  4    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
 16    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
 64    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
128    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
256    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
384    0.8113   0.8072   0.9875   1.0000   0.9876   1.0000   0.9905   1.0000

: empirical power under model 1[]{data-label="table3"}

       N(0,1)            t(1)             $F(3,2)$         $e^{(N(5,3))}$
----- -------- -------- -------- -------- -------- -------- -------- --------
  p     FCVT     CVT      FCVT     CVT      FCVT     CVT      FCVT     CVT
  4    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
 16    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
 64    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
128    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
256    1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000   1.0000
384    0.9066   0.9034   0.9799   1.0000   0.9445   1.0000   0.8883   1.0000

: empirical power under model 2[]{data-label="table4"}

       N(0,1)            t(1)             $F(3,2)$         $e^{(N(5,3))}$
----- -------- -------- -------- -------- -------- -------- -------- --------
  p     Size     Power    Size     Power    Size     Power    Size     Power
  4    0.0695   1.0000   0.0726   1.0000   0.0726   1.0000   0.0664   1.0000
 16    0.0695   1.0000   0.0638   1.0000   0.0706   1.0000   0.0556   1.0000
 64    0.0646   1.0000   0.0606   1.0000   0.0649   1.0000   0.0622   1.0000
128    0.0617   1.0000   0.0705   1.0000   0.0597   1.0000   0.0630   1.0000
256    0.0684   1.0000   0.0685   1.0000   0.0608   1.0000   0.0649   1.0000
384    0.0610   0.8529   0.0748   1.0000   0.0758   1.0000   0.0742   1.0000

: empirical size and power under different distributions[]{data-label="table5"}

Two Real Data Analyses ---------------------- ### The Death Rate Data Set In [@mcdonald1973instabilities], the authors fitted a multiple linear regression of the total age adjusted mortality rate on 15 other variables (the average annual precipitation, the average January temperature, the average July temperature, the size of the population older than 65, the number of members per household, the number of years of schooling for persons over 22, the number of households with fully equipped kitchens, the population per square mile, the size of the nonwhite population, the number of office workers, the number of families with an income less than \$3000, the hydrocarbon pollution index, the nitric oxide pollution index, the sulfur dioxide pollution index and the degree of atmospheric moisture). The number of observations is 60. To investigate whether the homoscedasticity assumption in this model is justified, we applied our test and obtained a p-value of 0.4994, which strongly supports the assumption of constant variance in this model, since we use a one-sided test. The data set is available at <http://people.sc.fsu.edu/~jburkardt/datasets/regression/regression.html>. ### The 30-Year Conventional Mortgage Rate Data Set The 30-Year Conventional Mortgage Rate data set [@Mortgage] contains weekly economic data for the USA from 01/04/1980 to 02/04/2000 (1049 samples). The goal is to predict the 30-Year Conventional Mortgage Rate from 15 other features. We fitted a multiple linear regression to this data set and obtained a good result. 
The adjusted R-squared is 0.9986 and the p-value of the overall F-test is 0. Our homoscedasticity test reported a p-value of 0.4439. Proof Of The Main Theorem ========================= This section proves the main theorem. The first step is to establish the asymptotic normality of ${\mathbf}T_1$, ${\mathbf}T_2$ and $\alpha{\mathbf}T_1+\beta{\mathbf}T_2$ with $\alpha^2+\beta^2 \neq 0$ by the moment convergence theorem. Next we calculate the expectations, variances and covariance of the statistics ${\mathbf}T_1=\sum_{i=1}^n\hat{\varepsilon_i}^4$ and ${\mathbf}T_2=\frac{1}{n}{\left(}\sum_{i=1}^n\hat{\varepsilon_i}^2{\right)}^2$. The main theorem then follows by the delta method. Note that, without loss of generality, under $H_0$ we can assume that $\sigma=1$. The asymptotic normality of the statistics. {#clt} ------------------------------------------- We start by giving a definition from graph theory. A graph ${\mathbf}G={\left(}{\mathbf}V,{\mathbf}E,{\mathbf}F{\right)}$ is called two-edge connected if the removal of any single edge from $G$ leaves the resulting subgraph connected. The next lemma is a fundamental theorem for Graph-Associated Multiple Matrices, stated without proof. For the details of this theorem, one can refer to Section A.4.2 in [@bai2010spectral]. \[lm2\] Suppose that ${\mathbf}G={\left(}{\mathbf}V,{\mathbf}E, {\mathbf}F{\right)}$ is a two-edge connected graph with $t$ vertices and $k$ edges. Each vertex $i$ corresponds to an integer $m_i \geq 2$ and each edge $e_j$ corresponds to a matrix ${\mathbf}T^{(j)}={\left(}t_{\alpha,\beta}^{(j)}{\right)},\ j=1,\cdots,k$, with consistent dimensions, that is, if $F(e_j)=(f_i(e_j),f_e(e_j))=(g,h),$ then the matrix ${\mathbf}T^{{\left(}j{\right)}}$ has dimensions $m_g\times m_h$. 
Define ${\mathbf}v=(v_1,v_2,\cdots,v_t)$ and $$\begin{aligned} T'=\sum_{{\mathbf}v}\prod_{j=1}^kt_{v_{f_i(e_j)},v_{f_e(e_j)}}^{(j)}, \end{aligned}$$ where the summation $\sum_{{\mathbf}v}$ is taken for $v_i=1,2,\cdots, m_i, \ i=1,2,\cdots,t.$ Then for any $i\leq t$, we have $$|T'|\leq m_i\prod_{j=1}^k\|{\mathbf}T^{(j)}\|.$$ Let $\mathcal{T}=({\mathbf}T^{(1)},\cdots,{\mathbf}T^{(k)})$ and define $G(\mathcal{T})=(G,\mathcal{T})$ as a Graph-Associated Multiple Matrices. Write $T'=sum(G(\mathcal{T}))$, which is referred to as the summation of the corresponding Graph-Associated Multiple Matrices. We also need the following truncation lemma. \[lm3\] Suppose that $\xi_n={\left(}\xi_1,\cdots,\xi_n{\right)}$ is an i.i.d. sequence with ${{\rm E}}|\xi_1|^r < \infty$. Then there exists a sequence of positive numbers $(\eta_1,\cdots,\eta_n)$ such that, as $n \to \infty$, $\eta_n \to 0$ and $$P(\xi_n\neq\widehat \xi_n,\ {\rm i.o.})=0,$$ where $\widehat \xi_n={\left(}\xi_1 I(|\xi_1|\leq \eta_n n^{1/r}),\cdots,\xi_n I(|\xi_n|\leq \eta_n n^{1/r}){\right)}.$ Moreover, the convergence rate of $\eta_n$ can be slower than any preassigned rate. ${{\rm E}}|\xi_1|^r < \infty$ implies that for any $\epsilon>0$, we have $$\sum_{m=1}^\infty 2^{2m}P(|\xi_1|\geq\epsilon2^{2m/r})< \infty.$$ Then there exists a sequence of positive numbers $\epsilon=(\epsilon_1,\cdots,\epsilon_m)$ such that $$\sum_{m=1}^\infty 2^{2m}P(|\xi_1|\geq\epsilon_m2^{2m/r})< \infty,$$ and $\epsilon_m \to 0$ as $m \to \infty$. Moreover, the convergence rate of $\epsilon_m$ can be slower than any preassigned rate. 
Now, define $\eta_n=2^{1/r}\epsilon_m$ for $2^{2m-1}\leq n\leq 2^{2m}$; then, as $n\to \infty$, $$\begin{aligned} P(\xi_n\neq\widehat \xi_n,\ {\rm i.o.})\leq &\lim_{k\to \infty}\sum_{m=k}^{\infty}P\Big(\bigcup_{2^{2m-1}\leq n\leq 2^{2m}}\bigcup_{i=1}^n{\left(}|\xi_i|\geq\eta_nn^{1/r}{\right)}\Big)\\\notag \leq&\lim_{k\to \infty}\sum_{m=k}^{\infty}P\Big(\bigcup_{2^{2m-1}\leq n\leq 2^{2m}}\bigcup_{i=1}^{2^{2m}}{\left(}|\xi_i|\geq\epsilon_m 2^{1/r}2^{\frac{{\left(}2m-1{\right)}}{r}}{\right)}\Big)\\\notag \leq&\lim_{k\to \infty}\sum_{m=k}^{\infty}P\Big(\bigcup_{2^{2m-1}\leq n\leq 2^{2m}}\bigcup_{i=1}^{2^{2m}}{\left(}|\xi_i|\geq\epsilon_m 2^{{2m}/{r}}{\right)}\Big)\\\notag =&\lim_{k\to \infty}\sum_{m=k}^{\infty}P\Big(\bigcup_{i=1}^{2^{2m}}{\left(}|\xi_i|\geq\epsilon_m 2^{{2m}/{r}}{\right)}\Big)\\\notag \leq&\lim_{k\to \infty}\sum_{m=k}^{\infty}2^{2m}P\Big(|\xi_1|\geq\epsilon_m 2^{{2m}/{r}}\Big)=0.\end{aligned}$$ We note that the truncation will change neither the symmetry of the distribution of $\xi_1$ nor the order of the variance of ${\mathbf}T$. Now we come to the proof of the asymptotic normality of the statistics. Below we give the proof of the asymptotic normality of $\alpha{\mathbf}T_1+\beta{\mathbf}T_2$, where $\alpha^2+\beta^2\neq 0$; the asymptotic normality of ${\mathbf}T_1$ or ${\mathbf}T_2$ follows by setting $\beta=0$ or $\alpha=0$, respectively. Denote $\mu_1={{\rm E}}{\mathbf}T_1={{\rm E}}\sum_{i=1}^n\hat \varepsilon_i^4$, $\mu_2={{\rm E}}{\mathbf}T_2={{\rm E}}n^{-1}{\left(}\sum_{i=1}^n\hat \varepsilon_i^2{\right)}^2$ and $S=\sqrt{{\rm {Var}}{\left(}\alpha {\mathbf}T_1+\beta{\mathbf}T_2{\right)}}$. What follows is devoted to calculating the moments of ${\mathbf}T_0=\frac{\alpha {\mathbf}T_1+\beta {\mathbf}T_2-{\left(}\alpha \mu_1+\beta \mu_2{\right)}}{S}=\frac{\alpha {\left(}{\mathbf}T_1-\mu_1{\right)}+\beta {\left(}{\mathbf}T_2-\mu_2{\right)}}{S}$. Note that by Lemma \[lm3\], we can assume that $\xi_1$ is truncated at $\eta_n n^{1/8}$. 
Then we have for large enough $n$ and $l>4$, $$M_{2l}\leq \eta_n M_8{\sqrt n}^{2l/4-1}.$$ Let us take a look at the random variable $$\begin{aligned} &\alpha T_1+\beta T_2=\alpha \sum_{i=1}^n{\left(}\sum_{j=1}^n a_{ij}\xi_j{\right)}^4+(n^{-1})\beta {\left(}\sum_{i=1}^n{\left(}\sum_{j=1}^n a_{ij}\xi_j{\right)}^2{\right)}^2\\\notag =&\alpha \sum_{i,j_1,\cdots,j_4} a_{i,j_1}a_{i,j_2}a_{i,j_3}a_{i,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}+(n^{-1})\beta\sum_{i_1,i_2,j_1,\cdots,j_4} a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}\\\notag =&\alpha \sum_{i,j_1,\cdots,j_4} a_{i,j_1}a_{i,j_2}a_{i,j_3}a_{i,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}+(n^{-1})\beta\sum_{u_1,u_2,v_1,\cdots,v_4} a_{u_1,v_1}a_{u_1,v_2}a_{u_2,v_3}a_{u_2,v_4}\xi_{v_1}\xi_{v_2}\xi_{v_3}\xi_{v_4}.\end{aligned}$$ We next construct two types of graphs for the last two sums. For given integers $i,j_1,j_2,j_3,j_4\in [1,n]$, draw a graph as follows: draw two parallel lines, called the $I$-line and the $J$-line respectively; plot $i$ on the $I$-line and $j_1,j_2,j_3$ and $j_4$ on the $J$-line; finally, we draw four edges from $i$ to $j_t$, $t=1,2,3,4$, marked with $\textcircled{1}$. Each edge $(i,j_t)$ represents the random variable $a_{i,j_t}\xi_{j_t}$ and the graph $G_1(i,{\mathbf}j)$ represents $\prod_{\rho=1}^{4}a_{i,j_\rho}\xi_{j_\rho}$. For any given integer $k_1$, we draw $k_1$ such graphs between the $I$-line and the $J$-line denoted by $G_1(\tau)=G_1(i_\tau,{\mathbf}j_\tau)$, and write $G_{(1,k_1)}=\cup_{\tau} G_1(\tau)$. For given integers $u_1,u_2,v_1,v_2,v_3,v_4\in [1,n]$, draw a graph as follows: plot $u_1$ and $u_2$ on the $I$-line and $v_1,v_2,v_3$ and $v_4$ on the $J$-line; then we draw two edges from $u_1$ to $v_1$ and $v_2$ marked with $\textcircled{2}$, and two edges from $u_2$ to $v_3$ and $v_4$ marked with $\textcircled{2}$. 
Each edge $(u_l,v_t)$ represents the random variable $a_{u_l,v_t}\xi_{v_t}$ and the graph $G_2({\mathbf}u,{\mathbf}v)$ represents $a_{u_1,v_1}a_{u_1,v_2}a_{u_2,v_3}a_{u_2,v_4}\xi_{v_1}\xi_{v_2}\xi_{v_3}\xi_{v_4}$. For any given integer $k_2$, we draw $k_2$ such graphs between the $I$-line and the $J$-line denoted by $G_2(\psi)=G_2({\mathbf}u_\psi,{\mathbf}v_\psi)$, and write $G_{(2,k_2)}=\cup_{\psi} G_2(\psi)$, $G_{k}=G_{(1,k_1)}\cup G_{(2,k_2)}$. Then the $k$-th order moment of ${\mathbf}T_0$ is $$\begin{aligned} M_k'=&S^{-k}\sum_{k_1+k_2=k}{k\choose k_1}\alpha^{k_1}\beta^{k_2}\sum_{\substack{\{i_1,{\mathbf}j_1,\cdots,i_{k_1},{\mathbf}j_{k_1}\} \\ \{{\mathbf}u_1,{\mathbf}v_1,\cdots,{\mathbf}u_{k_2},{\mathbf}v_{k_2}\}}}\\ &n^{-k_2}{{\rm E}}\Big[\prod_{\tau=1}^{k_1}[G_1(i_\tau,{\mathbf}j_\tau)-{{\rm E}}(G_1(i_\tau,{\mathbf}j_\tau))]\prod_{\psi=1}^{k_2}[G_2({\mathbf}u_\psi,{\mathbf}v_\psi)-{{\rm E}}(G_2({\mathbf}u_\psi,{\mathbf}v_\psi))]\Big].\end{aligned}$$ We first consider a graph $G_k$ for the given set of integers $k_1,k_2$, $i_1, {\mathbf}j_1,\cdots,i_{k_1},{\mathbf}j_{k_1}$ and ${\mathbf}u_1,{\mathbf}v_1,\cdots,{\mathbf}u_{k_2},{\mathbf}v_{k_2}$. We have the following simple observations. Firstly, if $G_k$ contains a $j$ vertex of odd degree, then the term is zero because the odd-order moments of the random variable $\xi_j$ are 0. Secondly, if there is a subgraph $G_1(\tau)$ or $G_2(\psi)$ that does not have a $j$ vertex coinciding with any $j$ vertex of the other subgraphs, the term is also 0 because $G_1(\tau)$ or $G_2(\psi)$ is independent of the remaining subgraphs. Based on these two observations, we split the summation of non-zero terms in $M_k'$ into a sum of partial sums according to isomorphism classes (two graphs are called isomorphic if one can be obtained from the other by a permutation of $(1,2,\cdots,n)$, and all the graphs are classified into isomorphism classes; for convenience, we shall choose one graph from each isomorphism class as the canonical graph of that class). 
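As a numerical sanity check on the quadruple-sum expansions of ${\mathbf}T_1$ and ${\mathbf}T_2$ displayed above, one can compare them with the direct definitions on a small example. The following Python sketch is illustrative only; the matrix $A$ and error vector $\xi$ are arbitrary seeded examples, not taken from the model.

```python
import numpy as np
from itertools import product

# Small deterministic example: arbitrary seeded matrix A and error vector xi.
n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
xi = rng.standard_normal(n)

eps = A @ xi                          # \hat{eps}_i = sum_j a_{ij} xi_j
T1 = np.sum(eps ** 4)
T2 = np.sum(eps ** 2) ** 2 / n

# Expanded forms: sums over all index tuples, exactly as in the display.
T1_exp = sum(A[i, j1] * A[i, j2] * A[i, j3] * A[i, j4]
             * xi[j1] * xi[j2] * xi[j3] * xi[j4]
             for i in range(n)
             for j1, j2, j3, j4 in product(range(n), repeat=4))
T2_exp = sum(A[i1, j1] * A[i1, j2] * A[i2, j3] * A[i2, j4]
             * xi[j1] * xi[j2] * xi[j3] * xi[j4]
             for i1, i2 in product(range(n), repeat=2)
             for j1, j2, j3, j4 in product(range(n), repeat=4)) / n

assert np.isclose(T1, T1_exp) and np.isclose(T2, T2_exp)
```

Both comparisons agree to machine precision, as the expansion is a purely algebraic identity in the entries of $A$ and $\xi$.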
That is, we may write $$\begin{aligned} M_k'=&S^{-k}\sum_{k_1+k_2=k}{k\choose k_1}\alpha^{k_1}\beta^{k_2}n^{-k_2}\sum_{G_k'}M_{G_k'},\end{aligned}$$ where $$M_{G_k'}=\sum_{G_k\in G_k'}{{\rm E}}G_k.$$ Here $G_k'$ is a canonical graph and $\sum_{G_k\in G_k'}$ denotes the summation over all graphs $G_k$ isomorphic to $G_k'$. In what follows, we need the fact that the variances of ${\mathbf}T_1$ and ${\mathbf}T_2$ and their covariance are all of order $n$. This will be proved in Section \[var\]. Since all of the vertices in the non-zero canonical graphs have even degrees, every connected component of them is a circuit, and hence a two-edge connected graph. For a given isomorphism class with canonical graph $G_k'$, denote by $c_{G_k'}$ the number of connected components of the canonical graph $G_k'$. For every connected component $G_0$ that has $l$ non-coincident $J$-vertices with degrees $d_1,\cdots,d_l$, let $d'=\max\{d_1-8,\cdots,d_l-8,0\}$, denote $\mathcal{T}=(\underbrace{{\mathbf}A,\cdots,{\mathbf}A}_{\sum_{t=1}^l d_t})$ and define $G_0(\mathcal{T})=(G_0,\mathcal{T})$ as a Graph-Associated Multiple Matrices. By Lemma \[lm2\] we then conclude that the contribution of this canonical class is at most ${\left(}\prod_{t=1}^l M_{d_t}{\right)}sum(G(\mathcal{T}))=O(\eta_n^{d'}n\sqrt n^{d'/4})$. Noticing that $\eta_n \to 0$, if $c_{G_k'}$ is less than $k/2+k_2$, then the contribution of this canonical class is negligible because $S^k\asymp n^{k/2}$ and $M_{G_k'}$ in $M_k'$ has a factor of $n^{-k_2}$. On the other hand, one can see that $c_{G_k'}$ is at most $[k/2]+k_2$ for every ${G_k'}$ by the argument above and by noticing that every $G_2(\bullet)$ has two $i$ vertices. Therefore, $M_k'\to 0$ if $k$ is odd. Now we consider the limit of $M_k'$ when $k=2s$. 
We shall say that [*the given set of integers $i_1, {\mathbf}j_1,\cdots,i_{k_1},{\mathbf}j_{k_1}$ and ${\mathbf}u_1,{\mathbf}v_1,\cdots,{\mathbf}u_{k_2},{\mathbf}v_{k_2}$ (or equivalently, the graph $G_k$) satisfies the condition $c(s_1,s_2,s_3)$ if in the graph $G_k$ plotted by this set of integers there are $2s_1$ $G_1{{\left(}\bullet{\right)}}$ connected pairwise, $2s_2$ $G_2{{\left(}\bullet{\right)}}$ connected pairwise and $s_3$ $G_1{{\left(}\bullet{\right)}}$ connected with $s_3$ $G_2{{\left(}\bullet{\right)}}$, where $2s_1+s_3=k_1$, $2s_2+s_3=k_2$ and $s_1+s_2+s_3=s$, say $G_1{{\left(}2\tau-1{\right)}}$ connects $G_1{{\left(}2\tau{\right)}}$, $\tau=1,2,\cdots,s_1$, $G_2{{\left(}2\psi-1{\right)}}$ connects $G_2{{\left(}2\psi{\right)}}$, $\psi=1,2,\cdots,s_2$ and $G_1{{\left(}2s_1+\varphi{\right)}}$ connects $G_2{{\left(}2s_2+\varphi{\right)}}$, $\varphi=1,2,\cdots,s_3$, and there are no other connections between subgraphs.*]{} Then, for any $G_k$ satisfying $c(s_1,s_2,s_3)$, we have $$\begin{aligned} {{\rm E}}G_k=&\prod_{\tau=1}^{s_1}{{\rm E}}[(G_1{{\left(}2\tau-1{\right)}}-{{\rm E}}(G_1{{\left(}2\tau-1{\right)}}))(G_1{{\left(}2\tau{\right)}}-{{\rm E}}(G_1{\left(}{2\tau}{\right)}))]\times\\\notag &\prod_{\psi=1}^{s_2}{{\rm E}}[(G_2{{\left(}2\psi-1{\right)}}-{{\rm E}}(G_2{{\left(}2\psi-1{\right)}}))(G_2{{\left(}2\psi{\right)}}-{{\rm E}}(G_2{\left(}{2\psi}{\right)}))]\times\\\notag &\prod_{\varphi=1}^{s_3}{{\rm E}}[(G_1{{\left(}2s_1+\varphi{\right)}}-{{\rm E}}(G_1{{\left(}2s_1+\varphi{\right)}}))(G_2{{\left(}2s_2+\varphi{\right)}}-{{\rm E}}(G_2{\left(}{2s_2+\varphi}{\right)}))].\end{aligned}$$ Now, we compare $$\begin{aligned} &n^{-k_2}\sum_{G_k\in c(s_1,s_2,s_3)} {{\rm E}}G_k\\\notag =&n^{-k_2}\sum_{G_k\in c(s_1,s_2,s_3)}\prod_{\tau=1}^{s_1}{{\rm E}}[(G_1{{\left(}2\tau-1{\right)}}-{{\rm E}}(G_1{{\left(}2\tau-1{\right)}}))(G_1{{\left(}2\tau{\right)}}-{{\rm E}}(G_1{\left(}{2\tau}{\right)}))]\times\\\notag &\prod_{\psi=1}^{s_2}{{\rm 
E}}[(G_2{{\left(}2\psi-1{\right)}}-{{\rm E}}(G_2{{\left(}2\psi-1{\right)}}))(G_2{{\left(}2\psi{\right)}}-{{\rm E}}(G_2{\left(}{2\psi}{\right)}))]\times \\\notag &\prod_{\varphi=1}^{s_3}{{\rm E}}[(G_1{{\left(}2s_1+\varphi{\right)}}-{{\rm E}}(G_1{{\left(}2s_1+\varphi{\right)}}))(G_2{{\left(}2s_2+\varphi{\right)}}-{{\rm E}}(G_2{\left(}{2s_2+\varphi}{\right)}))], \end{aligned}$$ with $$\begin{aligned} &{\left(}{{\rm E}}{\left(}{\mathbf}T_1-\mu_1{\right)}^2{\right)}^{s_1}{\left(}{{\rm E}}{\left(}{\mathbf}T_2-\mu_2{\right)}^2{\right)}^{s_2}{\left(}{{\rm E}}{\left(}{\mathbf}T_1-\mu_1{\right)}{\left(}{\mathbf}T_2-\mu_2{\right)}{\right)}^{s_3}\\\notag =&n^{-k_2}\sum_{G_k}\prod_{\tau=1}^{s_1}{{\rm E}}[(G_1{{\left(}2\tau-1{\right)}}-{{\rm E}}(G_1{{\left(}2\tau-1{\right)}}))(G_1{{\left(}2\tau{\right)}}-{{\rm E}}(G_1{\left(}{2\tau}{\right)}))]\times\\\notag &\prod_{\psi=1}^{s_2}{{\rm E}}[(G_2{{\left(}2\psi-1{\right)}}-{{\rm E}}(G_2{{\left(}2\psi-1{\right)}}))(G_2{{\left(}2\psi{\right)}}-{{\rm E}}(G_2{\left(}{2\psi}{\right)}))]\times \\\notag &\prod_{\varphi=1}^{s_3}{{\rm E}}[(G_1{{\left(}2s_1+\varphi{\right)}}-{{\rm E}}(G_1{{\left(}2s_1+\varphi{\right)}}))(G_2{{\left(}2s_2+\varphi{\right)}}-{{\rm E}}(G_2{\left(}{2s_2+\varphi}{\right)}))],\end{aligned}$$ where $\sum_{G_k\in c(s_1,s_2,s_3)}$ stands for the summation running over all graphs $G_k$ satisfying the condition $c(s_1,s_2,s_3)$. If $G_k$ falls under one of the two observations mentioned before, then ${{\rm E}}G_k=0$, and such terms appear in neither expression; if $G_k$ satisfies the condition $c(s_1,s_2,s_3)$, then both expressions contain ${{\rm E}}G_k$. Therefore, the second expression contains additional terms, namely those $G_k$ with more connections among the subgraphs than required by the condition $c(s_1,s_2,s_3)$. 
Therefore, by Lemma \[lm2\], $${\left(}{{\rm E}}{\left(}{\mathbf}T_1-\mu_1{\right)}^2{\right)}^{s_1}{\left(}{{\rm E}}{\left(}{\mathbf}T_2-\mu_2{\right)}^2{\right)}^{s_2}{\left(}{{\rm E}}{\left(}{\mathbf}T_1-\mu_1{\right)}{\left(}{\mathbf}T_2-\mu_2{\right)}{\right)}^{s_3}=n^{-k_2}\sum_{G_k\in c(s_1,s_2,s_3)} {{\rm E}}G_k+o(S^k). \label{map1}$$ If $G_k\in G_k'$ with $c_{G_k'}=s+k_2$, for any nonnegative integers $s_1,s_2,s_3$ satisfying $k_1=2s_1+s_3$, $k_2=2s_2+s_3$ and $s_1+s_2+s_3=s$, we have ${k_1\choose s_3}{k_2 \choose s_3}(2s_1-1)!!(2s_2-1)!!s_3!$ ways to pair the subgraphs satisfying the condition $c(s_1,s_2,s_3)$. By (\[map1\]), we then have $$\begin{aligned} &&\sum_{c_{G_k'}=s+k_2}n^{-k_2}{{\rm E}}G_k+o(S^k)\\ &=&\sum_{s_1+s_2+s_3=s\atop 2s_1+s_3=k_1, 2s_2+s_3=k_2}{k_1\choose s_3}{k_2 \choose s_3}(2s_1-1)!!(2s_2-1)!!s_3! (Var({\mathbf}T_1))^{s_1}(Var({\mathbf}T_2))^{s_2}(Cov({\mathbf}T_1,{\mathbf}T_2))^{s_3}.\end{aligned}$$ It follows that $$\begin{aligned} M_k'=&S^{-k}\sum_{k_1+k_2=k}{k\choose k_1}\alpha^{k_1}\beta^{k_2}n^{-k_2}\sum_{c_{G_k'}=s+k_2}{{\rm E}}G_k+o(1)\\ =&\Big(S^{-2s}\sum_{k_1=0}^{2s}\sum_{s_3=0}^{\min\{k_1,k_2\}}{2s\choose k_1}{k_1 \choose s_3}{k_2 \choose s_3}{\left(}2s_1-1{\right)}!!{\left(}2s_2-1{\right)}!!s_3!\\ &{\left(}\alpha^2Var({\mathbf}T_1){\right)}^{s_1}{\left(}\beta^2Var({\mathbf}T_2){\right)}^{s_2}{\left(}\alpha\beta Cov({\mathbf}T_1,{\mathbf}T_2){\right)}^{s_3}\Big)+o(1)\\ =&\Big(S^{-2s}\sum_{s_1+s_2+s_3=s}{2s\choose 2s_1+s_3}{2s_1+s_3 \choose s_3}{2s_2+s_3 \choose s_3}{\left(}2s_1-1{\right)}!!{\left(}2s_2-1{\right)}!!s_3!\\ &{\left(}\alpha^2Var({\mathbf}T_1){\right)}^{s_1}{\left(}\beta^2Var({\mathbf}T_2){\right)}^{s_2}{\left(}\alpha\beta Cov({\mathbf}T_1,{\mathbf}T_2){\right)}^{s_3}\Big)+o(1)\\ =&\Big(S^{-2s}\sum_{s_1+s_2+s_3=s}\frac{(2s)!(2s_1+s_3)!(2s_2+s_3)!}{{\left(}2s_1+s_3{\right)}!(2s_2+s_3)!s_3!(2s_1)!s_3!(2s_2)!}{\left(}2s_1-1{\right)}!!{\left(}2s_2-1{\right)}!!s_3!\\ 
&{\left(}\alpha^2Var({\mathbf}T_1){\right)}^{s_1}{\left(}\beta^2Var({\mathbf}T_2){\right)}^{s_2}{\left(}\alpha\beta Cov({\mathbf}T_1,{\mathbf}T_2){\right)}^{s_3}\Big)+o(1)\\ =&\Big(S^{-2s}\sum_{s_1+s_2+s_3=s}(2s-1)!!\frac{s!}{s_1!s_2!s_3!}\\ &{\left(}\alpha^2Var({\mathbf}T_1){\right)}^{s_1}{\left(}\beta^2Var({\mathbf}T_2){\right)}^{s_2}{\left(}2\alpha\beta Cov({\mathbf}T_1,{\mathbf}T_2){\right)}^{s_3}\Big)+o(1),\end{aligned}$$ which implies that $$M'_{2s}\to (2s-1)!!.$$ Combining the arguments above with the moment convergence theorem, we conclude that $$\frac{{\mathbf}T_1-{{\rm E}}{\mathbf}T_1}{\sqrt{{\rm Var}{\mathbf}T_1}}\stackrel{d}{\rightarrow} {\rm N}{\left(}0,1{\right)},\ \frac{{\mathbf}T_2-{{\rm E}}{\mathbf}T_2}{\sqrt{{\rm Var}{\mathbf}T_2}}\stackrel{d}{\rightarrow} {\rm N}{\left(}0,1{\right)}, \ \frac{ {\left(}\alpha {\mathbf}T_1+\beta {\mathbf}T_2{\right)}-{{\rm E}}{\left(}\alpha {\mathbf}T_1+\beta {\mathbf}T_2{\right)}}{\sqrt{{\rm Var}{\left(}\alpha {\mathbf}T_1+\beta {\mathbf}T_2{\right)}}}\stackrel{d}{\rightarrow} {\rm N}{\left(}0,1{\right)},$$ where $\alpha^2+\beta^2\neq 0.$ Let $$\Sigma=\left( \begin{array}{cc} {\rm {Var}}({\mathbf}T_1) & \rm {Cov}({\mathbf}T_1,{\mathbf}T_2) \\ \rm {Cov}({\mathbf}T_1,{\mathbf}T_2) & \rm {Var}({\mathbf}T_2) \\ \end{array} \right) .$$ We conclude that $\Sigma^{-1/2}{\left(}{\mathbf}T_1-{{\rm E}}{\mathbf}T_1,{\mathbf}T_2-{{\rm E}}{\mathbf}T_2{\right)}'$ is asymptotically a two-dimensional Gaussian vector. The expectation {#exp} --------------- In the following let ${\mathbf}B={\mathbf}A{\mathbf}A'$. 
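The binomial–double-factorial simplification of $M_k'$ carried out at the end of the preceding subsection rests on a termwise identity that can be checked directly. A small sketch follows; the helper `dfact` is ours, written only for this check.

```python
from math import comb, factorial

def dfact(k):
    """Double factorial k!! for odd k, with (-1)!! = 1."""
    r = 1
    while k > 1:
        r *= k
        k -= 2
    return r

# Termwise identity behind the simplification of M'_k for k = 2s:
# C(2s, 2s1+s3) C(2s1+s3, s3) C(2s2+s3, s3) (2s1-1)!! (2s2-1)!! s3!
#   = (2s-1)!! * s!/(s1! s2! s3!) * 2^{s3}.
for s in range(1, 7):
    for s1 in range(s + 1):
        for s2 in range(s + 1 - s1):
            s3 = s - s1 - s2
            k1, k2 = 2 * s1 + s3, 2 * s2 + s3
            lhs = (comb(2 * s, k1) * comb(k1, s3) * comb(k2, s3)
                   * dfact(2 * s1 - 1) * dfact(2 * s2 - 1) * factorial(s3))
            rhs = (dfact(2 * s - 1) * 2 ** s3 * factorial(s)
                   // (factorial(s1) * factorial(s2) * factorial(s3)))
            assert lhs == rhs
```

Summing the left-hand side against $(\alpha^2{\rm Var}\,{\mathbf}T_1)^{s_1}(\beta^2{\rm Var}\,{\mathbf}T_2)^{s_2}(\alpha\beta\,{\rm Cov})^{s_3}$ then gives $(2s-1)!!\,S^{2s}$ up to $o(1)$ terms, which is how $M'_{2s}\to(2s-1)!!$ follows.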
Recall that $${\mathbf}{T_1}=\sum_{i=1}^n\widehat{\varepsilon_i}^4=\sum_{i=1}^n{\left(}\sum_{j=1}^n a_{i,j}\xi_j{\right)}^4=\sum_{i=1}^n\sum_{j_1,j_2,j_3,j_4}a_{i,j_1}a_{i,j_2}a_{i,j_3}a_{i,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4},$$ $${\mathbf}T_2=n^{-1}{\left(}\sum_{i=1}^n{\left(}\sum_{j=1}^n a_{i,j}\xi_j{\right)}^2{\right)}^2 =n^{-1}\sum_{i_1,i_2}\sum_{j_1,j_2,j_3,j_4}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}.$$ Since all odd moments of $\xi_1,\cdots,\xi_n$ are 0, we know that ${{\rm E}}{\mathbf}T_1$ and ${{\rm E}}{\mathbf}T_2$ are only affected by terms whose multiplicities of distinct values in the sequence $(j_1,\cdots,j_4)$ are all even. We need to evaluate the mixed moment ${{\rm E}}{\left(}{\mathbf}T_1^{\gamma}{\mathbf}T_2^{\omega}{\right)}$. To simplify the notation, particularly in Section \[var\], we introduce the following notation $$\begin{aligned} &\Omega_{\{\omega_1,\omega_2,\cdots,\omega_s\}}^{{\left(}\gamma_1,\gamma_2,\cdots,\gamma_t{\right)}}[\underbrace{{\left(}\phi_{1,1},\cdots,\phi_{1,s}{\right)},{\left(}\phi_{2,1},\cdots,\phi_{2,s}{\right)}, \cdots,{\left(}\phi_{t,1},\cdots,\phi_{t,s}{\right)}}_{t \ groups}]_0 \\ =&\sum_{i_1,\cdots,i_{t},j_1\neq\cdots\neq j_{s}}\prod_{\tau=1,\cdots, t}\prod_{\rho=1,\cdots, s} a_{i_{\tau},j_{\rho}}^{\phi_{\tau,\rho}},\end{aligned}$$ where $i_1,\cdots,i_{t}$ and $j_1,\cdots,j_{s}$ run over $1,\cdots,n$ and are subject to the restrictions that $j_1,\cdots,j_s$ are distinct; $\sum_{l=1}^t \gamma_l=\sum_{l=1}^s \omega_l=\theta,$ and for any $k=1,\cdots, s$, $\sum_{l=1}^t \phi_{l,k}=\omega_k$. Intuitively, $t$ is the number of distinct $i$-indices and $s$ that of distinct $j$’s; $\gamma_\tau$ is the multiplicity of the index $i_\tau$ and $\omega_\rho=\sum_{l=1}^t\phi_{l,\rho}$ that of $j_\rho$; $\phi_{\tau,\rho}$ is the multiplicity of the factor $a_{i_\tau,j_\rho}$; and $\theta=4(\gamma+\omega)$. 
Define $$\begin{aligned} &\Omega_{\{\omega_1,\omega_2,\cdots,\omega_s\}}^{{\left(}\gamma_1,\gamma_2,\cdots,\gamma_t{\right)}}[\underbrace{{\left(}\phi_{1,1},\cdots,\phi_{1,s}{\right)},{\left(}\phi_{2,1},\cdots,\phi_{2,s}{\right)}, \cdots,{\left(}\phi_{t,1},\cdots,\phi_{t,s}{\right)}}_{t \ groups}] \\ =&\sum_{i_1,\cdots,i_{t},j_1,\cdots, j_{s}}\prod_{\tau=1,\cdots, t}\prod_{\rho=1,\cdots, s} a_{i_{\tau},j_{\rho}}^{\phi_{\tau,\rho}}.\end{aligned}$$ The definition above is similar to that of $$\Omega_{\{\omega_1,\omega_2,\cdots,\omega_s\}}^{{\left(}\gamma_1,\gamma_2,\cdots,\gamma_t{\right)}}[\underbrace{{\left(}\phi_{1,1},\cdots,\phi_{1,s}{\right)},{\left(}\phi_{2,1},\cdots,\phi_{2,s}{\right)}, \cdots,{\left(}\phi_{t,1},\cdots,\phi_{t,s}{\right)}}_{t \ groups}]_0$$ without the restriction that the indices $j_1,\cdots,j_s$ are distinct from each other. To help understand this notation, we give some examples. $$\begin{aligned} \Omega_{\{2,2,2,2\}}^{(4,4)}[(2,2,0,0),(0,0,2,2)]=\sum_{i_1,i_2,j_1,\cdots,j_4}a_{i_1,j_1}^2a_{i_1,j_2}^2a_{i_2,j_3}^2a_{i_2,j_4}^2,\end{aligned}$$ $$\begin{aligned} \Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]=\sum_{i_1,i_2,j_1,\cdots,j_4}a_{i_1,j_1}^2a_{i_1,j_2}a_{i_1,j_3}a_{i_2,j_2}a_{i_2,j_3}a_{i_2,j_4}^2,\end{aligned}$$ $$\begin{aligned} \Omega_{\{2,2,2,2\}}^{(4,4)}[(2,2,0,0),(0,0,2,2)]_0=\sum_{i_1,i_2,j_1\neq\cdots\neq j_4}a_{i_1,j_1}^2a_{i_1,j_2}^2a_{i_2,j_3}^2a_{i_2,j_4}^2,\end{aligned}$$ $$\begin{aligned} \Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]_0=\sum_{i_1,i_2,j_1\neq\cdots\neq j_4}a_{i_1,j_1}^2a_{i_1,j_2}a_{i_1,j_3}a_{i_2,j_2}a_{i_2,j_3}a_{i_2,j_4}^2.\end{aligned}$$ We further use $M_k$ to denote the $k$-th order moment of the error random variable. We also use ${\mathbf}C_{n}^k$ to denote the binomial coefficient $n \choose k$. 
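The bracket notation can also be made concrete by brute force on a small random matrix. In the sketch below the evaluator `omega` is a hypothetical helper written only for this illustration; it checks the first example and shows the effect of the distinctness restriction.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = A @ A.T

def omega(phi, distinct=False):
    """Brute-force Omega[...]: phi[tau][rho] is the exponent of a_{i_tau, j_rho}.
    With distinct=True the j-indices must be pairwise different (the
    subscript-0 version of the notation)."""
    t, s = len(phi), len(phi[0])
    total = 0.0
    for ii in product(range(n), repeat=t):
        for jj in product(range(n), repeat=s):
            if distinct and len(set(jj)) < s:
                continue
            term = 1.0
            for tau in range(t):
                for rho in range(s):
                    if phi[tau][rho]:
                        term *= A[ii[tau], jj[rho]] ** phi[tau][rho]
            total += term
    return total

# First example: the unrestricted sum factorizes as (sum_i b_{ii}^2)^2,
# i.e. tr(B o B)^2, since i_1 pairs with (j_1, j_2) and i_2 with (j_3, j_4).
phi = [(2, 2, 0, 0), (0, 0, 2, 2)]
assert np.isclose(omega(phi), np.trace(B * B) ** 2)
# All terms here are products of squares, so dropping the coinciding-index
# terms strictly decreases the sum: the subscript-0 version is smaller.
assert omega(phi, distinct=True) < omega(phi)
```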
We then obtain $$\begin{aligned} \label{e1} &{{\rm E}}{\mathbf}T_1={{\rm E}}\sum_{i=1}^n\sum_{j_1,j_2,j_3,j_4}a_{i,j_1}a_{i,j_2}a_{i,j_3}a_{i,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}\\\notag =&M_4\Omega_{\{4\}}^{(4)}+M_2^2{\Omega_{\{2,2\}}^{(4)}}_0=M_4\Omega_{\{4\}}^{(4)}+\frac{{\mathbf}C_4^2}{2!}{\Omega_{\{2,2\}}^{(4)}}[{\left(}2,2{\right)}]_0\\\notag =&M_4\Omega_{\{4\}}^{(4)}+\frac{{\mathbf}C_4^2}{2!}{\left(}{\Omega_{\{2,2\}}^{(4)}}[{\left(}2,2{\right)}]-\Omega_{\{4\}}^{(4)}{\right)}\\\notag =&\frac{{\mathbf}C_4^2}{2!}{\Omega_{\{2,2\}}^{(4)}}[{\left(}2,2{\right)}]+\nu_4 \Omega_{\{4\}}^{(4)} =3\sum_i{\left(}\sum_{j}a_{i,j}^2{\right)}^2+\nu_4\sum_{ij}a_{ij}^4\\\notag =&3\sum_ib_{i,i}^2+\nu_4\sum_{ij}a_{ij}^4=3{{\rm tr}}{\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}+\nu_4{{\rm tr}}({\mathbf}A{\circ}{\mathbf}A)'({\mathbf}A{\circ}{\mathbf}A),\end{aligned}$$ where $\nu_4=M_4-3$ and $$\begin{aligned} \label{e2} &{{\rm E}}{\mathbf}T_2=n^{-1}{{\rm E}}\sum_{i_1,i_2}\sum_{j_1,j_2,j_3,j_4}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}\xi_{j_1}\xi_{j_2}\xi_{j_3}\xi_{j_4}\\\notag =&n^{-1}{\left(}M_4\Omega_{\{4\}}^{(2,2)}+M_2^2{\Omega_{\{2,2\}}^{(2,2)}}_0{\right)}\\\notag =&n^{-1}{\left(}M_4\Omega_{\{4\}}^{(2,2)}+{\left(}{\Omega_{\{2,2\}}^{(2,2)}}[{\left(}2,0{\right)},{\left(}0,2{\right)}]_0+2\Omega_{\{2,2\}}^{(2,2)}[{\left(}1,1{\right)},{\left(}1,1{\right)}]_0{\right)}{\right)}\\\notag =&n^{-1}{\left(}M_4\Omega_{\{4\}}^{(2,2)}+{\left(}{\Omega_{\{2,2\}}^{(2,2)}}[{\left(}2,0{\right)},{\left(}0,2{\right)}]+2\Omega_{\{2,2\}}^{(2,2)}[{\left(}1,1{\right)},{\left(}1,1{\right)}]{\right)}-3\Omega_{\{4\}}^{(2,2)}{\right)}\\\notag =&n^{-1}{\left(}\sum_{i_1,i_2,j_1,j_2}a_{i_1,j_1}^2a_{i_2,j_2}^2+2\sum_{i_1,i_2,j_1,j_2}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_1}a_{i_2,j_2} +\nu_4\sum_{i_1,i_2,j}a_{i_1j}^2a_{i_2j}^2{\right)}\\\notag 
=&n^{-1}{\left(}{\left(}\sum_{i,j}a_{i,j}^2{\right)}^2+2\sum_{i_1,i_2}{\left(}\sum_{j}a_{i_1,j}a_{i_2,j}{\right)}^2+\nu_4\sum_{i_1,i_2,j}a_{i_1j}^2a_{i_2j}^2{\right)}\\\notag =&n^{-1}{\left(}{\left(}\sum_{i,j}a_{i,j}^2{\right)}^2+2\sum_{i_1,i_2}b_{i_1,i_2}^2+\nu_4\sum_{j=1}^nb_{jj}^2{\right)}\\ \notag =&n^{-1}{\left(}{\left(}{{\rm tr}}{\mathbf}B{\right)}^2+2{{\rm tr}}{\mathbf}B^2+\nu_4{{\rm tr}}({\mathbf}B{\circ}{\mathbf}B){\right)}.\end{aligned}$$ The variances and covariance {#var} ---------------------------- We are now in a position to calculate the variances of ${\mathbf}T_1$, ${\mathbf}T_2$ and their covariance. First, we have $$\begin{aligned} \label{t10} &{\rm Var}( {\mathbf}T_1)={{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^4-{{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^4{\right)}{\right)}^2\\\notag =&\sum_{i_1,i_2,j_1,\cdots,j_8}[{{\rm E}}G(i_1,{\mathbf}j_1)G(i_2,{\mathbf}j_2)-{{\rm E}}G(i_1,{\mathbf}j_1){{\rm E}}G(i_2,{\mathbf}j_2)]\\\notag =&\Bigg(\Omega_{\{8\}}^{(4,4)}+\Omega_{\{2,6\}_0}^{(4,4)}+\Omega_{\{4,4\}_0}^{(4,4)}+\Omega_{\{2,2,4\}_0}^{(4,4)}+\Omega_{\{2,2,2,2\}_0}^{(4,4)}\Bigg),\\\notag\end{aligned}$$ where the first term comes from the graphs in which all 8 $J$-vertices coincide; the second term comes from the graphs in which 6 $J$-vertices are coincident and the other two are coincident, and so on. 
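Before turning to the variances, the expectation formulas (\[e1\]) and (\[e2\]) can be verified exhaustively for Rademacher errors ($\xi_j=\pm 1$ with equal probability, so $M_2=M_4=1$ and $\nu_4=-2$). The sketch below takes $A$ symmetric, as is the case for the residual-maker matrix in the model, and is illustrative only.

```python
import numpy as np
from itertools import product

# Exhaustive check of E T_1 and E T_2 with Rademacher errors (xi_j = ±1):
# M_2 = 1, M_4 = 1, nu_4 = M_4 - 3 = -2, and all odd moments vanish.
# A is taken symmetric, as for the residual-maker matrix in the model.
n = 5
rng = np.random.default_rng(2)
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
B = A @ A.T
nu4 = -2.0

ET1 = ET2 = 0.0
for signs in product((-1.0, 1.0), repeat=n):   # all 2^n error vectors
    e = A @ np.array(signs)
    ET1 += np.sum(e ** 4)
    ET2 += np.sum(e ** 2) ** 2 / n
ET1 /= 2 ** n
ET2 /= 2 ** n

# E T_1 = 3 tr(B o B) + nu_4 * sum_{ij} a_{ij}^4
assert np.isclose(ET1, 3 * np.sum(np.diag(B) ** 2) + nu4 * np.sum(A ** 4))
# E T_2 = n^{-1} ( (tr B)^2 + 2 tr B^2 + nu_4 tr(B o B) )
assert np.isclose(ET2, (np.trace(B) ** 2 + 2 * np.trace(B @ B)
                        + nu4 * np.sum(np.diag(B) ** 2)) / n)
```

The agreement is exact (up to floating point), since both sides are finite sums over the $2^n$ equally likely error vectors.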
Because $G(i_1,{\mathbf}j_1)$ and $G(i_2,{\mathbf}j_2)$ have to be connected to each other, we have $$\begin{aligned} \label{t11} &\Omega_{\{2,2,2,2\}_0}^{(4,4)}\\\notag=&\frac{{\mathbf}C_4^2{\mathbf}C_4^2{\mathbf}C_2^1{\mathbf}C_2^1}{2!}\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]_0+{{\mathbf}C_4^1{\mathbf}C_3^1{\mathbf}C_2^1}\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]_0\\\notag =&72\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]-4\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)]_0-\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]_0\\\notag &-\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]_0-2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]_0-\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &-2\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0-2\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]_0-\Omega_{\{8\}}^{(4,4)}\Big)\\\notag &+24\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]-6\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]_0-3\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &-4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0-\Omega_{\{8\}}^{(4,4)}\Big).\\\notag =&72\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]-4\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)]-\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\notag &-\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]_0+\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &+4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0+4\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]_0+5\Omega_{\{8\}}^{(4,4)}\Big)\\\notag &+24\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]-6\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+3\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &+8\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0+5\Omega_{\{8\}}^{(4,4)}\Big).\\\notag =&72\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]-4\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)]-\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\notag &-\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]+\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]\\\notag 
&+4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]+4\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]-6\Omega_{\{8\}}^{(4,4)}\Big)\\\notag &+24\Big(\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]-6\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+3\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]\\\notag &+8\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]-6\Omega_{\{8\}}^{(4,4)}\Big).\end{aligned}$$ Likewise we have $$\begin{aligned} \label{t12} &{\Omega_{\{2,2,4\}}^{(4,4)}}_0\\\notag =&{{\mathbf}C_2^1{\mathbf}C_4^3{\mathbf}C_4^1{\mathbf}C_3^1}M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,2,1),(1,0,3)]_0\\\notag &+\frac{{\mathbf}C_4^2{\mathbf}C_4^2{\mathbf}C_2^1{\mathbf}C_2^1}{2!}M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]_0+{{\mathbf}C_4^2{\mathbf}C_4^2}(M_4-1)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]_0\\\notag =&96M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,2,1),(1,0,3)]_0\\\notag &+72M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]_0+36(M_4-1)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]_0,\\\ \notag =&96M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,2,1),(1,0,3)]\\\notag &+72M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+36(M_4-1)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\ \notag &-96M_4\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]_0-(108 M_4-36)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag &-240M_4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0-(168M_4-72)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]_0-(204M_4-36)\Omega_{\{8\}}^{(4,4)}\\\notag =&96M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,2,1),(1,0,3)]\\\notag &+72M_4\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+36(M_4-1)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\ \notag &-96M_4\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]-(108 M_4-36)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]\\\notag &-240M_4\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]-(168M_4-72)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]+(408M_4-72)\Omega_{\{8\}}^{(4,4)}\end{aligned}$$ $$\begin{aligned} \label{t13} \Omega_{\{4,4\}_0}^{(4,4)}=&{{\mathbf}C_2^1{\mathbf}C_4^1{\mathbf}C_4^3}M_4^2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]_0 
+\frac{{\mathbf}C_4^2{\mathbf}C_4^2}{2!}(M_4^2-1)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]_0\\\notag =&16M_4^2\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]+18(M_4^2-1)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]\\\notag &-(34M_4^2-18)\Omega_{\{8\}}^{(4,4)},\end{aligned}$$ $$\begin{aligned} \label{t14} \Omega_{\{2,6\}_0}^{(4,4)}=&{\mathbf}C_2^1{\mathbf}C_4^2(M_6-M_4)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]_0+{\mathbf}C_4^1{\mathbf}C_4^1M_6\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]_0\\\notag =&12(M_6-M_4)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]+16M_6\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]\\\notag & -(28 M_6-12 M_4)\Omega_{\{8\}}^{(4,4)}.\end{aligned}$$ and $$\begin{aligned} \label{t15} \Omega_{\{8\}_0}^{(4,4)}=&(M_8-M_4^2)\Omega_{\{8\}}^{(4,4)}[(4),(4)].\end{aligned}$$ Combining (\[t10\]), (\[t11\]), (\[t12\]), (\[t13\]), (\[t14\]) and (\[t15\]), we obtain $$\begin{aligned} \label{vt1} &{\rm Var}( {\mathbf}T_1)=72\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]+24\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]\\\notag &+96(M_4-3)\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)] +36(M_4-3)\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]\\\notag &+72(M_4-3)\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]+16(M_4^2-6M_4+9)\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]\\\notag &+18(M_4^2-6M_4+9)\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]+16(M_6-15M_4+30)\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]\\\notag &+12(M_6-15M_4+30)\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)] +(M_8-28M_6-35M_4^2+420M_4-630)\Omega_{\{8\}}^{(4,4)}[(4),(4)],\end{aligned}$$ where $$\begin{aligned} &\Omega_{\{2,2,2,2\}}^{(4,4)}[(2,1,1,0),(0,1,1,2)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_2}a_{i_1,j_3}a_{i_2,j_2}a_{i_2,j_3}a_{i_2,j_4}^2\\\notag &={{\rm Diag}}'({\mathbf}B) {\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}{{\rm Diag}}({\mathbf}B),\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,2,2,2\}}^{(4,4)}[(1,1,1,1),(1,1,1,1)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}a_{i_1,j_2}a_{i_1,j_3}a_{i_1,j_4}a_{i_2,j_1}a_{i_2,j_2}a_{i_2,j_3}a_{i_2,j_4}\\\notag 
&={{\rm tr}}{\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}^2,\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,2,4\}}^{(4,4)}[(2,1,1),(0,1,3)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_2}a_{i_1,j_3}a_{i_2,j_2}a_{i_2,j_3}^3={{\rm tr}}{\mathbf}B {\mathbf}{D_B} {\mathbf}A {\mathbf}A'^{{\circ}3},\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,2,4\}}^{(4,4)}[(2,0,2),(0,2,2)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_3}^2a_{i_2,j_2}^2a_{i_2,j_3}^2\\\notag &={{\rm Diag}}'({\mathbf}B) {\left(}{\mathbf}A{\circ}{\mathbf}A{\right)}{\left(}{\mathbf}A{\circ}{\mathbf}A{\right)}'{{\rm Diag}}({\mathbf}B) ,\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,2,4\}}^{(4,4)}[(1,1,2),(1,1,2)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}a_{i_1,j_2}a_{i_1,j_3}^2a_{i_2,j_1}a_{i_2,j_2}a_{i_2,j_3}^2\\\notag &={{\rm tr}}{\left(}{\left(}{\mathbf}B{\circ}{\mathbf}B{\right)}{\left(}{\mathbf}A{\circ}{\mathbf}A{\right)}{\left(}{\mathbf}A{\circ}{\mathbf}A{\right)}'{\right)},\end{aligned}$$ $$\begin{aligned} &\Omega_{\{4,4\}}^{(4,4)}[(3,1),(1,3)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^3a_{i_1,j_2}a_{i_2,j_1}a_{i_2,j_2}^3={{\rm tr}}{\left(}{\left(}{\mathbf}A^{{\circ}3}{\mathbf}A'{\right)}{\left(}{\mathbf}A^{{\circ}3}{\mathbf}A'{\right)}'{\right)},\end{aligned}$$ $$\begin{aligned} &\Omega_{\{4,4\}}^{(4,4)}[(2,2),(2,2)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_2}^2a_{i_2,j_1}^2a_{i_2,j_2}^2={{\rm tr}}{\left(}{\left(}{\mathbf}A {\circ}{\mathbf}A{\right)}{\left(}{\mathbf}A {\circ}{\mathbf}A{\right)}'{\right)}^2 ,\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,6\}}^{(4,4)}[(1,3),(1,3)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}a_{i_1,j_2}^3a_{i_2,j_1}a_{i_2,j_2}^3={{\rm tr}}{\left(}{\mathbf}B {\mathbf}A^{{\circ}3} {\mathbf}A'^{{\circ}3}{\right)},\end{aligned}$$ $$\begin{aligned} &\Omega_{\{2,6\}}^{(4,4)}[(2,2),(0,4)]=\sum_{i_1,\cdots,i_2,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_1,j_2}^2a_{i_2,j_2}^4={{\rm tr}}{\left(}{\left(}{\mathbf}A' 
{\mathbf}D_{{\mathbf}B}{\mathbf}A {\right)}{\circ}{\left(}{\mathbf}A'^{{\circ}2}{\mathbf}A^{{\circ}2}{\right)}{\right)},\end{aligned}$$ and $$\begin{aligned} &\Omega_{\{8\}}^{(4,4)}[(4),(4)]=\sum_{i_1,i_2,j_1}a_{i_1,j_1}^4a_{i_2,j_1}^4={\mathbf}1'{\mathbf}A^{{\circ}4}{\mathbf}A'^{{\circ}4}{\mathbf}1.\end{aligned}$$ Using the same procedure, we have $$\begin{aligned} &{\rm Var}({\mathbf}T_2)=n^{-2}{\left(}{{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^2{\right)}^4-{{\rm E}}^2{\left(}\sum_{i}\widehat{\varepsilon_i}^2{\right)}^2{\right)}\\\notag =&n^{-2}\sum_{i_1,\cdots,i_4,j_1,\cdots,j_8}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}a_{i_3,j_5}a_{i_3,j_6}a_{i_4,j_7}a_{i_4,j_8} {\left(}{{\rm E}}\prod_{t=1}^8\xi_{j_t}-{{\rm E}}\prod_{t=1}^4\xi_{j_t}{{\rm E}}\prod_{t=5}^8\xi_{j_t}{\right)}\\\notag =&n^{-2}(P_{2,1}+P_{2,2})+O(1),\end{aligned}$$ where $$\begin{aligned} P_{2,1}= {\mathbf}C_2^1{\mathbf}C_2^1{\mathbf}C_2^1\sum_{i_1,\cdots,i_4,j_1,\cdots, j_4}a^2_{i_1,j_1}a_{i_2,j_2}a_{i_3,j_2}a_{i_2,j_3}a_{i_3,j_3}a_{i_4,j_4}^2 =8{\left(}{{\rm tr}}{\mathbf}B{\right)}^2{{\rm tr}}{\mathbf}B^2,\end{aligned}$$ $$\begin{aligned} P_{2,2}=\nu_4{\mathbf}C_2^1{\mathbf}C_2^1\sum_{i_1,\cdots,i_4,j_1,j_2, j_3}a^2_{i_1,j_1}a^2_{i_2,j_2}a^2_{i_3,j_2}a_{i_4,j_3}^2 =4\nu_4{{\rm tr}}({\mathbf}B'{\circ}{\mathbf}B'){\left(}{{\rm tr}}{\mathbf}B{\right)}^2.\end{aligned}$$ Similarly, we have $$\begin{aligned} &{\rm Cov}({\mathbf}T_1,{\mathbf}T_2)=n^{-1}{\left(}{{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^2{\right)}^2\sum_{i}\widehat{\varepsilon_i}^4-{{\rm E}}{\left(}\sum_{i}\widehat{\varepsilon_i}^2{\right)}^2{{\rm E}}\sum_{i}\widehat{\varepsilon_i}^4{\right)}\\\notag =&n^{-1}\sum_{i_1,\cdots,i_3,j_1,\cdots,j_8}a_{i_1,j_1}a_{i_1,j_2}a_{i_2,j_3}a_{i_2,j_4}a_{i_3,j_5}a_{i_3,j_6}a_{i_3,j_7}a_{i_3,j_8} {\left(}{{\rm E}}\prod_{t=1}^8\xi_{j_t}-{{\rm E}}\prod_{t=1}^4\xi_{j_t}{{\rm E}}\prod_{t=5}^8\xi_{j_t}{\right)}\\ =&n^{-1}(P_{3,1}+P_{3,2}+P_{3,3}+P_{3,4})+O(1),\end{aligned}$$ where 
$$\begin{aligned} P_{3,1}={\mathbf}C_4^2{\mathbf}C_2^1{\mathbf}C_2^1\sum_{i_1,\cdots,i_3,j_1,\cdots, j_4}a_{i_1,j_1}^2a_{i_2,j_2}a_{i_2,j_3}a_{i_3,j_2}a_{i_3,j_3}a_{i_3,j_4}^2 =24{{\rm tr}}{\left(}{\mathbf}B^2{\circ}{\mathbf}B{\right)}{{\rm tr}}{\mathbf}B,\end{aligned}$$ $$\begin{aligned} P_{3,2}=\nu_4{\mathbf}C_4^1{\mathbf}C_2^1{\mathbf}C_2^1\sum_{i_1,\cdots,i_3,j_1,\cdots, j_3}a_{i_1,j_1}^2a_{i_2,j_2}a_{i_3,j_2}a_{i_2,j_3}a_{i_3,j_3}^3 =16\nu_4{{\rm tr}}({\mathbf}B{\mathbf}A {\mathbf}A'^{{\circ}3}){{\rm tr}}{\mathbf}B,\end{aligned}$$ $$\begin{aligned} P_{3,3}=\nu_4{\mathbf}C_4^2{\mathbf}C_2^1\sum_{i_1,\cdots,i_3,j_1,\cdots, j_3}a_{i_1,j_1}^2a^2_{i_2,j_2}a^2_{i_3,j_2}a_{i_3,j_3}^2 =12\nu_4{{\rm tr}}{\left(}{\left(}{\mathbf}A'{\mathbf}D_{{\mathbf}B}{\mathbf}A{\right)}{\circ}{\left(}{\mathbf}A'{\mathbf}A{\right)}{\right)}{{\rm tr}}{\mathbf}B,\end{aligned}$$ $$\begin{aligned} \label{ct12} P_{3,4}=\nu_6{\mathbf}C_2^1\sum_{i_1,i_2,i_3,j_1, j_2}a_{i_1,j_1}^2a^2_{i_2,j_2}a^4_{i_3,j_2} =2\nu_6[{{\rm Diag}}({\mathbf}A'{\mathbf}A)'({\mathbf}A'^{{\circ}4}){\mathbf}1]{{\rm tr}}{\mathbf}B.\end{aligned}$$ We point out that the assumption that $H_0$ holds has not been needed up to now. From now on, in order to simplify the above formulas, we assume that $H_0$ holds. 
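Several of the $\Omega$-to-matrix translations used in this section can likewise be verified by brute force on a small random matrix. The helper `brute` below is hypothetical and written only for this check.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
B = A @ A.T
AA = A * A                       # Hadamard square A o A

def brute(phi):
    """Unrestricted Omega: sum over all i- and j-tuples of prod a^{phi}."""
    t, s = len(phi), len(phi[0])
    total = 0.0
    for ii in product(range(n), repeat=t):
        for jj in product(range(n), repeat=s):
            term = 1.0
            for tau in range(t):
                for rho in range(s):
                    if phi[tau][rho]:
                        term *= A[ii[tau], jj[rho]] ** phi[tau][rho]
            total += term
    return total

d = np.diag(B)
# Omega[(2,1,1,0),(0,1,1,2)] = Diag'(B) (B o B) Diag(B)
assert np.isclose(brute([(2, 1, 1, 0), (0, 1, 1, 2)]), d @ (B * B) @ d)
# Omega[(1,1,1,1),(1,1,1,1)] = tr (B o B)^2
assert np.isclose(brute([(1, 1, 1, 1), (1, 1, 1, 1)]),
                  np.trace((B * B) @ (B * B)))
# Omega[(2,2),(2,2)] = tr ((A o A)(A o A)')^2
assert np.isclose(brute([(2, 2), (2, 2)]),
                  np.trace((AA @ AA.T) @ (AA @ AA.T)))
# Omega[(4),(4)] = 1' A^{o4} A'^{o4} 1
assert np.isclose(brute([(4,), (4,)]), np.sum((A ** 4) @ (A ** 4).T))
```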
Summarizing the calculations above, we obtain under $H_0$ $$\begin{aligned} {{\rm E}}{\mathbf}T_1=3\sum_ib_{i,i}^2+\nu_4\sum_{ij}a_{ij}^4=3{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^2,\end{aligned}$$ $$\begin{aligned} {{\rm E}}{\mathbf}T_2=n^{-1}{\left(}{\left(}n-p{\right)}^2+2{\left(}n-p{\right)}+\nu_4{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P){\right)},\end{aligned}$$ $$\begin{aligned} {\rm Var}{\mathbf}T_1=&72{{\rm Diag}}'({\mathbf}P) {\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}{{\rm Diag}}({\mathbf}P)+24{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}^2\\\notag &+\nu_4{\left(}96{{\rm tr}}{\mathbf}P {\mathbf}{D_P} {\mathbf}P {\mathbf}P^{{\circ}3}+72{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^3+36{{\rm Diag}}'({\mathbf}P) {\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}^2{{\rm Diag}}({\mathbf}P) {\right)}\\\notag &+\nu^2_4{\left(}18{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)^4+16{{\rm tr}}({\mathbf}P^{{\circ}3}{\mathbf}P)^2){\right)}\\\notag &+\nu_6{\left(}12{{\rm tr}}{\left(}{\left(}{\mathbf}P{\mathbf}D_{{\mathbf}P}{\mathbf}P {\right)}{\circ}{\left(}{\mathbf}P^{{\circ}2}{\mathbf}P^{{\circ}2}{\right)}{\right)}+16{{\rm tr}}{\mathbf}P {\mathbf}P^{{\circ}3}{\mathbf}P^{{\circ}3}{\right)}+\nu_8{\mathbf}1'({\mathbf}P^{{\circ}4}{\mathbf}P^{{\circ}4}){\mathbf}1,\end{aligned}$$ $$\begin{aligned} {\rm Var}({\mathbf}T_2)=\frac{8{\left(}n-p{\right)}^3+4\nu_4{\left(}n-p{\right)}^2{{\rm tr}}({\mathbf}P{\circ}{\mathbf}P)}{n^2}+O(1),\end{aligned}$$ and $$\begin{aligned} &{\rm Cov}({\mathbf}T_1,{\mathbf}T_2)\\\notag =&\frac{{\left(}n-p{\right)}}{n}{\left(}24{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}+16\nu_4{{\rm tr}}({\mathbf}P {\mathbf}P^{{\circ}3})+12\nu_4{{\rm tr}}{\left(}{\left(}{\mathbf}P{\mathbf}D_{{\mathbf}P}{\mathbf}P{\right)}{\circ}{\mathbf}P{\right)}+2\nu_6[{{\rm Diag}}({\mathbf}P)'({\mathbf}P^{{\circ}4}){\mathbf}1]{\right)}.\end{aligned}$$ The proof of the main theorem ----------------------------- 
Define a function $f(x,y)=\frac{x}{y}-1$. One may verify that $f_x(x,y)=\frac{1}{y},$ $f_y(x,y)=-\frac{x}{y^2}$, where $f_x(x,y)$ and $f_y(x,y)$ are the first order partial derivative. Since ${\mathbf}T=\frac{{\mathbf}T_1}{{\mathbf}T_2}-1,$ using the delta method, we have under $H_0$, $${{\rm E}}{\mathbf}T=f({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2)={\left(}\frac{3n{{\rm tr}}{\left(}{\mathbf}P{\circ}{\mathbf}P{\right)}}{(n-p)^2+2{\left(}n-p{\right)}}-1{\right)},$$ $$\begin{aligned} {\rm {Var}} {\mathbf}T=(f_x({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2),f_y({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2))\Sigma(f_x({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2),f_y({{\rm E}}{\mathbf}T_1,{{\rm E}}{\mathbf}T_2))'.\end{aligned}$$ The proof of the main theorem is complete. [10]{} Adelchi Azzalini and Adrian Bowman. On the use of nonparametric regression for checking linear relationships. , pages 549–557, 1993. Zhidong Bai and Jack W Silverstein. , volume 20. Springer, 2010. Trevor S Breusch and Adrian R Pagan. A simple test for heteroscedasticity and random coefficient variation. , pages 1287–1294, 1979. R Dennis Cook and Sanford Weisberg. Diagnostics for heteroscedasticity in regression. , 70(1):1–10, 1983. Holger Dette and Axel Munk. Testing heteroscedasticity in nonparametric regression. , 60(4):693–708, 1998. Holger Dette, Axel Munk, and Thorsten Wagner. Estimating the variance in nonparametric regression—what is a reasonable choice? , 60(4):751–764, 1998. Herbert Glejser. A new test for heteroskedasticity. , 64(325):316–323, 1969. Michael J Harrison and Brendan PM McCabe. A test for heteroscedasticity based on ordinary least squares residuals. , 74(366a):494–499, 1979. S John. Some optimal multivariate tests. , 58(1):123–127, 1971. Zhaoyuan Li and Jianfeng Yao. Homoscedasticity tests valid in both low and high-dimensional regressions. , 2015. Gary C McDonald and Richard C Schwing. Instabilities of regression estimates relating air pollution to mortality. 
[^1]: Zhidong Bai is partially supported by a grant NSF China 11571067

[^2]: G. M. Pan was partially supported by a MOE Tier 2 grant 2014-T2-2-060 and by a MOE Tier 1 Grant RG25/14 at the Nanyang Technological University, Singapore.

[^3]: Yanqing Yin was partially supported by a project of China Scholarship Council
--- abstract: 'In this article we relate word and subgroup growth to certain functions that arise in the quantification of residual finiteness. One consequence of this endeavor is a pair of results that equate the nilpotency of a finitely generated group with the asymptotic behavior of these functions. The second half of this article investigates the asymptotic behavior of two of these functions. Our main result in this arena resolves a question of Bogopolski from the Kourovka notebook concerning lower bounds of one of these functions for nonabelian free groups.' author: - 'K. Bou-Rabee[^1]  and D. B. McReynolds[^2]' title: | **Asymptotic growth and\ least common multiples in groups** --- 1991 MSC classes: 20F32, 20E26 keywords: *free groups, hyperbolic groups, residual finiteness, subgroup growth, word growth.* Introduction ============ The goals of the present article are to examine the interplay between word and subgroup growth, and to quantify residual finiteness, a topic motivated and described by the first author in [@Bou]. These two goals have an intimate relationship that will be illustrated throughout this article. Our focus begins with the interplay between word and subgroup growth.
Recall that for a fixed finite generating set $X$ of $\Gamma$ with associated word metric ${\left\vert \left\vert \cdot\right\vert\right\vert}_X$, word growth investigates the asymptotic behavior of the function $$\operatorname{w}_{\Gamma,X}(n) = {\left\vert{\left\{\gamma \in \Gamma~:~ {\left\vert \left\vert \gamma\right\vert\right\vert}_X \leq n\right\}}\right\vert},$$ while subgroup growth investigates the asymptotic behavior of the function $$\operatorname{s}_\Gamma(n) = {\left\vert{\left\{\Delta \lhd \Gamma~:~ [\Gamma:\Delta]\leq n\right\}}\right\vert}.$$ To study the interaction between word and subgroup growth we propose the first of a pair of questions: **Question 1.** *What is the smallest integer $\operatorname{F}_{\Gamma,X}(n)$ such that for every word $\gamma$ in $\Gamma$ of word length at most $n$, there exists a finite index normal subgroup of index at most $\operatorname{F}_{\Gamma,X}(n)$ that fails to contain $\gamma$?* To see that the asymptotic behavior of $\operatorname{F}_{\Gamma,X}(n)$ measures the interplay between word and subgroup growth, we note the following inequality (see Section \[Preliminary\] for a simple proof): $$\label{BasicInequality} \log (\operatorname{w}_{\Gamma,X}(n)) \leq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n))\log (\operatorname{F}_{\Gamma,X}(2n)).$$ Our first result, which relies on Inequality (\[BasicInequality\]), is the following. \[DivisibilityLogGrowth\] If $\Gamma$ is a finitely generated linear group, then the following are equivalent: - $\operatorname{F}_{\Gamma,X}(n) \leq (\log(n))^r$ for some $r$. - $\Gamma$ is virtually nilpotent. For finitely generated linear groups that are not virtually nilpotent, Theorem \[DivisibilityLogGrowth\] implies $\operatorname{F}_{\Gamma,X}(n) \nleq (\log(n))^r$ for any $r >0$. For this class of groups, we can improve this lower bound. Precisely, we have the following result—see Section \[Preliminary\] for the definition of $\preceq$.
\[basiclowerbound\] Let $\Gamma$ be a group that contains a nonabelian free group of rank $m$. Then $$n^{1/3} \preceq \operatorname{F}_{\Gamma,X}(n).$$ The motivation for the proof of Theorem \[basiclowerbound\] comes from the study of $\operatorname{F}_{{\ensuremath{{\ensuremath{\mathbf{Z}}}}},X}(n)$, where the Prime Number Theorem and least common multiples provide lower and upper bounds for $\operatorname{F}_{{\ensuremath{{\ensuremath{\mathbf{Z}}}}},X}(n)$. In Section \[FreeGroupGrowth\], we extend this approach by generalizing least common multiples to finitely generated groups (a similar approach was also taken in the article of Hadad [@Hadad]). Indeed with this analogy, Theorem \[basiclowerbound\] and the upper bound of $n^3$ established in [@Bou], [@Rivin] can be viewed as a weak Prime Number Theorem for free groups since the Prime Number Theorem yields $\operatorname{F}_{\ensuremath{{\ensuremath{\mathbf{Z}}}}}(n) \simeq \log(n)$. Recently, Kassabov–Matucci [@KM] improved the lower bound of $n^{1/3}$ to $n^{2/3}$. A reasonable guess is that $\operatorname{F}_{F_m,X}(n) \simeq n$, though presently neither the upper nor the lower bound is known. We refer the reader to [@KM] for additional questions and conjectures. There are other natural ways to measure the interplay between word and subgroup growth. Let $B_{\Gamma,X}(n)$ denote the $n$–ball in $\Gamma$ for the word metric associated to the generating set $X$.
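The Prime Number Theorem enters through the estimate $\log(\operatorname{lcm}(1,\dots,n)) \sim n$, which drives the asymptotic $\operatorname{F}_{\ensuremath{\mathbf{Z}}}(n) \simeq \log(n)$. This estimate is easy to probe numerically; the sketch below is purely illustrative and is not part of the paper's argument.

```python
from functools import reduce
from math import lcm, log

def log_lcm(n):
    # log of lcm(1, ..., n); by the Prime Number Theorem this is
    # asymptotic to n (it equals the Chebyshev function psi(n)).
    return log(reduce(lcm, range(1, n + 1)))

# the ratio log(lcm(1, ..., n)) / n tends to 1
for n in (10, 100, 1000):
    print(n, round(log_lcm(n) / n, 4))
```

Already at $n = 1000$ the ratio is within a percent of $1$, matching the bounds $n/2 \leq \log(\operatorname{lcm}(1,\dots,n)) \leq 3n/2$ used later in Section \[toughlowerboundSection\].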
Our second measurement is motivated by the following question—in the statement, $B_{\Gamma,X}(n)$ is the metric $n$–ball with respect to the word metric ${\left\vert \left\vert \cdot\right\vert\right\vert}_X$: **Question 2.** *What is the cardinality $\operatorname{G}_{\Gamma,X}(n)$ of the smallest finite group $Q$ such that there exists a surjective homomorphism ${\varphi}{\ensuremath{\colon}}\Gamma \to Q$ with the property that ${\varphi}$ restricted to $B_{\Gamma,X}(n)$ is injective?* We call $\operatorname{G}_{\Gamma,X}(n)$ the *residual girth function* and relate $\operatorname{G}_{\Gamma,X}(n)$ to $\operatorname{F}_{\Gamma,X}$ and $\operatorname{w}_{\Gamma,X}(n)$ for a class of groups containing non-elementary hyperbolic groups; Hadad [@Hadad] studied group laws on finite groups of Lie type, a problem that is related to residual girth and the girth of a Cayley graph for a finite group. Specifically, we obtain the following inequality (see Section \[FreeGroupGrowth\] for a precise description of the class of groups for which this inequality holds): $$\label{BasicGirthEquation} \operatorname{G}_{\Gamma,X}(n/2) \leq \operatorname{F}_{\Gamma,X}{\left( 6n(\operatorname{w}_{\Gamma,X}(n))^{2} \right) }.$$ Our next result shows that residual girth functions enjoy the same growth dichotomy as word and subgroup growth—see [@gromov] and [@lubsegal-2003]. \[GirthPolynomialGrowth\] If $\Gamma$ is a finitely generated group then the following are equivalent. - $\operatorname{G}_{\Gamma,X}(n) \leq n^r$ for some $r$. - $\Gamma$ is virtually nilpotent. The asymptotic growth of $\operatorname{F}_{\Gamma,X}(n)$, $\operatorname{G}_{\Gamma,X}(n)$, and related functions arise in quantifying residual finiteness, a topic introduced in [@Bou] (see also the recent articles of the authors [@BM], Hadad [@Hadad], Kassabov–Mattucci [@KM], and Rivin [@Rivin]). Quantifying residual finiteness amounts to the study of so-called divisibility functions. 
Given a finitely generated, residually finite group $\Gamma$, we define the *divisibility function* $\operatorname{D}_\Gamma{\ensuremath{\colon}}\Gamma^\bullet {\longrightarrow}{\ensuremath{{\ensuremath{\mathbf{N}}}}}$ by $$\operatorname{D}_\Gamma(\gamma) = \min {\left\{[\Gamma:\Delta] ~:~ \gamma \notin \Delta\right\}}.$$ The associated *normal divisibility function* for normal, finite index subgroups is defined in an identical way and will be denoted by $\operatorname{D}_{\Gamma}^\lhd$. It is a simple matter to see that $\operatorname{F}_{\Gamma,X}(n)$ is the maximum value of $\operatorname{D}_{\Gamma}^\lhd$ over all non-trivial elements in $B_{\Gamma,X}(n)$. We will denote the associated maximum of $\operatorname{D}_\Gamma$ over this set by $\max \operatorname{D}_\Gamma (n)$. The rest of the introduction is devoted to a question of Oleg Bogopolski, which concerns $\max \operatorname{D}_{\Gamma,X}(n)$. It was established in [@Bou] that $\log(n) \preceq \max \operatorname{D}_{\Gamma,X}(n)$ for any finitely generated group with an element of infinite order (this was also shown by [@Rivin]). For a nonabelian free group $F_m$ of rank $m$, Bogopolski asked whether $\max \operatorname{D}_{F_m,X}(n) \simeq \log(n)$ (see Problem 15.35 in the Kourovka notebook [@TheBook]). Our next result answers Bogopolski’s question in the negative—we again refer the reader to Section \[Preliminary\] for the definition of $\preceq$. \[toughlowerbound\] If $m>1$, then $\max \operatorname{D}_{F_m,X}(n) \npreceq \log(n)$. We prove Theorem \[toughlowerbound\] in Section \[toughlowerboundSection\] using results from Section \[FreeGroupGrowth\]. The first part of the proof of Theorem \[toughlowerbound\] utilizes the material established for the derivation of Theorem \[basiclowerbound\]. The second part of the proof of Theorem \[toughlowerbound\] is topological in nature, and involves a careful study of finite covers of the figure eight. 
It is also worth noting that the lower bound provided by our proof only barely exceeds the proposed upper bound of $\log(n)$. In particular, at present we cannot rule out the upper bound $(\log(n))^2$. In addition, to our knowledge the current best upper bound is $n/2 + 2$, a result established recently by Buskin [@Bus]. In comparison to our other results, Theorem \[toughlowerbound\] is the most difficult to prove and is also the most surprising. Consequently, the reader should view Theorem \[toughlowerbound\] as our main result. #### **Acknowledgements.** Foremost, we are extremely grateful to Benson Farb for his inspiration, comments, and guidance. We would like to thank Oleg Bogopolski, Emmanuel Breuillard, Jason Deblois, Jordan Ellenberg, Tsachik Gelander, Uzy Hadad, Frédéric Haglund, Ilya Kapovich, Martin Kassabov, Larsen Louder, Justin Malestein, Francesco Matucci, and Igor Rivin for several useful conversations and their interest in this article. Finally, we extend thanks to Tom Church, Blair Davey, and Alex Wright for reading over earlier drafts of this paper. The second author was partially supported by an NSF postdoctoral fellowship. Divisibility and girth functions {#Preliminary} ================================ In this introductory section, we lay out some of the basic results we require in the sequel. For some of this material, we refer the reader to [@Bou Section 1]. #### **Notation.** Throughout, $\Gamma$ will denote a finitely generated group, $X$ a fixed finite generating set for $\Gamma$, and ${\left\vert \left\vert \cdot\right\vert\right\vert}_X$ will denote the word metric. For $\gamma \in \Gamma$, ${\left< \gamma \right>}$ will denote the cyclic subgroup generated by $\gamma$ and $\overline{{\left< \gamma \right>}}$ the normal closure of ${\left< \gamma \right>}$. For any subset $S \subset \Gamma$ we set $S^\bullet = S-1$. #### **1. Function comparison and basic facts**.
For a pair of functions $f_1,f_2{\ensuremath{\colon}}{\ensuremath{{\ensuremath{\mathbf{N}}}}}\to {\ensuremath{{\ensuremath{\mathbf{N}}}}}$, by $f_1 \preceq f_2$, we mean that there exists a constant $C$ such that $f_1(n) \leq Cf_2(Cn)$ for all $n$. In the event that $f_1 \preceq f_2$ and $f_2 \preceq f_1$, we will write $f_1 \simeq f_2$. This notion of comparison is well suited to the functions studied in this paper. We summarize some of the basic results from [@Bou] for completeness. \[DivisibilityAsymptoticLemma\] Let $\Gamma$ be a finitely generated group. - If $X,Y$ are finite generating sets for $\Gamma$ then $\operatorname{F}_{\Gamma,X} \simeq \operatorname{F}_{\Gamma,Y}$. - If $\Delta$ is a finitely generated subgroup of $\Gamma$ and $X,Y$ are finite generating sets for $\Gamma,\Delta$ respectively, then $\operatorname{F}_{\Delta,Y} \preceq \operatorname{F}_{\Gamma,X}$. - If $\Delta$ is a finite index subgroup of $\Gamma$ with $X,Y$ as in (b), then $\operatorname{F}_{\Gamma,X} \preceq (\operatorname{F}_{\Delta,Y})^{[\Gamma:\Delta]}$. We also have a version of Lemma \[DivisibilityAsymptoticLemma\] for residual girth functions. \[GirthAsymptoticLemma\] Let $\Gamma$ be a finitely generated group. - If $X,Y$ are finite generating sets for $\Gamma$, then $\operatorname{G}_{\Gamma,X} \simeq \operatorname{G}_{\Gamma,Y}$. - If $\Delta$ is a finitely generated subgroup of $\Gamma$ and $X,Y$ are finite generating sets for $\Gamma,\Delta$ respectively, then $\operatorname{G}_{\Delta,Y} \preceq \operatorname{G}_{\Gamma,X}$. - If $\Delta$ is a finite index subgroup of $\Gamma$ with $X,Y$ as in (b), then $\operatorname{G}_{\Gamma,X} \preceq (\operatorname{G}_{\Delta,Y})^{[\Gamma:\Delta]}$. As the proof of Lemma \[GirthAsymptoticLemma\] is straightforward, we have opted to omit it for sake of brevity. As a consequence of Lemmas \[DivisibilityAsymptoticLemma\] and \[GirthAsymptoticLemma\], we occasionally suppress the dependence of the generating set in our notation. 
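The relation $\preceq$ can be made concrete on particular pairs of functions by testing the defining inequality $f_1(n) \leq C f_2(Cn)$ for a proposed witness $C$ on an initial segment. The helper below is purely illustrative (a finite check can suggest, but never prove, domination); the functions and the witness $C=3$ are hypothetical choices for the demonstration.

```python
from math import log

def dominated(f1, f2, C, N=100_000):
    # Check the defining inequality f1(n) <= C * f2(C * n)
    # for the proposed constant C over the range 1..N.
    return all(f1(n) <= C * f2(C * n) for n in range(1, N + 1))

# log(n) is dominated by n^(1/3): the witness C = 3 works on this range
print(dominated(lambda n: log(n), lambda n: n ** (1 / 3), C=3))   # expected: True

# ... while n^(1/3) outgrows C * log(C * n) well before n = 100000
print(dominated(lambda n: n ** (1 / 3), lambda n: log(n), C=3))   # expected: False
```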
#### **2. The basic inequality.** We now derive (\[BasicInequality\]) from the introduction. For the reader’s convenience, recall (\[BasicInequality\]) is $$\log (\operatorname{w}_{\Gamma,X}(n)) \leq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n))\log (\operatorname{F}_{\Gamma,X}(2n)).$$ We may assume that $\Gamma$ is residually finite as otherwise $\operatorname{F}_\Gamma(n)$ is eventually infinite for sufficiently large $n$ and the inequality is trivial. By definition, for each word $\gamma \in B_{\Gamma,X}^\bullet(2n)$, there exists a finite index, normal subgroup $\Delta_\gamma$ in $\Gamma$ such that $\gamma \notin \Delta_\gamma$ and $[\Gamma:\Delta_\gamma] \leq \operatorname{F}_{\Gamma,X}(2n)$. Setting $\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)$ to be the intersection of all finite index, normal subgroups of index at most $\operatorname{F}_{\Gamma,X}(2n)$, we assert that $B_{\Gamma,X}(n)$ injects into the quotient $\Gamma/\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)$. Indeed, if two distinct elements $\gamma_1,\gamma_2 \in B_{\Gamma,X}(n)$ had the same image, the nontrivial element $\gamma_1\gamma_2^{-1}$, of word length at most $2n$, would reside in $\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)$. However, by construction, every nontrivial element of word length at most $2n$ fails to lie in some $\Delta_\gamma$ and therefore has nontrivial image in $\Gamma/\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)$.
In particular, we see that $$\begin{aligned} \operatorname{w}_{\Gamma,X}(n) &= {\left\vert B_{\Gamma,X}(n)\right\vert} \leq {\left\vert \Gamma/\Omega_{\operatorname{F}_{\Gamma,X}(2n)}(\Gamma)\right\vert} \\ & \leq \prod_{\scriptsize{\begin{matrix} \Delta \lhd \Gamma \\ [\Gamma:\Delta]\leq \operatorname{F}_{\Gamma,X}(2n)\end{matrix}}} {\left\vert \Gamma/\Delta\right\vert} \\ &\leq \prod_{\scriptsize{\begin{matrix} \Delta \lhd \Gamma \\ [\Gamma:\Delta]\leq \operatorname{F}_{\Gamma,X}(2n)\end{matrix}}} \operatorname{F}_{\Gamma,X}(2n) \\ &\leq (\operatorname{F}_{\Gamma,X}(2n))^{\operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n))}.\end{aligned}$$ Taking the log of both sides, we obtain $$\log(\operatorname{w}_{\Gamma,X}(n)) \leq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n))\log(\operatorname{F}_{\Gamma,X}(2n)).$$ In fact, the proof of (\[BasicInequality\]) yields the following. Let $\Gamma$ be a finitely generated, residually finite group. Then $$\log (\operatorname{G}_{\Gamma,X}(n)) \leq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(2n)) \log(\operatorname{F}_{\Gamma,X}(2n)).$$ #### **3. An application of (\[BasicInequality\]).** We now derive the following as an application of (\[BasicInequality\]). \[BasicInequalityMainProp\] Let $\Gamma$ be a finitely generated, residually finite group. If there exists $\alpha > 1$ such that $\alpha^n \preceq \operatorname{w}_{\Gamma,X}(n)$, then $\operatorname{F}_{\Gamma,X}(n) \npreceq (\log n)^r$ for any $r \in {\ensuremath{{\ensuremath{\mathbf{R}}}}}$. Assume on the contrary that there exists $r \in {\ensuremath{{\ensuremath{\mathbf{R}}}}}$ such that $\operatorname{F}_{\Gamma,X} \preceq (\log(n))^r$.
In terms of $\preceq$ notation, inequality (\[BasicInequality\]) becomes: $$\log(\operatorname{w}_{\Gamma, X} (n)) \preceq \operatorname{s}_\Gamma (\operatorname{F}_{\Gamma, X}(n)) \log(\operatorname{F}_{\Gamma, X}(n)).$$ Taking the log of both sides, we obtain $$\log\log(\operatorname{w}_{\Gamma, X} (n)) \preceq \log(\operatorname{s}_\Gamma (\operatorname{F}_{\Gamma, X}(n)))+ \log(\log(\operatorname{F}_{\Gamma, X}(n))).$$ This inequality, in tandem with the assumptions $$\begin{aligned} \alpha^n &\preceq \operatorname{w}_{\Gamma,X}(n), \\ \operatorname{F}_{\Gamma,X}(n) &\preceq (\log(n))^r,\end{aligned}$$ and $\log(\operatorname{s}_\Gamma(n)) \preceq (\log(n))^2$ (see [@lubsegal-2003 Corollary 2.8]) gives $$\log(n) \preceq (\log\log(n))^2 + \log\log\log(n),$$ which is impossible. With Proposition \[BasicInequalityMainProp\], we can now prove Theorem \[DivisibilityLogGrowth\]. For the direct implication, we assume that $\Gamma$ is a finitely generated linear group with $\operatorname{F}_\Gamma \preceq (\log n)^r$ for some $r$. According to the Tits alternative, either $\Gamma$ is virtually solvable or $\Gamma$ contains a nonabelian free subgroup. In the latter case, $\Gamma$ visibly has exponential word growth and thus we derive a contradiction via Proposition \[BasicInequalityMainProp\]. In the case $\Gamma$ is virtually solvable, $\Gamma$ must also have exponential word growth unless $\Gamma$ is virtually nilpotent (see [@harpe-2000 Theorem VII.27]). This in tandem with Proposition \[BasicInequalityMainProp\] implies $\Gamma$ is virtually nilpotent. For the reverse implication, let $\Gamma$ be a finitely generated, virtually nilpotent group with finite index, nilpotent subgroup $\Gamma_0$. According to Theorem 0.2 in [@Bou], $\operatorname{F}_{\Gamma_0} \preceq (\log n)^r$ for some $r$. Combining this with Lemma \[DivisibilityAsymptoticLemma\] (c) yields $\operatorname{F}_\Gamma \preceq (\log n)^{r[\Gamma:\Gamma_0]}$.
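As a sanity check, inequality (\[BasicInequality\]) can be verified directly for $\Gamma = {\ensuremath{\mathbf{Z}}}$ with $X = \{1\}$: here $\operatorname{w}_{{\ensuremath{\mathbf{Z}}},X}(n) = 2n+1$, every subgroup $k{\ensuremath{\mathbf{Z}}}$ is normal of index $k$ so $\operatorname{s}_{\ensuremath{\mathbf{Z}}}(n) = n$, and $\operatorname{F}_{{\ensuremath{\mathbf{Z}}},X}(n)$ is the maximum over $1 \leq m \leq n$ of the least $k \geq 2$ not dividing $m$. The sketch below is an illustration only, not part of the proof.

```python
from math import log

def least_nondivisor(m):
    # Normal divisibility function of Z: the index of the smallest
    # subgroup kZ (k >= 2) that fails to contain m.
    k = 2
    while m % k == 0:
        k += 1
    return k

def F_Z(n):
    # max of the divisibility function over the ball {1, ..., n}
    return max(least_nondivisor(m) for m in range(1, n + 1))

def w_Z(n):   # word growth of Z with generating set {1}
    return 2 * n + 1

def s_Z(n):   # Z has exactly one (normal) subgroup of each index k
    return n

# inequality (BasicInequality) holds on an initial segment
for n in range(1, 500):
    assert log(w_Z(n)) <= s_Z(F_Z(2 * n)) * log(F_Z(2 * n))
```

Note that $\operatorname{F}_{{\ensuremath{\mathbf{Z}}},X}$ jumps exactly at the values $\operatorname{lcm}(1,\dots,k-1)$, which is the mechanism behind $\operatorname{F}_{\ensuremath{\mathbf{Z}}}(n) \simeq \log(n)$.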
In the next two sections, we will prove Theorem \[basiclowerbound\]. In particular, for finitely generated linear groups that are not virtually solvable, we obtain an even better lower bound for $\operatorname{F}_{\Gamma,X}(n)$ than can be obtained using (\[BasicInequality\]). Namely, $n^{1/3} \preceq \operatorname{F}_{\Gamma,X}(n)$ for such groups. The class of non-nilpotent, virtually solvable groups splits into two classes depending on whether the rank of the group is finite or not. This is not the standard notion of rank but instead $$\textrm{rk}(\Gamma) = \max{\left\{ r(\Delta)~:~ \Delta \text{ is a finitely generated subgroup of } \Gamma\right\}},$$ where $$r(\Delta) = \min{\left\{{\left\vert Y\right\vert}~:~Y \text{ is a generating set for }\Delta\right\}}.$$ The class of virtually solvable groups with finite rank is known to have polynomial subgroup growth (see [@lubsegal-2003 Chapter 5]) and thus has a polynomial upper bound on normal subgroup growth. Using this upper bound with (\[BasicInequality\]) yields our next result. If $\Gamma$ is virtually solvable, finite rank, and not nilpotent, then $n^{1/d} \preceq \operatorname{F}_{\Gamma,X}(n)$ for some $d \in {\ensuremath{{\ensuremath{\mathbf{N}}}}}$.
For a non-nilpotent, virtually solvable group of finite rank, we have the inequalities: $$\begin{aligned} \alpha^n &\preceq \operatorname{w}_{\Gamma,X}(n) \\ \operatorname{s}_{\Gamma,X}(n) &\preceq n^m.\end{aligned}$$ Setting $d=2m$ and assuming $\operatorname{F}_{\Gamma,X}(n) \preceq n^{1/d}$, inequality (\[BasicInequality\]) yields the impossible inequality $$n \simeq \log(\alpha^n) \preceq \log(\operatorname{w}_{\Gamma,X}(n)) \preceq \operatorname{s}_\Gamma(\operatorname{F}_{\Gamma,X}(n))\log(\operatorname{F}_{\Gamma,X}(n)) \preceq (n^{1/d})^m \log(n^{1/d}) \simeq \sqrt{n}\log(n).$$ Virtually solvable groups $\Gamma$ with infinite $\textrm{rk}(\Gamma)$ cannot be handled in this way as there exist examples with $c^{n^{1/d}} \preceq \operatorname{s}_{\Gamma,X}(n)$ with $c>1$ and $d \in {\ensuremath{{\ensuremath{\mathbf{N}}}}}$. Least common multiples ====================== Let $\Gamma$ be a finitely generated group and $S {\subset}\Gamma^\bullet$ a finite subset. Associated to $S$ is the subgroup $L_S$ given by $$L_S = {\bigcap}_{\gamma \in S} \overline{{\left< \gamma \right>}}.$$ We define the *least common multiple of $S$* to be the set $$\operatorname{LCM}_{\Gamma,X}(S) = {\left\{\delta \in L_S^\bullet~:~ {\left\vert \left\vert \delta\right\vert\right\vert}_X \leq {\left\vert \left\vert \eta\right\vert\right\vert}_X \text{ for all }\eta \in L_S^\bullet\right\}}.$$ That is, $\operatorname{LCM}_{\Gamma,X}(S)$ is the set of nontrivial words in $L_S$ of minimal length in a fixed generating set $X$ of $\Gamma$. Finally, we set $$\operatorname{lcm}_{\Gamma,X}(S) = \begin{cases} {\left\vert \left\vert \delta\right\vert\right\vert}_X& \text{ if there exists }\delta \in \operatorname{LCM}_{\Gamma,X}(S), \\ 0 & \text{ if }\operatorname{LCM}_{\Gamma,X}(S) = \emptyset. \end{cases}$$ The following basic lemma shows the importance of least common multiples in the study of both $\operatorname{F}_\Gamma$ and $\operatorname{G}_\Gamma$.
\[WordLengthForLCM\] Let $S {\subset}\Gamma^\bullet$ be a finite set and $\delta \in \Gamma^\bullet$ have the following property: For any homomorphism ${\varphi}{\ensuremath{\colon}}\Gamma \to Q$, if $\ker {\varphi}\cap S \ne {\emptyset}$, then $\delta \in \ker {\varphi}$. Then $\operatorname{lcm}_{\Gamma,X}(S) \leq {\left\vert \left\vert \delta\right\vert\right\vert}_X$. To prove this, for each $\gamma \in S$, note that ${\varphi}_\gamma{\ensuremath{\colon}}\Gamma \to \Gamma/\overline{{\left< \gamma \right>}}$ is a homomorphism for which $\ker {\varphi}_\gamma \cap S \ne {\emptyset}$. By assumption, $\delta \in \ker {\varphi}_\gamma$ and thus in $\overline{{\left< \gamma \right>}}$ for each $\gamma \in S$. Therefore, $\delta \in L_S$ and the claim now follows from the definition of $\operatorname{lcm}_{\Gamma,X}(S)$. Lower bounds for free groups {#FreeGroupGrowth} ============================ In this section, using least common multiples, we will prove Theorem \[basiclowerbound\]. #### **1. Construct short least common multiples.** We begin with the following proposition. \[FreeCandidateLemma\] Let $\gamma_1,\dots,\gamma_n \in F_m^\bullet$ and ${\left\vert \left\vert \gamma_j\right\vert\right\vert}_X \leq d$ for all $j$. Then $$\operatorname{lcm}_{F_m,X}(\gamma_1,\dots,\gamma_n) \leq 6dn^2.$$ In the proof below, the reader will see that the key fact we utilize is the following. For a pair of non-trivial elements $\gamma_1,\gamma_2$ in a nonabelian free group, we can conjugate $\gamma_1$ by a generator $\mu \in X$ to ensure that $\mu^{-1}\gamma_1\mu$ and $\gamma_2$ do not commute. This fact will be used repeatedly. Let $k$ be the smallest natural number such that $n \leq 2^k$ (the inequality $2^k \leq 2n$ also holds).
We will construct an element $\gamma$ in $L_{{\left\{\gamma_1,\dots,\gamma_n\right\}}}$ such that $${\left\vert \left\vert \gamma\right\vert\right\vert}_X \leq 6d\cdot 4^k.$$ By Lemma \[WordLengthForLCM\], this implies the inequality asserted in the statement of the proposition. To this end, we augment the set ${\left\{\gamma_1,\dots,\gamma_n\right\}}$ by adding enough additional elements $\mu \in X$ such that our new set has precisely $2^k$ elements that we label ${\left\{\gamma_1,\dots,\gamma_{2^k}\right\}}$. Note that it does not matter if the elements we add to the set are distinct. For each pair $\gamma_{2i-1},\gamma_{2i}$, we replace $\gamma_{2i}$ by a conjugate $\mu_i\gamma_{2i}\mu_i^{-1}$ for $\mu_i \in X$ such that $[\gamma_{2i-1},\mu_i\gamma_{2i}\mu_i^{-1}]\ne 1$ and, in an abuse of notation, continue to denote this by $\gamma_{2i}$. We define a new set of elements ${\left\{\gamma_i^{(1)}\right\}}$ by setting $\gamma_i^{(1)} = [\gamma_{2i-1},\gamma_{2i}]$. Note that ${\left\vert \left\vert \gamma_i^{(1)}\right\vert\right\vert}_X \leq 4(d+2)$. We have $2^{k-1}$ elements in this new set and we repeat the above, again replacing $\gamma_{2i}^{(1)}$ with a conjugate by $\mu_i^{(1)}\in X$ if necessary to ensure that $\gamma_{2i-1}^{(1)}$ and $\gamma_{2i}^{(1)}$ do not commute. This yields $2^{k-2}$ non-trivial elements $\gamma_i^{(2)}=[\gamma_{2i-1}^{(1)},\gamma_{2i}^{(1)}]$ with ${\left\vert \left\vert \gamma_i^{(2)}\right\vert\right\vert}_X \leq 4(4(d+2)+2)$. Continuing this inductively, at the $k$-th stage we obtain an element $\gamma_1^{(k)} \in L_S$ such that $${\left\vert \left\vert \gamma_1^{(k)}\right\vert\right\vert}_X \leq 4^kd + a_k,$$ where $a_k$ is defined inductively by $a_0=0$ and $$a_j = 4(a_{j-1}+2).$$ The assertion $$a_j = 2{\left( \sum_{\ell=1}^j 4^\ell \right) }$$ is validated by a straightforward induction.
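Before completing the length estimate, we note that the pairing scheme just described is concrete enough to simulate. The sketch below (illustrative, not part of the proof) encodes words in $F_2$ as integer lists ($\pm 1$ for $x^{\pm 1}$, $\pm 2$ for $y^{\pm 1}$), repeatedly replaces pairs by their commutators, conjugating the second entry by a generator whenever a pair happens to commute, and checks that the resulting word is nontrivial of length at most $6d\cdot 4^k$.

```python
def reduce_word(w):
    # free reduction: cancel adjacent inverse letters
    out = []
    for a in w:
        if out and out[-1] == -a:
            out.pop()
        else:
            out.append(a)
    return out

def inv(w):
    return [-a for a in reversed(w)]

def commutator(u, v):
    return reduce_word(u + v + inv(u) + inv(v))

def lcm_word(words):
    # Pair words off and replace each pair by a commutator, conjugating
    # the second entry by a generator whenever the pair commutes.
    words = [reduce_word(w) for w in words]
    while len(words) > 1:
        if len(words) % 2:
            words.append([1])            # pad with the generator x
        nxt = []
        for g, h in zip(words[::2], words[1::2]):
            for mu in ([], [2], [1]):    # try conjugating by 1, y, x
                c = commutator(g, reduce_word(mu + h + inv(mu)))
                if c:                    # nontrivial commutator found
                    break
            nxt.append(c)
        words = nxt
    return words[0]

# S = {x, x^2, ..., x^8, y}; here d = 8 and k = 4 (|S| = 9 <= 2^4)
S = [[1] * i for i in range(1, 9)] + [[2]]
gamma = lcm_word(S)
assert gamma and len(gamma) <= 6 * 8 * 4 ** 4
print(len(gamma))
```

The conjugation retry always succeeds: if $g$ commuted with $h$, $xhx^{-1}$, and $yhy^{-1}$, all three would lie in the cyclic centralizer of $g$, forcing $x^2$ and $y^2$ to commute in $F_2$, which is impossible.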
Thus, we have $${\left\vert \left\vert \gamma_1^{(k)}\right\vert\right\vert}_X \leq 4^kd+a_k \leq 3{\left( 4^kd + 4^k \right) } \leq 6d(4^k).$$ An immediate corollary of Proposition \[FreeCandidateLemma\] is the following. \[PrimeNumberTheorem\] $$\operatorname{lcm}_{F_m,X}(B_{F_m,X}^\bullet(n)) \leq 6n(\operatorname{w}_{F_m,X}(n))^2.$$ #### **2. Proof of Theorem \[basiclowerbound\].** We now give a short proof of Theorem \[basiclowerbound\]. We begin with the following proposition. \[freelowerbound\] Let $\Gamma$ be a nonabelian free group of rank $m$. Then $n^{1/3} \preceq \operatorname{F}_{\Gamma,X}(n)$. For $x \in X$, set $$S = {\left\{x,x^2,\dots,x^{n}\right\}}.$$ By Proposition \[FreeCandidateLemma\], if $\delta \in \operatorname{LCM}_{F_m,X}(S)$, then $${\left\vert \left\vert \delta\right\vert\right\vert}_X \leq 6n^3.$$ On the other hand, if ${\varphi}{\ensuremath{\colon}}F_m \to Q$ is a surjective homomorphism with ${\varphi}(\delta) \ne 1$, the restriction of ${\varphi}$ to $S$ is injective. In particular, $$\operatorname{D}_{F_m,X}^\lhd(\delta) \geq n.$$ In total, this shows that $n^{1/3} \preceq \operatorname{F}_{F_m,X}$. We now prove Theorem \[basiclowerbound\]. Let $\Gamma$ be a finitely generated group with finite generating set $X$. By assumption, $\Gamma$ contains a nonabelian free group $\Delta$. By passing to a subgroup, we may assume that $\Delta$ is finitely generated with free generating set $Y$. According to Lemma \[DivisibilityAsymptoticLemma\] (b), we know that $\operatorname{F}_{\Delta,Y}(n) \preceq \operatorname{F}_{\Gamma,X}(n)$. By Proposition \[freelowerbound\], we also have $n^{1/3} \preceq \operatorname{F}_{\Delta,Y}(n)$. The marriage of these two facts yields Theorem \[basiclowerbound\]. #### **3. The basic girth inequality.** We are now ready to prove (\[BasicGirthEquation\]) for free groups.
Again, for the reader’s convenience, recall that (\[BasicGirthEquation\]) is $$\operatorname{G}_{F_m,X}(n/2) \leq \operatorname{F}_{F_m,X}{\left( 6n (\operatorname{w}_{F_m,X}(n))^{2} \right) }.$$ Let $\delta \in \operatorname{LCM}(B_{F_m,X}^\bullet(n))$ and let $Q$ be a finite group of order $\operatorname{D}_{F_m,X}^\lhd(\delta)$ such that there exists a homomorphism ${\varphi}{\ensuremath{\colon}}F_m \to Q$ with ${\varphi}(\delta)\ne 1$. Since $\delta \in L_{B_{F_m,X}(n)}$, for each $\gamma$ in $B_{F_m,X}^\bullet(n)$, we also know that ${\varphi}(\gamma) \ne 1$. In particular, it must be that ${\varphi}$ restricted to $B_{F_m,X}^\bullet(n/2)$ is injective. The definitions of $\operatorname{G}_{F_m,X}$ and $\operatorname{F}_{F_m,X}$ with Corollary \[PrimeNumberTheorem\] yield $$\operatorname{G}_{F_m,X}(n/2) \leq \operatorname{D}^\lhd_{F_m,X}(\delta) \leq \operatorname{F}_{F_m,X}({\left\vert \left\vert \delta\right\vert\right\vert}_X)\leq \operatorname{F}_{F_m,X}(6n(\operatorname{w}_{F_m,X}(n))^2),$$ and thus the desired inequality. #### **4. Proof of Theorem \[GirthPolynomialGrowth\].** We are also ready to prove Theorem \[GirthPolynomialGrowth\]. We must show that a finitely generated group $\Gamma$ is virtually nilpotent if and only if $\operatorname{G}_{\Gamma,X}$ has at most polynomial growth. If $\operatorname{G}_{\Gamma,X}$ is bounded above by a polynomial in $n$, as $\operatorname{w}_{\Gamma,X} \leq \operatorname{G}_{\Gamma,X}$, it must be that $\operatorname{w}_{\Gamma,X}$ is bounded above by a polynomial in $n$. Hence, by Gromov’s Polynomial Growth Theorem, $\Gamma$ is virtually nilpotent. Suppose now that $\Gamma$ is virtually nilpotent and set $\Gamma_{\textrm{Fitt}}$ to be the Fitting subgroup of $\Gamma$. It is well known (see [@Dek]) that $\Gamma_{\textrm{Fitt}}$ is torsion free and finite index in $\Gamma$. By Lemma \[GirthAsymptoticLemma\] (c), we may assume that $\Gamma$ is torsion free.
In this case, $\Gamma$ admits a faithful, linear representation $\psi$ into ${\ensuremath{\mathbf{U}}}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}})$, the group of upper triangular, unipotent matrices with integer coefficients in $\operatorname{GL}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}})$ (see [@Dek]). Under this injective homomorphism, the elements in $B_{\Gamma,X}(n)$ have matrix entries with norm bounded above by $Cn^k$, where $C$ and $k$ depend only on $\Gamma$. Specifically, we have $${\left\vert(\psi(\gamma))_{i,j}\right\vert} \leq C{\left\vert \left\vert \gamma\right\vert\right\vert}_X^k.$$ This is a consequence of the Baker–Campbell–Hausdorff formula (see [@Dek]). Let $r$ be the reduction homomorphism $$r {\ensuremath{\colon}}{\ensuremath{\mathbf{U}}}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}}) {\longrightarrow}{\ensuremath{\mathbf{U}}}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}}/ 2Cn^k {\ensuremath{{\ensuremath{\mathbf{Z}}}}})$$ defined by reducing matrix coefficients modulo $2 Cn^k$. By selection, the restriction of $r$ to $B_{\Gamma,X}^\bullet(n)$ is injective. So we have $$\label{CardinalityInequality} {\left\vert r(\psi(\Gamma))\right\vert} \leq {\left\vert{\ensuremath{\mathbf{U}}}(d,{\ensuremath{{\ensuremath{\mathbf{Z}}}}}/ 2Cn^k {\ensuremath{{\ensuremath{\mathbf{Z}}}}})\right\vert} \leq (2Cn^k)^{d^2}.$$ This inequality gives $$\operatorname{G}_{\Gamma,X}(n) \leq (2Cn^k)^{d^2} = C_1n^{kd^2}.$$ Therefore, $\operatorname{G}_{\Gamma,X}(n)$ is bounded above by a polynomial function in $n$ as claimed. #### **5. Generalities.** The results and methods for the free group in this section can be generalized. Specifically, we require the following two properties: - $\Gamma$ has an element of infinite order. - For all non-trivial $\gamma_1,\gamma_2 \in \Gamma$, there exists $\mu_{1,2} \in X$ such that $[\gamma_1,\mu_{1,2}\gamma_2\mu_{1,2}^{-1}]\ne 1$. With this, we can state a general result established by the same method used for the free group.
Let $\Gamma$ be a finitely generated group that satisfies (i) and (ii). Then - $\operatorname{G}_{\Gamma,X}(n/2) \leq \operatorname{F}_{\Gamma,X}{\left( 6n (\operatorname{w}_{\Gamma,X}(n))^{2} \right) }$. - $n^{1/3} \preceq \operatorname{F}_{\Gamma,X}$. The proof of Theorem \[toughlowerbound\] {#toughlowerboundSection} ======================================== In this section we prove Theorem \[toughlowerbound\]. For the sake of clarity, before commencing with the proof, we outline the basic strategy. We will proceed via contradiction, assuming that $\max \operatorname{D}_{F_m}(n) \preceq \log n$. We will apply this assumption to a family of test elements $\delta_n$ derived from least common multiples of certain simple sets $S(n)$ to produce a family of finite index subgroups $\Delta_n$ in $F_m$. Employing the Prime Number Theorem, we will obtain upper bounds (see (\[LinearBound\]) below) for the indices $[F_m:\Delta_n]$. Using covering space theory and a simple albeit involved inductive argument, we will derive the needed contradiction by showing the impossibility of these bounds. The remainder of this section is devoted to the details. Our goal is to show $\max \operatorname{D}_{F_m}(n) \npreceq \log(n)$ for $m \geq 2$. By Lemma 1.1 in [@Bou], it suffices to show this for $m=2$. To that end, set $\Gamma = F_2$ with free generating set $X={\left\{x,y\right\}}$, and $$S(n) = {\left\{x,x^2,\dots,x^{\operatorname{lcm}(1,\dots,n)}\right\}}.$$ We proceed by contradiction, assuming that $\max \operatorname{D}_\Gamma(n) \preceq \log(n)$. By definition, there exists a constant $C>0$ such that $\max \operatorname{D}_\Gamma(n) \leq C\log(Cn)$ for all $n$.
For any $\delta_n \in \operatorname{LCM}_{\Gamma,X}(S(n))$, this implies that there exists a finite index subgroup $\Delta_n < \Gamma$ such that $\delta_n \notin \Delta_n$ and $$[\Gamma:\Delta_n] \leq C\log(C{\left\vert \left\vert \delta_n\right\vert\right\vert}_X).$$ According to Proposition \[FreeCandidateLemma\], we also know that $${\left\vert \left\vert \delta_n\right\vert\right\vert}_X \leq D(\operatorname{lcm}(1,\dots,n))^3.$$ In tandem, this yields $$[\Gamma:\Delta_n] \leq C\log(CD(\operatorname{lcm}(1,\dots,n))^3).$$ By the Prime Number Theorem, we have $$\lim_{n \to {\infty}} \frac{\log(\operatorname{lcm}(1,\dots,n))}{n} = 1.$$ Therefore, there exists $N>0$ such that for all $n \geq N$ $$\frac{n}{2} \leq \log(\operatorname{lcm}(1,\dots,n)) \leq \frac{3n}{2}.$$ Combining this with the above, we see that there exists a constant $M>0$ such that for all $n\geq N$, $$\label{LinearBound} [\Gamma:\Delta_n] \leq C\log(CD) + \frac{9Cn}{2} \leq Mn.$$ Our task now is to show (\[LinearBound\]) cannot hold. In order to achieve the desired contradiction, we use covering space theory. With that goal in mind, let $S^1 \vee S^1$ be the wedge product of two circles and recall that we can realize $\Gamma$ as $\pi_1(S^1 \vee S^1,*)$ by identifying $x,y$ with generators for the fundamental groups of the respective pair of circles. Here, $*$ serves as both the base point and the identifying point for the wedge product. According to covering space theory, associated to the conjugacy class $[\Delta_n]$ of $\Delta_n$ in $\Gamma$, is a finite cover $Z_n$ of $S^1 \vee S^1$ of covering degree $[\Gamma:\Delta_n]$ (unique up to covering isomorphisms). Associated to a conjugacy class $[\gamma]$ in $\Gamma$ is a closed curve $c_\gamma$ on $S^1 \vee S^1$. The distinct lifts of $c_\gamma$ to $Z_n$ correspond to the distinct $\Delta_n$–conjugacy classes of $\gamma$ in $\Gamma$. The condition that $\gamma \notin \Delta_n$ implies that at least one such lift cannot be a closed loop. 
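The Prime Number Theorem estimate invoked above is easy to verify numerically. The following sketch (purely illustrative, not part of the proof) checks the bounds $n/2 \leq \log(\operatorname{lcm}(1,\dots,n)) \leq 3n/2$ for a few moderate values of $n$:

```python
import math
from functools import reduce

def lcm_upto(n):
    """lcm(1, ..., n); its logarithm is the second Chebyshev function psi(n)."""
    return reduce(math.lcm, range(1, n + 1), 1)

# The bounds n/2 <= log(lcm(1,...,n)) <= 3n/2 used in the proof already
# hold for moderate n (the PNT guarantees them for all n large enough):
for n in (50, 100, 200):
    L = math.log(lcm_upto(n))
    assert n / 2 <= L <= 3 * n / 2
```

Note that `math.lcm` requires Python 3.9 or later; for earlier versions one can use `a * b // math.gcd(a, b)` instead.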
Removing the edges of $Z_n$ associated to the lifts of the closed curve associated to $[y]$, we get a disjoint union of topological circles, each of which is a union of edges associated to the lifts of the loop associated to $[x]$. We call these circles $x$–cycles and say the length of an $x$–cycle is the total number of edges of the cycle. The sum of the lengths over all the distinct $x$–cycles is precisely $[\Gamma:\Delta_n]$. For an element of the form $x^\ell$, each lift of the associated curve $c_{x^\ell}$ is contained on an $x$–cycle. Using elements of the form $x^\ell$, we will produce enough sufficiently long $x$–cycles in order to contradict (\[LinearBound\]). We begin with the element $x^{\operatorname{lcm}(1,\dots,m)}$ for $1 \leq m \leq n$. This will serve as both the base case for an inductive proof and will allow us to introduce some needed notation. By construction, some $\Gamma$–conjugate of $x^{\operatorname{lcm}(1,\dots,m)}$ is not contained in $\Delta_n$. Indeed, $x^\ell$ for any $1 \leq \ell \leq \operatorname{lcm}(1,\dots,n)$ is never contained in the intersection of all conjugates of $\Delta_n$. Setting $c_m$ to be the curve associated to $x^{\operatorname{lcm}(1,\dots,m)}$, this implies that there exists a lift of $c_m$ that is not closed in $Z_n$. Setting $C_n^{(1)}$ to be the $x$–cycle containing this lift, we see that the length of $C_n^{(1)}$ must be at least $m$. Otherwise, some power $x^\ell$ for $1 \leq \ell \leq m$ would have a closed lift for this base point and this would force this lift of $c_m$ to be closed. Setting $k_{n,m}^{(1)}$ to be the associated length, we see that $m \leq k_{n,m}^{(1)} \leq Mn$ when $n \geq N$. 
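The lifting criterion used repeatedly here can be phrased concretely: a finite cover of $S^1 \vee S^1$ determines a permutation action of $x$ on the fiber, and the lift of the curve for $x^\ell$ based at a vertex closes up exactly when the length of the $x$–cycle through that vertex divides $\ell$. A toy sketch (the permutation below is illustrative only, not derived from any $\Delta_n$):

```python
def x_cycle_length(perm, v):
    """Length of the x-cycle through vertex v, where perm is the
    permutation describing the action of x on the fiber."""
    length, w = 1, perm[v]
    while w != v:
        w = perm[w]
        length += 1
    return length

def lift_is_closed(perm, v, ell):
    """The lift of the curve for x^ell based at v closes up iff the
    x-cycle length through v divides ell."""
    return ell % x_cycle_length(perm, v) == 0

# A 7-sheeted toy cover on which x acts as a 5-cycle and a 2-cycle:
perm = [1, 2, 3, 4, 0, 6, 5]
assert x_cycle_length(perm, 0) == 5
assert not lift_is_closed(perm, 0, 6)   # x^6 has a non-closed lift here
assert lift_is_closed(perm, 0, 10)      # any multiple of the length closes
```

This is exactly why a non-closed lift of $c_m$ forces the containing $x$–cycle to have length at least $m$.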
Using the above as the base case, we claim the following: **Claim.** *For each positive integer $i$, there exists a positive integer $N_i \geq N$ such that for all $n \geq 8N_i$, there exist disjoint $x$–cycles $C_n^{(1)},\dots,C_n^{(i)}$ in $Z_n$ with respective lengths $k_n^{(1)},\dots,k_n^{(i)}$ such that $k_n^{(j)} \geq n/8$ for all $1 \leq j \leq i$.* That this claim implies the desired contradiction is clear. Indeed, if the claim holds, we have $$\frac{ni}{8} \leq \sum_{j=1}^i k_n^{(j)} \leq [\Gamma:\Delta_n]$$ for all positive integers $i$ and all $n \geq 8N_i$. Taking $i > 8M$ yields an immediate contradiction of (\[LinearBound\]). Thus, we are reduced to proving the claim. For the base case $i=1$, we can take $N_1=N$ and $m=n$ in the above argument and thus produce an $x$–cycle of length $k_n^{(1)}$ with $n \leq k_n^{(1)}$ for any $n \geq N_1$. Proceeding by induction on $i$, we assume the claim holds for $i$. Specifically, there exists $N_i \geq N$ such that for all $n \geq 8N_i$, there exist disjoint $x$–cycles $C_n^{(1)},\dots,C_n^{(i)}$ in $Z_n$ with lengths $k_n^{(j)} \geq n/8$. By increasing $N_i$ to some $N_{i+1}$, we need to produce a new $x$–cycle $C_n^{(i+1)}$ in $Z_n$ of length $k_n^{(i+1)} \geq n/8$ for all $n \geq 8N_{i+1}$. For this, set $$\ell_{n,m} = \operatorname{lcm}(1,\dots,m)\prod_{j=1}^i k_n^{(j)}.$$ By construction, the lift of the closed curve associated to $x^{\ell_{n,m}}$ to each cycle $C_n^{(j)}$ is closed. Consequently, any lift of the curve associated to $x^{\ell_{n,m}}$ that is not closed must necessarily reside on an $x$–cycle that is disjoint from the previous $i$ cycles $C_n^{(1)},\dots, C_n^{(i)}$. In addition, we must ensure that this new $x$–cycle has length at least $n/8$.
To guarantee that the curve associated to $x^{\ell_{n,m}}$ has a lift that is not closed, it is sufficient to have the inequality $$\label{NonClosedLift} \ell_{n,m} \leq \operatorname{lcm}(1,\dots,n).$$ In addition, if $m\geq n/8$, then the length of the $x$–cycle containing this lift must be at least $n/8$. We focus first on arranging (\[NonClosedLift\]). For this, since $k_n^{(j)} \leq Mn$ for all $j$, (\[NonClosedLift\]) holds if $$(Mn)^i\operatorname{lcm}(1,\dots,m) \leq \operatorname{lcm}(1,\dots,n).$$ This, in turn, is equivalent to $$\log(\operatorname{lcm}(1,\dots,m)) \leq \log(\operatorname{lcm}(1,\dots,n)) - i\log(Mn).$$ Set $N_{i+1}$ to be the smallest positive integer such that $$\frac{n}{8} - i\log(Mn) > 0$$ for all $n \geq 8N_{i+1}$. Taking $n>8N_{i+1}$ and $n/8 \leq m \leq n/4$, we see that $$\begin{aligned} \log(\operatorname{lcm}(1,\dots,m)) &\leq \frac{3m}{2} \\ &\leq \frac{3n}{8} \\ &\leq \frac{3n}{8} + {\left( \frac{n}{8}-i\log(Mn) \right) } \\ &= \frac{n}{2} - i\log(Mn) \\ &\leq \log(\operatorname{lcm}(1,\dots,n)) - i\log(Mn).\end{aligned}$$ In particular, we produce a new $x$–cycle $C_n^{(i+1)}$ of length $k_n^{(i+1)}\geq n/8$ for all $n \geq 8N_{i+1}$. Having proven the claim, our proof of Theorem \[toughlowerbound\] is complete. Just as in Theorem \[basiclowerbound\], Theorem \[toughlowerbound\] can be extended to any finitely generated group that contains a nonabelian free subgroup. Let $\Gamma$ be a finitely generated group that contains a nonabelian free subgroup. Then $$\max \operatorname{D}_{\Gamma,X}(n) \npreceq \log(n).$$ [9]{} K. Bou-Rabee, *Quantifying residual finiteness*, J. Algebra, [**323**]{} (2010), 729–737. K. Bou-Rabee and D. B. McReynolds, *Bertrand’s postulate and subgroup growth*, to appear in J. Algebra. N. V. Buskin, *Economical separability in free groups*, Siberian Mathematical Journal, **50** (2009), 603–608. K. Dekimpe, *Almost-Bieberbach groups: Affine and polynomial structures*, Springer-Verlag, 1996. M.
Gromov with an appendix by J. Tits, *Groups of polynomial growth and expanding maps*, Publ. Math. Inst. Hautes Étud. Sci., **53** (1981), 53–78. P. de La Harpe, *Topics in Geometric Group Theory*, Chicago Lectures in Mathematics, Chicago 2000. U. Hadad, *On the Shortest Identity in Finite Simple Groups of Lie Type*, preprint. M. Kassabov and F. Matucci, *Bounding residual finiteness in free groups*, preprint. A. Lubotzky and D. Segal, *Subgroup growth*, Birkhäuser, 2003. V. D. Mazurov and E. I. Khukhro, editors, *The Kourovka notebook*, Russian Academy of Sciences Siberian Division Institute of Mathematics, Novosibirsk, sixteenth edition, 2006. Unsolved problems in group theory, Including archive of solved problems. I. Rivin, *Geodesics with one self-intersection, and other stories*, preprint. Department of Mathematics\ University of Chicago\ Chicago, IL 60637, USA\ email: [khalid@math.uchicago.edu]{}, [dmcreyn@math.uchicago.edu]{}\ [^1]: University of Chicago, Chicago, IL 60637. E-mail: [^2]: University of Chicago, Chicago, IL 60637. E-mail:
--- abstract: '[Cr$_{2}$Ge$_{2}$Te$_{6}$]{} has been of interest for decades, as it is one of only a few naturally forming ferromagnetic semiconductors. Recently, this material has been revisited due to its potential as a 2 dimensional semiconducting ferromagnet and a substrate to induce anomalous quantum Hall states in topological insulators. However, many relevant properties of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} still remain poorly understood, especially the spin-phonon coupling crucial to spintronics, multiferroics, thermal conductivity, magnetic proximity effects and the establishment of long range order on the nanoscale. We explore the interplay between the lattice and magnetism through high resolution micro-Raman scattering measurements over the temperature range from 10 K to 325 K. Strong spin-phonon coupling effects are confirmed from multiple aspects: two low energy modes split in the ferromagnetic phase, magnetic quasielastic scattering appears in the paramagnetic phase, the phonon energies of three modes show a clear upturn below [T$_{C}$]{}, and the phonon linewidths change dramatically below [T$_{C}$]{} as well. Our results provide the first demonstration of spin-phonon coupling in a potential 2 dimensional atomic crystal.' address: - 'Department of Physics, University of Toronto, ON M5S 1A7 Canada' - 'Department of Physics, Boston College 140 Commonwealth Ave Chestnut Hill MA 02467-3804 USA' - 'Department of Chemistry, Princeton University, Princeton, NJ 08540 USA' - 'Department of Chemistry, Princeton University, Princeton, NJ 08540 USA' - 'Department of Physics, Boston College 140 Commonwealth Ave Chestnut Hill MA 02467-3804 USA' author: - Yao Tian - 'Mason J. Gray' - Huiwen Ji - 'R. J. Cava' - 'Kenneth S.
Burch' title: 'Magneto-Elastic Coupling in a potential ferromagnetic 2D Atomic Crystal' ---

\[sec:intro\]Introduction
=========================

[Cr$_{2}$Ge$_{2}$Te$_{6}$]{} is a particularly interesting material since it is in the very rare class of ferromagnetic semiconductors and possesses a layered, nearly two dimensional structure due to van der Waals bonds[@CGT_original; @li2014crxte]. Recently this material has been revisited as a substrate for the growth of the topological insulator [Bi$_{2}$Te$_{3}$]{} to study the anomalous quantum Hall effect[@BT_CGT_quantum_hall]. Furthermore, the van der Waals bonds make it a candidate two dimensional atomic crystal, which is predicted to be a platform for studying 2D semiconducting ferromagnets and for single layered spintronics devices[@sivadas2015magnetic]. In such devices, spin-phonon coupling can be a key factor in the magnetic and thermal relaxation processes[@golovach2004phonon; @ganzhorn2013strong; @jaworski2011spin], while generating other novel effects such as multiferroicity[@wesselinowa2012origin; @issing2010spin]. Combined with the fact that understanding heat dissipation in nanodevices is crucial, it is important to explore the phonon dynamics and their interplay with the magnetism in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. Indeed, recent studies have shown the thermal conductivity of its cousin compound [Cr$_{2}$Si$_{2}$Te$_{6}$]{} increases linearly with temperature in the paramagnetic phase, suggesting strong spin-phonon coupling is crucial in these materials[@casto2015strong]. However, there are currently no direct probes of the phonon dynamics of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, let alone the spin-phonon coupling. Such studies are crucial for understanding the potential role of magneto-elastic effects that could be central to the magnetic behavior of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} as a 2D atomic crystal and a potential nanoscale magneto-elastic device.
Polarized temperature dependent Raman scattering is perfectly suited for such studies as it is well established for measuring the phonon dynamics and the spin-phonon coupling in bulk and 2D atomic crystals[@compatible_Heterostructure_raman; @Raman_Characterization_Graphene; @Raman_graphene; @Pandey2013; @sandilands2010stability; @zhao2011fabrication; @calizo2007temperature; @sahoo2013temperature; @polarized_raman_study_of_BFO; @dresselhaus2010characterizing]. Compared to other techniques, a high resolution Raman microscope can track sub-[cm$^{-1}$]{} changes to uncover subtle underlying physics. A demonstration of Raman studies of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} is therefore extremely meaningful for future studies of exfoliated 2D ferromagnets. ![Crystal structure of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. The unit cell is indicated by the black frame. Cr ions and Ge-Ge dimers sit inside the octahedra formed by Te atoms. One third of the octahedra are filled by Ge-Ge dimers while the others are filled by Cr ions forming a distorted honeycomb lattice.[]{data-label="fig:CGT_structure"}](CGT_lattice_structure.eps){width="0.5\columnwidth"} ![(a): Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} taken under different conditions. All the spectra are taken at 300 K. The Raman spectra of the air-exposed sample show broader and fewer Raman modes, indicating the formation of oxides. (b): Normalized Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} in XX and XY geometry at 270 K, showing the different symmetry of the phonon modes.[]{data-label="633_532_oldsample_raman"}](CGT_532nm_airexposed_cleaved_300k_XX_XY.eps "fig:"){width="\figuresize"}\ In this paper, we demonstrate the ease of exfoliating [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, as well as the dangers of doing so in air. Namely, we find the Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} are strongly deteriorated by exposure to air, but [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} exfoliated in a glovebox reveals bulk-like spectra.
In addition, we find strong evidence for spin-phonon coupling in bulk [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, via polarized temperature dependent Raman spectroscopy. The spin-phonon coupling has been confirmed in multiple ways: below [T$_{C}$]{} we observe a splitting of two phonon modes due to the breaking of time reversal symmetry; a drastic quenching of magnetic quasielastic scattering; an anomalous hardening of an additional three modes; and a dramatic decrease of the phonon lifetimes upon warming into the paramagnetic phase. Our results also suggest the possibility of probing the magneto-elastic coupling using Raman spectroscopy, opening the door for further studies of exfoliated 2D [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}.

\[sec:exp\][Method Section]{}
=============================

Single crystal [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} was grown with high purity elements mixed in a molar ratio of 2:6:36; the extra Ge and Te were used as a flux. The materials were heated to 700$^{o}$C for 20 days and then slowly cooled to 500$^{o}$C over a period of 1.5 days. Detailed growth procedures can be found elsewhere[@Huiwen_doc]. The Raman spectra on the crystal were taken in a backscattering configuration with a home-built Raman microscope[@RSI_unpublished]. The spectra were recorded with a polarizer in front of the spectrometer. Two Ondax ultra-narrow-band notch filters were used to reject Rayleigh scattering. This also allows us to observe both Stokes and anti-Stokes Raman shifts and provides a way to confirm the absence of local laser heating. A solid-state 532 nm laser was used for the excitation. Temperature control was achieved by using an automatically controlled closed-cycle cryostation designed and manufactured by Montana Instruments, Inc. The temperature stability was within 5 mK. To maximize the collection efficiency, a 100x N.A. 0.9 Zeiss objective was installed inside the cryostation.
A heater and a thermometer were installed on the objective to prevent it from being damaged by the cold and to keep the optical response uniform at all sample temperatures. The laser spot size was 1 micron in diameter and the power was kept fairly low (80 $\mu$W) to avoid laser-induced heating. This was checked at 10 K by monitoring the anti-Stokes signal as the laser power was reduced. Once the anti-Stokes signal disappeared, the power was cut an additional $\approx 50\%$. ![(a): The reflection optical image of the exfoliated [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. The labels indicate the positions of the prepared samples. (b): AFM topography image of the rectangle region in a. (c): The height distribution of the rectangle region in b. The height difference between the peaks reveals the thickness of the [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} flakes, which are region 1: 30 nm and region 2: 4 nm. (d): Raman spectra of the two exfoliated [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} flakes []{data-label="fig:exfoliated_CGT"}](CGT_exfoliation.eps){width="\textwidth"}

\[sec:Results\_and\_discussion\]Results
=======================================

Raman studies at room temperature
---------------------------------

We first delve into the lattice structure of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} (shown in Fig. \[fig:CGT\_structure\]). This material contains van der Waals bonded layers, with the magnetic ions (Cr, [T$_{C}$]{}=61 K) forming a distorted honeycomb lattice[@Huiwen_doc]. The Cr atoms are locally surrounded by Te octahedra, and thus the exchange between Cr occurs via the Te atoms. Based on the group theory analysis, the Raman-active modes in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} are of A$_{g}$, E$_{1g}$ and E$_{2g}$ symmetry, and E$_{1g}$ and E$_{2g}$ are protected by time-reversal symmetry. In the paramagnetic state we expect to see 10 Raman-active modes, because the E$_{1g}$ and E$_{2g}$ modes are not distinguishable by energy (see details in the supplemental materials).
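As an aside, the laser-heating check described in the Method section relies on the thermal anti-Stokes signal collapsing exponentially at low temperature. A rough sketch of the expected anti-Stokes/Stokes intensity ratio (Bose statistics only; the scattering-frequency prefactor is neglected, so this is an estimate, not the full Raman cross-section):

```python
import math

K_B_CM = 0.695  # Boltzmann constant in cm^-1 per kelvin (approximate)

def anti_stokes_ratio(omega_cm, T):
    """Thermal anti-Stokes/Stokes intensity ratio ~ exp(-omega/(k_B T)),
    for a mode of energy omega_cm (in cm^-1) at temperature T (in K)."""
    return math.exp(-omega_cm / (K_B_CM * T))

# At 10 K the anti-Stokes signal of even the lowest mode (78.6 cm^-1)
# is thermally negligible, while at 300 K it is easily visible:
assert anti_stokes_ratio(78.6, 10) < 1e-4
assert anti_stokes_ratio(78.6, 300) > 0.5
```

Any residual anti-Stokes intensity at base temperature therefore signals local laser heating, which is why the power was reduced until it vanished.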
Keeping the theoretical analysis in mind, we now turn to the mode symmetry assignment of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. This analysis was complicated by the oxidation of the [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} surface. Indeed, many chalcogenide materials suffer from easy oxidation of the surface, which is particularly problematic for [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} as TeO$_{x}$ produces a strong Raman signal[@Raman_aging_effect]. The role of oxidation and degradation is becoming increasingly important in many potential 2D atomic crystals[@osvath2007graphene], thus a method to rapidly characterize its presence is crucial for future studies. For this purpose, we measured the Raman response at room temperature in freshly cleaved, as well as air-exposed [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} (shown in Fig. \[633\_532\_oldsample\_raman\]a). The air-exposed sample reveals fewer phonon modes which are also quite broad, suggesting the formation of an oxide. A similar phenomenon was also observed in related materials and assigned to the formation of TeO$_{x}$[@Raman_amorphous_crystalline_transition_CGTfamily]. ![Temperature dependent collinear (XX) Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} measured in the temperature range of 10 K $ - $ 325 K. T$_{c}$ is indicated by the black dashed line.[]{data-label="XX_temp"}](Raman_colorplot_newscolor.eps "fig:"){width="\textwidth"}\ ![(a): Temperature dependent collinear (XX) Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} for the E$_{g}^{1}$ and E$_{g}^{2}$ modes. T$_{c}$ is indicated by the black dashed line. (b): Raw spectra of E$_{g}^{1}$ and E$_{g}^{2}$ modes. Four Lorentzians (shown in dashed lines) were used to account for the splitting.[]{data-label="fig:low_energy_colorplot"}](CGT_lowenergy.eps){width="\columnwidth"} From the Raman spectra of the freshly cleaved [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} sample, we can see that at room temperature there are 7 modes.
They are centered at 78.6 [cm$^{-1}$]{}, 85.3 [cm$^{-1}$]{}, 110.8 [cm$^{-1}$]{}, 136.3 [cm$^{-1}$]{}, 212.9 [cm$^{-1}$]{}, 233.9 [cm$^{-1}$]{} and 293.8 [cm$^{-1}$]{} at 270 K. The other three modes might be too weak or out of our spectral range. To identify the symmetry of these modes, we turn to their polarization dependence (see Fig. \[633\_532\_oldsample\_raman\]b). From the Raman tensor (see the supplemental materials), we know that all modes should be visible in the co-linear (XX) geometry and A$_{g}$ modes should vanish in the crossed polarized (XY) geometry. To test these predictions, we compare the spectra taken at 270 K in the XX and XY configurations. As can be seen from Fig. \[633\_532\_oldsample\_raman\]b, only the two modes located at 136.3 [cm$^{-1}$]{} and 293.8 [cm$^{-1}$]{} vanish in the XY configuration. Therefore, these two modes are of A$_{g}$ symmetry, and the other five modes are of E$_{g}$ symmetry. Before proceeding to the temperature dependent Raman studies, it is useful to confirm the quasi-two-dimensional nature of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. To achieve this, we exfoliated [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} on mica inside an argon filled glovebox to avoid oxidation. The results are shown in Fig. \[fig:exfoliated\_CGT\]. We can see from the optical image (Fig. \[fig:exfoliated\_CGT\]a) that many thin-flake [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} samples can be produced through the mechanical exfoliation method. To verify the thickness, we also performed atomic force microscope (AFM) measurements on two flakes (regions 1 and 2). The results are shown in Fig. \[fig:exfoliated\_CGT\]b and \[fig:exfoliated\_CGT\]c. Both flakes are in the nanometer regime and region 2 is much thinner than region 1, showing the great promise of preparing 2D [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} samples through this method. To be sure that no dramatic structural changes occur during exfoliation, we also took Raman spectra on the flakes, the results of which are shown in Fig. \[fig:exfoliated\_CGT\]d.
As can be seen from the plot, the Raman spectra of exfoliated [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} are very similar to the bulk, confirming the absence of structural changes. Moreover, the Raman intensity of the [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} nanoflakes increases dramatically as their thickness decreases. This is due to the onset of an interference effect and has been observed in many other 2D materials[@sandilands2010stability; @yoon2009interference; @zhao2011fabrication].

Temperature Dependence
----------------------

To search for the effects of magnetism (i.e., magneto-elastic coupling and degeneracy lifting), we measured the Raman spectra well above and below the ferromagnetic transition temperature. The temperature resolution was chosen to be 10 K below 100 K and 20 K above. The full temperature dependence is shown in Fig. \[XX\_temp\], while we focus on the temperature dependence of the lowest energy E$_{g}$ modes in Fig. \[fig:low\_energy\_colorplot\] to search first for the lifting of degeneracy due to time reversal symmetry breaking. Indeed, as the temperature is lowered, additional modes appear in the spectra near [T$_{C}$]{} in Fig. \[fig:low\_energy\_colorplot\]a. As can be seen more clearly from the raw spectra in Fig. \[fig:low\_energy\_colorplot\]b, the extra feature near the E$_{g}^{1}$ mode and the extremely broad and flat region of the E$_{g}^{2}$ mode appear below [T$_{C}$]{}. We note that the exact temperature at which this splitting occurs is difficult to determine precisely due to our spectral resolution and the low signal levels of these modes. Nonetheless, the splitting clearly grows as the temperature is lowered and magnetic order sets in. At the lowest temperatures we find a 2.9 [cm$^{-1}$]{} splitting for the E$_{g}^{1}$ mode and a 4.5 [cm$^{-1}$]{} splitting for the E$_{g}^{2}$ mode. This confirms our prediction that the lifting of time reversal symmetry leads to splitting of the phonon modes, and suggests significant spin-phonon coupling.
Indeed, a similar effect has been observed in numerous three dimensional materials such as MnO[@PhysRevB.77.024421], ZnCr$_{2}$X$_{4}$ (X = O, S, Se)[@yamashita2000spin; @rudolf2007spin], and CeCl$_{3}$. In CeCl$_{3}$ the E$_{g}$ modes of its point group C$_{4v}$ are also degenerate under time reversal symmetry. For CeCl$_{3}$ it was found that increasing the magnetic field led to a splitting of two E$_{g}$ modes and a hardening of a second set of E$_{g}$ modes[@schaack1977magnetic]. The phonon splitting and energy shifts (discussed in a later section) match well with our observations in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. ![Temperature dependence of phonon frequency shifts. The phonon frequency shifts are shown in blue. The red curves indicate the fit results using the anharmonic model mentioned in the text. T$_{c}$ is indicated by the dashed vertical lines.[]{data-label="CGT_phonon_fits"}](phononpos_stack.eps){width="0.5\columnwidth"} Further evidence of spin-phonon coupling as the origin of the splitting comes from the energy of the modes. The ferromagnetism of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} originates from the Cr-Te-Cr super-exchange interaction where the Cr octahedra are edge-sharing and the Cr-Te-Cr angle is $91.6^{o}$[@CGT_original]. The energies of these two modes are very close to the Te-displacement mode in [Cr$_{2}$Si$_{2}$Te$_{6}$]{}. Thus, it is very likely the E$_{g}^{1}$ and E$_{g}^{2}$ modes involve atomic motions of the Te atoms, whose bond strength can be very susceptible to the spin ordering, since the Te atoms mediate the super-exchange between the two Cr atoms. Before continuing, let us consider some alternative explanations for the splitting. For example, structural transitions can also result from magnetic order; however, previous X-ray diffraction studies in both the paramagnetic (270 K) and ferromagnetic phases (5 K) found no significant differences in the structure[@CGT_original].
Alternatively, the dynamic Jahn-Teller effect can cause phonon splitting[@klupp2012dynamic], but the Cr$^{3+}$ ion is Jahn-Teller inactive in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, thus eliminating this possibility as well. One-magnon scattering is also highly unlikely, since the Raman tensor of one-magnon scattering is antisymmetric, which means the scattering only appears in crossed polarized geometries (XY, XZ and YZ); however, we observed the splitting in the XX configuration[@fleury1968scattering]. ![Temperature dependence of phonon linewidths (green). The red curves indicate the fit results using equation \[eqn:Klemens\_model\_width\] above [T$_{C}$]{}. The mode located at 110.8 (136.3) [cm$^{-1}$]{} is shown on the left (right). T$_{c}$ is indicated by the vertical dashed lines.[]{data-label="fig:phononlinewidth"}](phononwidth_stack.eps){width="0.5\columnwidth"} Other than the phonon splitting, we also note a dramatic change in the background Raman scattering at [T$_{C}$]{} in Fig. \[fig:low\_energy\_colorplot\]a. We believe this is due to magnetic quasielastic scattering. In a low dimensional magnetic material with spin-phonon coupling, the coupling will induce magnetic energy fluctuations and allow them to become observable as a peak centered at 0 [cm$^{-1}$]{} in Raman spectra[@reiter1976light; @kaplan2006physics]. Typically the peak is difficult to observe, not just due to weak spin-phonon coupling, but because the area under the peak is determined by the magnetic specific heat $C_{m}$ and the width by $D_{t}=C_{m}/\kappa$, where $D_{t}$ is the spin diffusion coefficient and $\kappa$ is the thermal conductivity. However, in low dimensional materials the fluctuations are typically enhanced, increasing the specific heat and lowering the thermal conductivity, making these fluctuations easier to observe in Raman spectra.
This effect has also been observed in many other low dimensional magnetic materials, evidenced by the quenching of the scattering amplitude as the temperature drops below [T$_{C}$]{}[@choi2004coexistence; @lemmens2003magnetic]. To further investigate the spin-phonon coupling, we turn our attention to the temperature dependence of the mode frequencies and linewidths. Our focus is on the higher energy modes (E$_{g}^{3}$-A$_{g}^{2}$) as they are easily resolved. To gain more quantitative insights into these modes, we fit the Raman spectra with the Voigt function: $$\label{voigt_function} V(x,\sigma,\Omega,\Gamma)=\int^{+\infty}_{-\infty}G(x',\sigma)L(x-x',\Omega,\Gamma)dx'$$ which is a convolution of a Gaussian and a Lorentzian[@olivero1977empirical]. Here the Gaussian is employed to account for the instrumental resolution and its width $\sigma$ (1.8 [cm$^{-1}$]{}) is determined from the central Rayleigh peak. The Lorentzian represents a phonon mode. In Fig. \[CGT\_phonon\_fits\], we show the temperature dependence of the extracted phonon energies. All phonon modes soften as the material is heated. This result is not surprising, since the anharmonic phonon-phonon interaction is enhanced at high temperatures and typically leads to a softening of the mode[@PhysRevB.29.2051]. However, for the E$_{g}^{3}$, E$_{g}^{4}$ and A$_{g}^{1}$ modes, the phonon energies change dramatically as the temperature approaches [T$_{C}$]{}. In fact, the temperature dependence is much stronger than we would expect from standard anharmonic interactions. Especially for the E$_{g}^{4}$ mode, a 2 [cm$^{-1}$]{} downturn occurs from 10 K to 60 K. This sudden drop of phonon energy upon warming to [T$_{C}$]{} is further evidence for the spin-phonon coupling in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. Other mechanisms that could induce such shifts of the phonon energies are highly unlikely in this case.
For example, an electronic mechanism for the strong phonon energy renormalization is unlikely due to the large electronic gap in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} (0.202 eV)[@Huiwen_doc]. The lattice expansion that explains the anomalous phonon shifts in some magnetic materials[@kim1996frequency] is also an unlikely cause. Specifically, the in-plane lattice constant of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} grows due to the onset of magnetic order[@CGT_original], which should lead to a softening of the modes. However, we observe a strong additional hardening of the modes below [T$_{C}$]{}. The spin-phonon coupling is also confirmed by the temperature dependence of the phonon linewidths, which are not directly affected by the lattice constants[@PhysRevB.29.2051]. In Fig. \[fig:phononlinewidth\], we show the temperature dependent phonon linewidths of the E$_{g}^{3}$ and A$_{g}^{1}$ modes due to their larger signal levels. We can see the phonon lifetimes are enhanced as the temperature drops below [T$_{C}$]{}, as the phase space for phonons to scatter into magnetic excitations is dramatically reduced[@ulrich2015spin]. This further confirms the spin-phonon coupling. To further uncover the spin-phonon interaction, we first remove the effect of the standard anharmonic contributions to the phonon temperature dependence. In a standard anharmonic picture, the temperature dependence of a phonon energy and linewidth is described by: $$\begin{aligned} \omega(T)=&\omega_{0}+C(1+2n_{B}(\omega_{0}/2))+ D(1+3n_{B}(\omega_{0}/3)+3n_{B}(\omega_{0}/3)^{2})\\ \Gamma(T)=&\Gamma_{0}+A(1+2n_{B}(\omega_{0}/2))+ B(1+3n_{B}(\omega_{0}/3)+3n_{B}(\omega_{0}/3)^{2})\label{eqn:Klemens_model_width}\end{aligned}$$ where $\omega_{0}$ is the harmonic phonon energy, $\Gamma_{0}$ is the disorder induced phonon broadening, $n_{B}$ is the Bose-factor, and $C$ ($A$) and $D$ ($B$) are constants determined by the cubic and quartic anharmonicity respectively.
The second term in both equations results from an optical phonon decaying into two phonons with opposite momenta and half the energy of the original mode. The third term describes the optical phonon decaying into three phonons, each with a third of the energy of the optical phonon[@PhysRevB.28.1928]. The results of fitting the phonon energy and linewidths are shown in red in Fig. \[CGT\_phonon\_fits\] and \[fig:phononlinewidth\] with the resulting parameters listed in table \[table:Anharmonic\_fit\_data\]. We can see that the temperature dependent frequencies of the two highest energy modes E$_{g}^{5}$, A$_{g}^{2}$ follow the anharmonic prediction very well throughout the entire range. However, for the other three modes, there is a clear deviation from the anharmonic prediction below [T$_{C}$]{}, confirming the existence of spin-phonon coupling. Moreover, we notice that for the E$_{g}^{3}$, A$_{g}^{1}$ and E$_{g}^{4}$ modes, the phonon energies start to deviate from the anharmonic prediction even above [T$_{C}$]{} (circled in Fig. \[CGT\_phonon\_fits\]). This is probably due to the short-ranged two-dimensional magnetic correlations that persist to temperatures above [T$_{C}$]{}. Indeed, finite magnetic moments[@Huiwen_doc] and magneto-striction[@CGT_original] were observed in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} above [T$_{C}$]{}.

  Mode          $\omega_{0}$   Error   C       Error   D        Error   A      Error   B       Error
  ------------- -------------- ------- ------- ------- -------- ------- ------ ------- ------- -------
  E$_{g}^{3}$   113.0          0.1     -0.09   0.03    -0.013   0.002   0.10   0.03    0.003   0.002
  A$_{g}^{1}$   138.7          0.1     -0.12   0.04    -0.024   0.003   0.13   0.03    0.003   0.003
  E$_{g}^{4}$   220.9          0.4     -1.9    0.1     -0.02    0.01    –      –       –       –
  E$_{g}^{5}$   236.7          0.1     -0.01   0.07    -0.12    0.01    –      –       –       –
  A$_{g}^{2}$   298.1          0.3     -0.4    0.3     -0.20    0.04    –      –       –       –

  : Anharmonic interaction parameters.
The unit is in [cm$^{-1}$]{}.[]{data-label="table:Anharmonic_fit_data"}

Discussion
==========

The spin-phonon coupling in 3d-electron systems usually results from the modulation of the electron hopping amplitude by ionic motion, leading to a change in the exchange integral $J$. In the unit cell of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} there are two in-equivalent magnetic ions (Cr atoms); therefore, the spin-phonon coupling Hamiltonian to the lowest order can be written as[@woods2001magnon; @PhysRev.127.432], $${H_{int} = \sum\limits_{i,\delta}\frac{\partial J}{\partial u}(\mathbf{S}_{i}^{a}{}\cdot\mathbf{S}_{i+\delta}^{b})u\label{equ:hamitionian_sp} }$$ where $\mathbf{S}$ is a spin operator, $u$ stands for the ionic displacement of atoms on the exchange path, the index ($i$) runs through the lattice, $\delta$ is the index of its adjacent sites, and the subscripts $a$ and $b$ indicate the in-equivalent Cr atoms in the unit cell. The strength of the coupling to a specific mode depends on how the atomic motion associated with that mode modulates the exchange coupling. This in turn results from the detailed hybridization and/or overlap of orbitals on different lattice sites. Thus, some phonon modes do not show the coupling effect regardless of their symmetry. To extract the spin-phonon coupling coefficients, we use a simplified version of equation \[equ:hamitionian\_sp\][@fennie2006magnetically; @lockwood1988spin], $$\omega\approx\omega_{0}^{ph}+\lambda{}<\mathbf{S}_{i}^{a}{}\cdot\mathbf{S}_{i+\delta}^{b}>\label{spin_phonon_coupling_equation}$$ where $\omega$ is the frequency of the phonon mode, $\omega_{0}^{ph}$ is the phonon energy free of the spin-phonon interaction, $<\mathbf{S}_{i}^{a}\cdot\mathbf{S}_{i+\delta}^{b}>$ denotes a statistical average for adjacent spins, and $\lambda$ represents the strength of the spin-phonon interaction, which is proportional to $\frac{\partial{}J}{\partial{}u}u$.
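Extracting $\lambda$ from equation \[spin\_phonon\_coupling\_equation\] is simple arithmetic once $\omega_{0}^{ph}$ is known from the anharmonic fit; a minimal sketch, assuming the fully ordered mean-field value $<\mathbf{S}_{i}\cdot\mathbf{S}_{j}>\,=S^{2}$ (equal to $9/4$ for $S=3/2$ Cr$^{3+}$), with example numbers taken from the tabulated E$_{g}^{4}$ values at 10 K:

```python
def spin_phonon_lambda(omega_obs, omega0_ph, spin=1.5):
    """Coupling constant lambda = (omega - omega0_ph) / <S_i . S_j>,
    using the saturated mean-field value <S_i . S_j> = spin**2
    (9/4 for S = 3/2 Cr^3+), appropriate deep in the ordered phase."""
    return (omega_obs - omega0_ph) / spin**2

# E_g^4 mode at 10 K (cm^-1): 2.7 cm^-1 shift over <S.S> = 2.25 gives 1.2
lam = spin_phonon_lambda(221.7, 219.0)
```

Small differences from the tabulated $\lambda$ values for the other modes reflect the rounding of the quoted frequencies.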
The saturated magnetization value of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} reaches 3$\mu$B per Cr atom at 10 K, consistent with the expectation for a high-spin configuration of Cr$^{3+}$[@Huiwen_doc]. Therefore, $<\mathbf{S}_{i}^{a}{}\cdot\mathbf{S}_{i+\delta}^{b}> \approx 9/4$ for Cr$^{3+}$ at 10 K and the spin-phonon coupling constants can be estimated using equation \[spin\_phonon\_coupling\_equation\]. The calculated results are given in Table \[spin\_phonon\_table\]. Compared to the geometrically frustrated ([CdCr$_{2}$O$_{4}$]{}, [ZnCr$_{2}$O$_{4}$]{}) or bond-frustrated ([ZnCr$_{2}$S$_{4}$]{}) chromium spinels, the coupling constants are smaller in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}[@rudolf2007spin]. This is probably not surprising, because in the spin-frustrated materials the spin-phonon couplings are typically very strong[@rudolf2007spin]. On the other hand, in comparison with the cousin compound [Cr$_{2}$Si$_{2}$Te$_{6}$]{}, where the coupling constants were obtained for the phonon modes at 90.5 [cm$^{-1}$]{} ($\lambda$=0.1) and 369.3 [cm$^{-1}$]{} ($\lambda$=-0.2)[@casto2015strong], the coupling constants in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} are larger.

  Mode          $\omega$   $\omega_{0}^{ph}$   $\lambda$
  ------------- ---------- ------------------- -----------
  E$_{g}^{3}$   113.4      112.9               0.24
  A$_{g}^{1}$   139.3      138.5               0.32
  E$_{g}^{4}$   221.7      219.0               1.2

  : Spin-phonon interaction parameters at 10 K. The unit is in [cm$^{-1}$]{}.

\[spin\_phonon\_table\]

\[sec:exp\]Conclusion
=====================

In summary, we have demonstrated spin-phonon coupling in a potential 2D atomic crystal for the first time. In particular, we studied the polarized temperature-dependent Raman spectra of [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}. The two lowest-energy modes of E$_{g}$ symmetry split below [T$_{C}$]{}, which is ascribed to the time-reversal symmetry breaking by the spin ordering.
The temperature dependence of the five modes at higher energies was studied in detail, revealing additional evidence for spin-phonon coupling. Among the five modes, three show significant renormalization of the phonon lifetime and frequency due to the onset of magnetic order. Interestingly, this effect appears to emerge above [T$_{C}$]{}, consistent with other evidence for the onset of magnetic correlations at higher temperatures. In addition, magnetic quasielastic scattering was observed in [Cr$_{2}$Ge$_{2}$Te$_{6}$]{}, which is consistent with the spin-phonon coupling effect. Our results also demonstrate the possibility of studying magnetism in exfoliated 2D ferromagnetic [Cr$_{2}$Ge$_{2}$Te$_{6}$]{} from the perspective of the phonon modes and magnetic quasielastic scattering using micro-Raman scattering.

Acknowledgements
================

We are grateful for numerous discussions with Y. J. Kim and H. Y. Kee at University of Toronto. Work at University of Toronto was supported by NSERC, CFI, and ORF, and K.S.B. acknowledges support from the National Science Foundation (Grant No. DMR-1410846). The crystal growth at Princeton University was supported by the NSF MRSEC Program, grant number NSF-DMR-1005438.
References
==========

Carteaux V, Brunet D, Ouvrard G and Andre G 1995 [*Journal of Physics: Condensed Matter*]{} [**7**]{} 69 <http://stacks.iop.org/0953-8984/7/i=1/a=008> Li X and Yang J 2014 [*Journal of Materials Chemistry C*]{} [**2**]{} 7071–7076 Alegria L D, Ji H, Yao N, Clarke J J, Cava R J and Petta J R 2014 [*Applied Physics Letters*]{} [**105**]{} 053512 <http://scitation.aip.org/content/aip/journal/apl/105/5/10.1063/1.4892353> Sivadas N, Daniels M W, Swendsen R H, Okamoto S and Xiao D 2015 [*arXiv preprint arXiv:1503.00412*]{} Golovach V N, Khaetskii A and Loss D 2004 [*Physical Review Letters*]{} [**93**]{} 016601 Ganzhorn M, Klyatskaya S, Ruben M and Wernsdorfer W 2013 [*Nature Nanotechnology*]{} [**8**]{} 165–169 Jaworski C, Yang J, Mack S, Awschalom D, Myers R and Heremans J 2011 [*Physical Review Letters*]{} [**106**]{} 186601 Wesselinowa J 2012 [*Physica Status Solidi (b)*]{} [**249**]{} 615–619 Issing S, Pimenov A, Ivanov Y V, Mukhin A and Geurts J 2010 [*The European Physical Journal B*]{} [**78**]{} 367–372 Casto L, Clune A, Yokosuk M, Musfeldt J, Williams T, Zhuang H, Lin M W, Xiao K, Hennig R, Sales B [*et al.*]{} 2015 [*APL Materials*]{} [**3**]{} 041515 Hushur A, Manghnani M H and Narayan J 2009 [*Journal of Applied Physics*]{} [**106**]{} 054317 <http://scitation.aip.org/content/aip/journal/jap/106/5/10.1063/1.3213370> Oznuluer T, Pince E, Polat E O, Balci O, Salihoglu O and Kocabas C 2011 [*Applied Physics Letters*]{} [**98**]{} 183101 <http://scitation.aip.org/content/aip/journal/apl/98/18/10.1063/1.3584006> Lin J, Guo L, Huang Q, Jia Y, Li K, Lai X and Chen X 2011 [*Physical Review B*]{} [**83**]{}(12) 125430 Pandey P K, Choudhary R J, Mishra D K, Sathe V G and Phase D M 2013 [*Applied Physics Letters*]{} [**102**]{} 142401 ISSN 00036951 <http://link.aip.org/link/APPLAB/v102/i14/p142401/s1&Agg=doi> Sandilands L, Shen J, Chugunov G, Zhao S, Ono S, Ando Y and Burch K 2010 [
*Physical Review B*]{} [**82**]{} 064503 Zhao S, Beekman C, Sandilands L, Bashucky J, Kwok D, Lee N, LaForge A, Cheong S and Burch K 2011 [*Applied Physics Letters*]{} [**98**]{} 141911 Calizo I, Balandin A, Bao W, Miao F and Lau C 2007 [*Nano Letters*]{} [**7**]{} 2645–2649 Sahoo S, Gaur A P, Ahmadi M, Guinel M J F and Katiyar R S 2013 [*The Journal of Physical Chemistry C*]{} [**117**]{} 9042–9047 Singh M K, Jang H M, Ryu S and Jo M H 2006 [*Applied Physics Letters*]{} [ **88**]{} 042907 <http://scitation.aip.org/content/aip/journal/apl/88/4/10.1063/1.2168038> Dresselhaus M, Jorio A and Saito R 2010 [*Annual Review of Condensed Matter Physics*]{} [**1**]{} 89–108 Ji H, Stokes R A, Alegria L D, Blomberg E C, Tanatar M A, Reijnders A, Schoop L M, Liang T, Prozorov R, Burch K S, Ong N P, Petta J R and Cava R J 2013 [*Journal of Applied Physics*]{} [**114**]{} 114907 <http://scitation.aip.org/content/aip/journal/jap/114/11/10.1063/1.4822092> Tian Y, Reijnders A A, Osterhoudt G B, Valmianski I, Ramirez J G, Urban C, Zhong R, Schneeloch J, Gu G, Henslee I and Burch K S 2016 [*Review of Scientific Instruments*]{} [**87**]{} 043105 <http://scitation.aip.org/content/aip/journal/rsi/87/4/10.1063/1.4944559> Xia T L, Hou D, Zhao S C, Zhang A M, Chen G F, Luo J L, Wang N L, Wei J H, Lu Z Y and Zhang Q M 2009 [*Physical Review B*]{} [**79**]{}(14) 140510 <http://link.aps.org/doi/10.1103/PhysRevB.79.140510> Osv[á]{}th Z, Darabont A, Nemes-Incze P, Horv[á]{}th E, Horv[á]{}th Z and Bir[ó]{} L 2007 [*Carbon*]{} [**45**]{} 3022–3026 Avachev A, Vikhrov S, Vishnyakov N, Kozyukhin S, Mitrofanov K and Terukov E 2012 [*Semiconductors*]{} [**46**]{} 591–594 ISSN 1063-7826 <http://dx.doi.org/10.1134/S1063782612050041> Yoon D, Moon H, Son Y W, Choi J S, Park B H, Cha Y H, Kim Y D and Cheong H 2009 [*Physical Review B*]{} [**80**]{} 125422 Rudolf T, Kant C, Mayr F and Loidl A 2008 [*Phys. Rev. 
B*]{} [**77**]{}(2) 024421 <http://link.aps.org/doi/10.1103/PhysRevB.77.024421> Yamashita Y and Ueda K 2000 [*Physical Review Letters*]{} [**85**]{} 4960 Rudolf T, Kant C, Mayr F, Hemberger J, Tsurkan V and Loidl A 2007 [*New Journal of Physics*]{} [**9**]{} 76 Schaack G 1977 [*Zeitschrift f[ü]{}r Physik B Condensed Matter*]{} [**26**]{} 49–58 Klupp G, Matus P, Kamar[á]{}s K, Ganin A Y, McLennan A, Rosseinsky M J, Takabayashi Y, McDonald M T and Prassides K 2012 [*Nature Communications*]{} [**3**]{} 912 Fleury P and Loudon R 1968 [*Physical Review*]{} [**166**]{} 514 Reiter G 1976 [*Physical Review B*]{} [**13**]{} 169 Kaplan T and Mahanti S 2006 [*Physics of manganites*]{} (Springer Science & Business Media) Choi K Y, Zvyagin S, Cao G and Lemmens P 2004 [*Physical Review B*]{} [ **69**]{} 104421 Lemmens P, G[ü]{}ntherodt G and Gros C 2003 [*Physics Reports*]{} [**375**]{} 1–103 Olivero J and Longbothum R 1977 [*Journal of Quantitative Spectroscopy and Radiative Transfer*]{} [**17**]{} 233–236 Men[é]{}ndez J and Cardona M 1984 [*Physical Review B*]{} [**29**]{} 2051 Kim K, Gu J, Choi H, Park G and Noh T 1996 [*Physical Review Letters*]{} [ **77**]{} 1877 Ulrich C, Khaliullin G, Guennou M, Roth H, Lorenz T and Keimer B 2015 [ *Physical review letters*]{} [**115**]{} 156403 Balkanski M, Wallis R F and Haro E 1983 [*Physical Review B*]{} [**28**]{}(4) 1928–1934 <http://link.aps.org/doi/10.1103/PhysRevB.28.1928> Woods L 2001 [*Physical Review B*]{} [**65**]{} 014409 Sinha K P and Upadhyaya U N 1962 [*Physical Review*]{} [**127**]{}(2) 432–439 <http://link.aps.org/doi/10.1103/PhysRev.127.432> Fennie C J and Rabe K M 2006 [*Physical Review Letters*]{} [**96**]{} 205505 Lockwood D and Cottam M 1988 [*Journal of Applied Physics*]{} [**64**]{} 5876–5878
--- abstract: 'We describe a 325-MHz survey, undertaken with the Giant Metrewave Radio Telescope (GMRT), which covers a large part of the three equatorial fields at 9, 12 and 14.5 h of right ascension from the [ *Herschel*]{}-Astrophysical Terahertz Large Area Survey (H-ATLAS) in the area also covered by the Galaxy And Mass Assembly survey (GAMA). The full dataset, after some observed pointings were removed during the data reduction process, comprises 212 GMRT pointings covering $\sim90$ deg$^2$ of sky. We have imaged and catalogued the data using a pipeline that automates the process of flagging, calibration, self-calibration and source detection for each of the survey pointings. The resulting images have resolutions of between 14 and 24 arcsec and minimum rms noise (away from bright sources) of $\sim1$ mJy beam$^{-1}$, and the catalogue contains 5263 sources brighter than $5\sigma$. We investigate the spectral indices of GMRT sources which are also detected at 1.4 GHz and find them to agree broadly with previously published results; there is no evidence for any flattening of the radio spectral index below $S_{1.4}=10$ mJy. This work adds to the large amount of available optical and infrared data in the H-ATLAS equatorial fields and will facilitate further study of the low-frequency radio properties of star formation and AGN activity in galaxies out to $z \sim 1$.' author: - | Tom Mauch$^{1,2,3}$[^1], Hans-Rainer Klöckner$^{1,4}$, Steve Rawlings$^1$, Matt Jarvis$^{2,5,1}$, Martin J. Hardcastle$^2$, Danail Obreschkow$^{1,6}$, D.J. Saikia$^{7,8}$ and Mark A. 
Thompson$^2$\ $^1$Oxford Astrophysics, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH\ $^2$Centre for Astrophysics Research, University of Hertfordshire, College Lane, Hatfield, Hertfordshire AL10 9AB\ $^3$SKA South Africa, Third Floor, The Park, Park Road, Pinelands, 7405 South Africa\ $^4$Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany\ $^5$ Physics Department, University of the Western Cape, Cape Town, 7535, South Africa\ $^6$ International Centre for Radio Astronomy Research, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia\ $^7$ National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune University Campus, Ganeshkind P.O., Pune 411007, India\ $^8$ Cotton College State University, Panbazar, Guwahati 781001, India\ bibliography: - 'allrefs.bib' - 'mn-jour.bib' title: 'A 325-MHz GMRT survey of the [*Herschel*]{}-ATLAS/GAMA fields' --- \[firstpage\] surveys – catalogues – radio continuum: galaxies Introduction ============ The *Herschel*-Astrophysical Terahertz Large Area Survey [H-ATLAS; @eales10] is the largest Open Time extragalactic survey being undertaken with the *Herschel Space Observatory* [@herschel10]. It is a blind survey and aims to provide a wide and unbiased view of the sub-millimetre Universe at a median redshift of $1$. H-ATLAS covers $\sim 570$ deg$^2$ of sky at $110$, $160$, $250$, $350$ and $500$ ${\mu}$m and is observed in parallel mode with [*Herschel*]{} using the Photodetector Array Camera [PACS; @pacs] at 110 and 160 ${\mu}$m and the Spectral and Photometric Imaging Receiver [SPIRE; @spire] at 250, 350 and 500 ${\mu}$m. 
The survey is made up of six fields chosen to have minimal foreground Galactic dust emission, one field in the northern hemisphere covering $150$ deg$^2$ (the NGP field), two in the southern hemisphere covering a total of $250$ deg$^2$ (the SGP fields) and three fields on the celestial equator each covering $\sim 35$ deg$^2$ and chosen to overlap with the Galaxy and Mass Assembly redshift survey [GAMA; @Driver+11] (the GAMA fields). The H-ATLAS survey is reaching 5-$\sigma$ sensitivities of (132, 121, 33.5, 37.7, 44.0) mJy at (110, 160, 250, 350, 500) $\mu$m and is expected to detect $\sim 200,000$ sources when complete [@Rigby+11]. A significant amount of multiwavelength data is available and planned over the H-ATLAS fields. In particular, the equatorial H-ATLAS/GAMA fields, which are the subject of this paper, have been imaged in the optical (to $r \sim 22.1$) as part of the Sloan Digital Sky Survey [SDSS; @sloan] and in the infrared (to $K \sim 20.1$) with the United Kingdom Infra-Red Telescope (UKIRT) through the UKIRT Infrared Deep Sky Survey [UKIDSS; @ukidss] Large Area Survey (LAS). In the not-too-distant future, the GAMA fields will be observed approximately two magnitudes deeper than the SDSS in 4 optical bands by the Kilo-Degree Survey (KIDS) to be carried out with the Very Large Telescope (VLT) Survey Telescope (VST), which was the original motivation for observing these fields. In addition, the GAMA fields are being observed to $K \sim 1.5-2$ mag. deeper than the level achieved by UKIDSS as part of the Visible and Infrared Survey Telescope for Astronomy (VISTA) Kilo-degree Infrared Galaxy (VIKING) survey, and with the Galaxy Evolution Explorer (GALEX) to a limiting AB magnitude of $\sim 23$. In addition to this optical and near-infrared imaging there is also extensive spectroscopic coverage from many of the recent redshift surveys. The SDSS survey measured redshifts out to $z\sim0.3$ in the GAMA and NGP fields for almost all galaxies with $r<17.77$. 
The Two-degree Field (2dF) Galaxy Redshift Survey [2dFGRS; @2df] covers much of the GAMA fields for galaxies with $b_{J}<19.6$ and a median redshift of $\sim0.1$. The H-ATLAS fields were chosen to overlap with the GAMA survey, which is ongoing and aims to measure redshifts for all galaxies with $r<19.8$ to $z\sim0.5$. Finally, the WiggleZ Dark Energy survey has measured redshifts of blue galaxies over nearly half of the H-ATLAS/GAMA fields to a median redshift of $z\sim0.6$ and detects a significant population of galaxies at $z\sim1$. The wide and deep imaging from the far infrared to the ultraviolet and the extensive spectroscopic coverage make the H-ATLAS/GAMA fields unparalleled for detailed investigation of the star-forming and AGN radio source populations. However, the coverage of the H-ATLAS fields is not quite so extensive in the radio. All of the fields are covered down to a $5\sigma$ sensitivity of 2.5 mJy beam$^{-1}$ at 1.4 GHz by the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey [NVSS; @nvss]. This survey is limited by its $\sim45$-arcsec resolution, which makes unambiguous identification of radio sources with their host galaxy difficult, and by not being deep enough to find a significant population of star-forming galaxies, which only begin to dominate the radio-source population below 1 mJy [e.g. @Wilman08]. The Faint Images of the Radio Sky at Twenty-cm [FIRST; @first] survey covers the NGP and GAMA fields at a resolution of $\sim6$ arcsec down to $\sim0.5$ mJy at 1.4 GHz, is deep enough to probe the bright end of the star-forming galaxy population, and has good enough resolution to see the morphological structure of the larger radio-loud AGN, but it must be combined with the less sensitive NVSS data for sensitivity to extended structure.
Catalogues based on FIRST and NVSS have already been used in combination with H-ATLAS data to investigate the radio-FIR correlation [@jarvis+10] and to search for evidence for differences between the star-formation properties of radio galaxies and their radio-quiet counterparts (@hardcastle+10 [@hardcastle+12; @virdee+13]). To complement the already existing radio data in the H-ATLAS fields, and in particular to provide a second radio frequency, we have observed the GAMA fields (which have the most extensive multi-wavelength coverage) at 325 MHz with the Giant Metrewave Radio Telescope [GMRT; @gmrtref]. The most sensitive GMRT images reach a $1\sigma$ depth of $\sim 1$ mJy beam$^{-1}$ and the best resolution we obtain is $\sim 14$ arcsec, which is well matched to the sensitivity and resolution of the already existing FIRST data. The GMRT data overlaps with the three $\sim60$-deg$^2$ GAMA fields, and cover a total of $108$ deg$^2$ in $288$ 15-minute pointings (see Fig. \[noisemaps\]). These GMRT data, used in conjunction with the available multiwavelength data, will be valuable in many studies, including an investigation of the radio-infrared correlation as a function of redshift and as a function of radio spectral index, the link between star formation and accretion in radio-loud AGN and how this varies as a function of environment and dust temperature, and the three-dimensional clustering of radio-source populations. The data will also bridge the gap between the well-studied 1.4-GHz radio source populations probed by NVSS and FIRST and the radio source population below 250 MHz, which will be probed by the wide area surveys made with the Low Frequency Array [LOFAR; @reflofar] in the coming years. This paper describes the 325-MHz survey of the H-ATLAS/GAMA regions. The structure of the paper is as follows. In Section 2 we describe the GMRT observations and the data. 
In Section 3 we describe the pipeline that we have used to reduce the data and in Section 4 we describe the images and catalogues produced. In Section 5 we discuss the data quality and in Section 6 we present the spectral index distribution for the detected sources between 1.4 GHz and 325 MHz. A summary and prospects for future work are given in Section 7.

GMRT Observations
=================

  Date           Start Time (IST)   Hours Observed   N$_{\rm{antennas}}$   Antennas Down         Comments
  -------------- ------------------ ---------------- --------------------- --------------------- ------------------------------
  2009, Jan 15   21:00              14.0             27                    C01,S03,S04           C14,C05 stopped at 09:00
  2009, Jan 16   21:00              15.5             27                    C01,S02,S04           C04,C05 stopped at 09:00
  2009, Jan 17   21:00              15.5             29                    C01                   C05 stopped at 06:00
  2009, Jan 18   21:00              16.5             26                    C04,E02,E03,E04       C05 stopped at 09:00
  2009, Jan 19   22:00              16.5             29                    C04
  2009, Jan 20   21:00              13.5             29                    C01                   20min power failure at 06:30
  2009, Jan 21   21:30              13.0             29                    S03                   Power failure after 06:00
  2010, May 17   16:00              10.0             26                    C12,W01,E06,E05
  2010, May 18   17:00              10.0             25                    C11,C12,S04,E05,W01   E05 stopped at 00:00
  2010, May 19   18:45              10.5             25                    C12,E05,C05,E03,E06   40min power failure at 22:10
  2010, Jun 4    13:00              12.0             28                    W03,W05

  : Summary of each night's observing.[]{data-label="obssummary"}

Survey Strategy
---------------

The H-ATLAS/GAMA regions that have been observed by the *Herschel Space Observatory* and are followed up in our GMRT survey are made up of three separate fields on the celestial equator. The three fields are centered at 9 h, 12 h, and 14.5 h right ascension (RA) and each spans approximately 12 deg in RA and 3 deg in declination to cover a total of 108 deg$^2$ (36 deg$^2$ per field). The Full Width at Half Maximum (FWHM) of the primary beam of the GMRT at 325 MHz is 84 arcmin. In order to cover each H-ATLAS/GAMA field as uniformly and efficiently as possible, we spaced the pointings in a hexagonal grid separated by 42 arcmin. An example of our adopted pointing pattern is shown in Fig.
\[pointings\]; each field is covered by 96 pointings, with 288 pointings in the complete survey.

![The 96 hexagonal GMRT pointings for the 9-h H-ATLAS/GAMA fields. The pointing strategy for the 12- and 14.5-h fields is similar. The dark grey ellipses (circles on the sky) show the 42-arcmin region at the centre of each pointing; the light grey ellipses (circles) show the 84-arcmin primary beam.[]{data-label="pointings"}](gamma9hr.png){width="\linewidth"}

Observations
------------

![image](9hr_crop.png){width="\textwidth"} ![image](12hr_crop.png){width="\textwidth"} ![image](14_5hr_crop.png){width="\textwidth"}

Observations were carried out in three runs: Jan 2009 (8 nights), May 2010 (3 nights) and June 2010 (1 night). Table \[obssummary\] gives an overview of each night’s observing. On each night as many as 5 of the 30 GMRT antennas could be offline for various reasons, including being painted or problems with the hardware backend. On two separate occasions (Jan 20 and May 19) power outages at the telescope required us to stop observing, and on one further occasion, on Jan 21, a power outage affected all the GMRT baselines outside the central square. Data taken during the Jan 21 power outage were later discarded. Each night’s observing consisted of a continuous block of 10–14 h beginning in the early evening or late afternoon and running through the night. Night-time observations were chosen so as to minimise the ionospheric variations. We used the GMRT with its default parameters at 325 MHz and its hardware backend (the GMRT Hardware Backend; GHB): two 16-MHz sidebands (Upper Sideband, USB, and Lower Sideband, LSB) on either side of 325 MHz, each with 128 channels, were used. The integration time was set to 16.7 s. The flux calibrators 3C147 and 3C286 were observed for 10 minutes at the beginning and towards the end of each night’s observing.
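The hexagonal pointing layout described above (42-arcmin spacing, with alternate rows offset by half a spacing) can be sketched as follows. This is an illustration of the geometry only, not the survey's actual scheduling code, and the field centre and grid dimensions are placeholder values:

```python
import numpy as np

def hex_pointing_grid(ra0_deg, dec0_deg, n_cols, n_rows, spacing_arcmin=42.0):
    """Pointing centres on a hexagonal grid: alternate rows are shifted by
    half a spacing in RA, and rows are spacing * sqrt(3)/2 apart in Dec.
    RA offsets are stretched by 1/cos(Dec) so separations are true angles."""
    s = spacing_arcmin / 60.0  # grid spacing in degrees
    centres = []
    for row in range(n_rows):
        dec = dec0_deg + row * s * np.sqrt(3) / 2.0
        offset = (row % 2) * s / 2.0
        for col in range(n_cols):
            ra = ra0_deg + (col * s + offset) / np.cos(np.radians(dec))
            centres.append((ra, dec))
    return centres

# e.g. a 16 x 6 grid gives the 96 pointings of one field
pointings = hex_pointing_grid(135.0, -1.5, n_cols=16, n_rows=6)
```

With this spacing each point on sky sits within 42 arcmin of a pointing centre, i.e. inside the half-power region of the 84-arcmin primary beam.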
We assumed 325-MHz flux densities of 46.07 Jy for 3C147 and 24.53 Jy for 3C286, using the standard VLA (2010) model provided by the [AIPS]{} task [SETJY]{}. Typically the observing on each night was divided into 3 $\sim4-5$-h sections, concentrating on each of the 3 separate fields in order of increasing RA. The 9-h and 12-h fields were completely covered in the Jan 2009 run and we carried out as many observations of the 14.5-h field as possible during the remaining nights in May and June 2010. The resulting coverage of the sky, after data affected by power outages or other instrumental effects had been taken into account, is shown in Fig. \[noisemaps\], together with an indication of the relationship between our sky coverage and that of GAMA and H-ATLAS. Each pointing was observed for a total of 15 minutes in two 7.5-min scans, with each scan producing $\sim 26$ records using the specified integration time. The two scans on each pointing were always separated by as close to 6 h in hour angle as possible so as to maximize the $uv$ coverage for each pointing. The $uv$ coverage and the dirty beam of a typical pointing, observed in two scans with an hour-angle separation of 3.5 h, is shown in Fig. \[uvcoverage\]. Phase Calibrators ----------------- One phase calibrator near to each field was chosen and was verified to have stable phases and amplitudes on the first night’s observing. All subsequent observations used the same phase calibrator, and these calibrators were monitored continuously during the observing to ensure that their phases and amplitudes remained stable. The positions and flux densities of the phase calibrators for each field are listed in Table \[phasecals\]. 
Although there are no 325-MHz observations of the three phase calibrators in the literature, we estimated the 325-MHz flux densities listed in the table by extrapolating their measured flux densities from the 365-MHz Texas survey [@texassurvey] to 325 MHz, assuming a spectral index of $\alpha=-0.8$[^2]. Each 7.5-minute scan on source was interleaved with a 2.5-minute scan on the phase calibrator in order to monitor phase and amplitude fluctuations of the telescope, which could vary significantly during an evening’s observing. During data reduction we discovered that the phase calibrator for the 14.5-h field (PHC00) was significantly resolved on scales of $\sim 10$ arcsec. It was therefore necessary to flag all of the data at $uv$ distance $>20$ k$\lambda$ from the 14.5-h field. This resulted in degraded resolution and sensitivity in the 14.5-h field, which will be discussed in later sections of this paper. During observing, the phases and amplitudes of the phase calibrator measured on each baseline were monitored. The amplitudes typically varied smoothly by $<30$ per cent for the working long baselines and by $<10$ per cent for the working short baselines. We can attribute some of this effect to variations in the system temperature, but since the effects are larger on long baselines it may be that slight resolution of the calibrators is also involved. Phase variations on short to medium baselines were of the order of tens of degrees per hour, presumably due to ionospheric effects. On several occasions some baselines showed larger phase and amplitude variations, and these data were discarded during the data reduction.

  Calibrator   Field     RA (J2000)      Dec. (J2000)    $S_{325\,{\rm MHz}}$
  Name                   *hh mm ss.ss*   *dd mm ss.ss*   Jy
  ------------ --------- --------------- --------------- ----------------------
  PHA00        9-hr      08 15 27.81     -03 08 26.51    9.3
  PHB00        12-hr     11 41 08.24     +01 14 17.47    6.5
  PHC00        14.5-hr   15 12 25.35     +01 21 08.64    6.7

  : The phase calibrators for the three fields.[]{data-label="phasecals"}

The Data Reduction Pipeline
===========================

The data handling was carried out using an automated calibration and imaging pipeline. The pipeline is based on [python]{}, [aips]{} and [ParselTongue]{} (Greisen 1990; Kettenis 2006) and has been specially developed to handle GMRT data. The pipeline performs a full cycle of data calibration, including automatic flagging, delay corrections, absolute amplitude calibration, bandpass calibration, a multi-facet self-calibration process, cataloguing, and evaluating the final catalogue. A full description of the GMRT pipeline and the calibration will be provided elsewhere (Klöckner in prep.).

Flagging
--------

The GMRT data vary significantly in quality over time; in particular, some scans had large variations in amplitude and/or phase over short time periods, presumably due either to instrumental problems or strong ionospheric effects. The phases and amplitudes on each baseline were therefore initially inspected manually and any scans with obvious problems were excluded prior to running the automated flagging procedures. Non-working antennas listed in Table \[obssummary\] were also discarded at this stage. Finally, the first and last 10 channels of the data were removed as the data quality was usually poor at the beginning and end of the bandpass. After the initial hand-flagging of the most seriously affected data, an automated flagging routine was run on the remaining data. The automatic flagging checked each scan on each baseline and fitted a 2D polynomial to the spectrum, which was then subtracted from it.
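The power-law extrapolation used for the calibrator flux densities is a one-liner; a sketch, assuming $S_\nu \propto \nu^{\alpha}$ with $\alpha=-0.8$ as adopted above:

```python
def extrapolate_flux(s_ref_jy, nu_ref_mhz, nu_mhz, alpha=-0.8):
    """Scale a flux density from nu_ref to nu assuming S proportional
    to nu**alpha (alpha = -0.8, as adopted for the phase calibrators)."""
    return s_ref_jy * (nu_mhz / nu_ref_mhz) ** alpha

# Moving from 365 MHz down to 325 MHz with alpha = -0.8 raises the
# flux density by a factor (325/365)**-0.8, i.e. roughly 10 per cent.
factor = extrapolate_flux(1.0, 365.0, 325.0)
```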
Visibilities $>3\sigma$ from the mean of the background-subtracted data were then flagged; various kernels were then applied to the data and also $3\sigma$-clipped, and the spectra were gradient-filtered and flagged to exclude values $>3\sigma$ from the mean. In addition, all visibilities $>3\sigma$ from the gravitational centre of the real-imaginary plane were discarded. Finally, after all flags had been applied, any time or channel in the scan which had had $>40$ per cent of its visibilities flagged was completely removed. On average, 60 per cent of a night’s data was retained after all hand and automated flagging had been performed. However, at times particularly affected by Radio Frequency Interference (RFI) as little as 20 per cent of the data might be retained. A few scans ($\sim10$ per cent) were discarded completely due to excessive RFI during their observation.

Calibration and Imaging {#imagepipe}
-----------------------

After automated flagging, delay corrections were determined via the [aips]{} task [FRING]{} and the automated flagging was repeated on the delay-corrected data. Absolute amplitude calibration was then performed on the flagged and delay-corrected dataset, using the [aips]{} task [SETJY]{}. The [aips]{} calibration routine [CALIB]{} was then run on channel 30, which was found to be stable across all the different nights’ observing, to determine solutions for the phase calibrator. The [aips]{} task [GETJY]{} was used to estimate the flux density of the phase-calibrator source (which was later checked to be consistent with other catalogued flux densities for this source, as shown in Table \[phasecals\]). The bandpass calibration was then determined with [BPASS]{}, using the cross-correlation of the phase calibrator. Next, all calibration and bandpass solutions were applied to the data for the phase calibrator and the amplitude and phase versus $uv$-distance plots were checked to ensure the calibration had succeeded.
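The repeated $3\sigma$ clipping and the 40-per-cent channel/time rule used by the automated flagger can be sketched as follows. This is a simplified stand-in for the pipeline (the 2D polynomial background fit, kernel passes and gradient filtering are omitted), operating on a time-by-channel array of visibility amplitudes:

```python
import numpy as np

def flag_scan(amps, nsigma=3.0, max_flag_frac=0.4):
    """Return a boolean mask over an (ntime, nchan) amplitude array.
    Points more than nsigma standard deviations from the scan mean are
    flagged; any channel or time with more than max_flag_frac of its
    points flagged is then removed entirely, as in the pipeline."""
    amps = np.asarray(amps, dtype=float)
    mean, std = amps.mean(), amps.std()
    flags = np.abs(amps - mean) > nsigma * std
    flags[:, flags.mean(axis=0) > max_flag_frac] = True  # bad channels
    flags[flags.mean(axis=1) > max_flag_frac, :] = True  # bad times
    return flags
```

In practice the clipping is applied to background-subtracted spectra, so that genuine bandpass structure is not mistaken for RFI.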
The calibration solutions of the phase-calibrator source were then applied to the target pointing, and a multi-facet imaging and phase self-calibration process was carried out in order to increase the image sensitivity. To account for the contributions of the $w$-term in the imaging and self-calibration process, the field of view was divided into sub-images; the task [SETFC]{} was used to produce the facets. The corrections in phase were determined using a sequence of decreasing solution intervals, starting at 15 min and ending at 3 min (15, 7, 5 and 3 min). At each self-calibration step a local sky model was determined by selecting clean components above $5\sigma$ and performing a model fit of a single Gaussian in the image plane using [SAD]{}. The number of clean components used in the first self-calibration step was 50, and with each self-calibration step the number of clean components was increased by 100. After applying the solutions from the self-calibration process, the task [IMAGR]{} was then used to produce the final sub-images. These images were then merged into the final image via the task [FLATN]{}, which combines all facets and performs a primary beam correction. The parameters used in [FLATN]{} to account for the contribution of the primary beam (the scaled coefficients of a polynomial in the off-axis distance) were: -3.397, 47.192, -30.931, 7.803.

Cataloguing {#catadesc}
-----------

The LSB and USB images that were produced by the automated imaging pipeline were subsequently run through a cataloguing routine. As well as producing source catalogues for the survey, the cataloguing routine also compared the positions and flux densities measured in each image with published values from the NVSS and FIRST surveys as a figure-of-merit for the output of the imaging pipeline. This allowed the output of the imaging pipeline to be quickly assessed; the calibration and imaging could subsequently be run with tweaked parameters if necessary.
The cataloguing procedure first determined a global rms noise ($\sigma_{\rm global}$) in the input image by running [IMEAN]{} to fit the noise part of the pixel histogram in the central 50 per cent of the (non-primary-beam-corrected) image. In order to minimise any contribution from source pixels to the calculation of the image rms, [IMEAN]{} was run iteratively using the mean and rms measured from the previous iteration until the measured noise mean changed by less than 1 per cent. The limited dynamic range of the GMRT images and errors in calibration can cause noise peaks close to bright sources to be fitted in a basic flux-limited cataloguing procedure. We therefore model the background noise variation in the image as follows:

1. Isolated point sources brighter than $100\sigma_{\rm global}$ were found using [SAD]{}. An increase in local source density around these bright sources is caused by noise peaks and artefacts close to them. Therefore, to determine the area around each bright source that has increased noise and artefacts, the source density of $3\sigma_{\rm global}$ sources as a function of radius from the bright source position was determined. The radius at which the local source density is equal to the global source density of all $3\sigma_{\rm global}$ sources in the image was then taken as the radius of increased noise around bright sources.

2. To model the increased noise around bright sources, a *local* dynamic range was found by determining the ratio of the flux density of each $100\sigma_{\rm global}$ bright source to that of the brightest $3\sigma_{\rm global}$ source within the radius determined in step (i). The median value of the local dynamic range for all $100\sigma_{\rm global}$ sources in the image was taken to be the local dynamic range.
This median local dynamic range determination prevents moderately bright sources close to the $100\sigma$ source from being rejected, which would happen if *all* sources within the computed radius of bright sources were rejected.

3. A local rms ($\sigma_{\rm local}$) map was made from the input image using the task [RMSD]{}. This calculates the rms of pixels in a box of 5 times the major-axis width of the restoring beam, computed for each pixel in the input image. [RMSD]{} iterates its rms determination 30 times, and the computed histogram is clipped at $3\sigma$ on each iteration to remove the contribution of source data to the local rms determination.

4. We then added to this local rms map a Gaussian at the position of each $100\sigma_{\rm global}$ source, with width determined from the radius of the local increased source density from step (i) and peak determined from the median local dynamic range from step (ii).

5. A local mean map was constructed in a manner similar to that described in step (iii).

Once the local rms and mean models had been produced, the input map was mean-subtracted and divided by the rms model. This image was then run through the [SAD]{} task to find the positions and sizes of all $5\sigma_{\rm local}$ peaks. Elliptical Gaussians were fitted to the source positions using [JMFIT]{} (with peak flux density as the only free parameter) on the original input image to determine the peak and total flux density of each source. Errors in the final fitted parameters were determined using the equations in @c97 (with $\sigma_{\rm local}$ as the rms), adding an estimated 5 per cent GMRT calibration uncertainty in quadrature. Once a final $5\sigma$ catalogue had been produced from the input image, the sources were compared to positions and flux densities from known surveys that overlap with the GMRT pointing (i.e., FIRST and NVSS) as a test of the image quality and the success of the calibration.
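The noise-model steps above amount to an iteratively clipped rms estimate plus Gaussian bumps added at the bright-source positions, with bump peaks set by flux density over the median local dynamic range. A compact numpy sketch, in which all toy values (image size, source flux, radius, dynamic range) are hypothetical:

```python
import numpy as np

def clipped_rms(data, nsigma=3.0, niter=30):
    # IMEAN/RMSD-style statistics: iteratively clip at nsigma so that
    # source pixels do not contribute to the mean and rms estimates.
    good = data.ravel()
    for _ in range(niter):
        mean, rms = good.mean(), good.std()
        keep = np.abs(good - mean) < nsigma * rms
        if keep.all():
            break
        good = good[keep]
    return good.mean(), good.std()

def noise_model(shape, sigma_global, bright, radius_pix, dyn_range):
    # Local rms map: a flat background plus, at each bright-source
    # position, a Gaussian whose width comes from the increased-artefact
    # radius (step i) and whose peak is flux / dynamic range (step ii).
    yy, xx = np.indices(shape)
    rms = np.full(shape, sigma_global, dtype=float)
    sig = radius_pix / 2.355                   # FWHM -> Gaussian sigma
    for (y, x, flux) in bright:
        r2 = (yy - y) ** 2 + (xx - x) ** 2
        rms += (flux / dyn_range) * np.exp(-0.5 * r2 / sig ** 2)
    return rms

rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (128, 128))
img[64, 64] += 500.0                           # a ~500-sigma point source
_, sigma_g = clipped_rms(img)
rms_map = noise_model(img.shape, sigma_g, [(64, 64, 500.0)],
                      radius_pix=20.0, dyn_range=50.0)
```

Dividing the mean-subtracted image by such a map then lets a single $5\sigma_{\rm local}$ threshold be applied everywhere.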
Any possible systematic position offset in the catalogue was computed by comparing the positions of $>15\sigma$ point sources to their counterparts in the FIRST survey (these are known to be accurate to better than 0.1 arcsec [@first]). For this comparison, a point source was defined as one whose fitted size is smaller than the restoring beam plus 2.33 times the error in the fitted size (98 per cent confidence), as was done in the NVSS and SUMSS surveys [@nvss; @sumss]. The flux densities of all catalogue sources were compared to the flux densities of sources from the NVSS survey. At the position of each NVSS source in the image area, the measured flux densities of each GMRT source within the NVSS source area were summed and then converted from 325 MHz to 1.4 GHz assuming a spectral index of $\alpha=-0.7$. We chose $\alpha=-0.7$ because it is the median spectral index of radio sources between 843 MHz and 1.4 GHz found between the SUMSS and NVSS surveys [@sumss]; it should therefore serve to indicate whether any large and systematic offsets can be seen in the distribution of measured flux densities of the GMRT sources.

Mosaicing
---------

The images from the upper and lower sidebands of the GMRT that had been output from the imaging pipeline described in Section \[imagepipe\] were then coadded to produce uniform mosaics. In order to remove the effects of the increased noise at the edges of each pointing due to the primary beam, and to produce a survey as uniform as possible in sensitivity and resolution across each field, all neighbouring pointings within 80 arcmin of each pointing were co-added to produce a mosaic image of $100\times100$ arcmin. This section describes the mosaicing process in detail, including the combination of the data from the two sidebands.

![The offsets in RA and declination between $15\sigma$ point sources in the GMRT survey that are detected in the FIRST survey. Each point in the plot is the median offset for all sources in an entire field.
The error bars in the bottom right of the Figure show the rms in RA and declination from Fig. \[finaloffs\].[]{data-label="posnoffsets"}](medposnoffs.pdf){width="\linewidth"}

### Combining USB+LSB data

We were unable to achieve improved signal-to-noise in images produced by coadding the data from the two GMRT sidebands in the $uv$ plane, so we instead chose to image the USB and LSB data separately and then co-add the data in the image plane, which always produced output images with improved sensitivity. During the process of co-adding the USB and LSB images, we regridded all of them to a 2 arcsec pixel scale using the [aips]{} task [regrid]{}, shifted the individual images to remove any systematic position offsets, and smoothed the images to a uniform beam shape across each of the three survey fields. Fig. \[posnoffsets\] shows the distribution of the median offsets between the GMRT and FIRST positions of all $>15\sigma$ point sources in each USB pointing output from the pipeline. The offsets measured for the LSB were always within 0.5 arcsec of those for the corresponding USB pointing. These offsets were calculated for each pointing using the method described in Section \[catadesc\] as part of the standard pipeline cataloguing routine. As the Figure shows, there was a significant distribution of non-zero positional offsets between our images and the FIRST data, usually larger than the scatter in the offsets measured per pointing (shown as an error bar in the bottom right of the figure). It is likely that these offsets are caused by ionospheric phase errors, which will largely be refractive at 325 MHz for the GMRT. Neighbouring images in the survey can have significantly different FIRST-GMRT position offsets, and coadding these during the mosaicing process may result in spurious radio-source structures and flux densities in the final mosaics.
Because of this, the measured offsets were all removed using the [aips]{} task [shift]{} before producing the final coadded USB+LSB images.

![The distribution of the raw clean beam major axis FWHM in each of the three H-ATLAS/GAMA fields from the USB+LSB images output from the imaging pipeline. The dotted line shows the width of the convolving beam used before the mosaicing process. Images with raw clean beam larger than our adopted cutoffs have been discarded from the final dataset.[]{data-label="beamsizes"}](BMAJ.pdf){width="\linewidth"}

Next, the USB+LSB images were convolved to the same resolution before they were co-added; the convolution minimises artefacts resulting from different source structures at different resolutions, and is in any case required to allow flux densities to be measured from the resulting co-added maps. Fig. \[beamsizes\] shows the distribution of restoring beam major axes in the images output from the GMRT pipeline. The beam minor axis was always better than 12 arcsec in the three surveyed fields. In the 9-h and 12-h fields, the majority of images had better than 10-arcsec resolution. However, roughly 10 per cent of them are significantly worse; this can happen for various reasons but is mainly caused by the poor $uv$ coverage produced by the $2\times7.5$-minute scans on each pointing. Often, due to scheduling constraints, the scans were observed immediately after one another rather than separated by 6 h, which can limit the distribution of visibilities in the $uv$ plane. In addition, when even a few of the longer baselines are flagged due to interference or have problems during their calibration, the resulting image resolution can be degraded. The distribution of restoring beam major axes in the 14.5-h field is much broader. This is because of the problems with the phase calibrator outlined in Section \[phasecals\].
All visibilities in excess of $20$ k$\lambda$ were removed during calibration of the 14.5-h field, and this resulted in degraded image resolution. The dotted lines in Fig. \[beamsizes\] show the width of the beam used to convolve the images for each of the fields before coadding the USB+LSB images. We used a resolution of 14 arcsec for the 9-h field, 15 arcsec for the 12-h field and 23.5 arcsec for the 14.5-h field. Images with lower resolution than these were discarded from the final data at this stage. Individual USB and LSB images output from the self-calibration step of the pipeline were smoothed to a circular beam using the [aips]{} task [convl]{}. After smoothing, regridding and shifting the USB+LSB images, they were combined after being weighted by the inverse of their individual variances, which were computed from the square of the local rms image measured during the cataloguing process. The combined USB+LSB images had all pixels within 30 arcsec of their edges blanked in order to remove any residual edge effects from the regridding, position-shifting and smoothing process.

### Producing the final mosaics

The combined USB+LSB images were then combined with all neighbouring coadded USB+LSB images within 80 arcmin of their pointing centre. This removes the effects at the edges of the individual pointings caused by the primary beam correction and improves image sensitivity in the overlap regions. The final data product consists of one combined mosaic for each original GMRT pointing; the user should therefore note that there is significant overlap between the mosaic images. Each combined mosaic image has a width of $100\times100$ arcmin and 2 arcsec pixels. They were produced from all neighbouring images with pointing centres within 80 arcmin. Each of these individual images was then regridded onto the pixels of the output mosaic. The [aips]{} task [rmsd]{} was run in the same way as described during the cataloguing (i.e.
with a box size of 5 times the major axis of the smoothed beam) on the regridded images to produce local rms noise maps. The noise maps were smoothed with a Gaussian with a FWHM of 3 arcmin to remove any small-scale variation in them. These smoothed noise maps were then used to create inverse-variance weight maps (from the inverse squares of the individual noise maps), which were in turn multiplied by each regridded input image. Finally, the weighted input images were added together. The final source catalogue for each pointing was produced as described above from the fully weighted and mosaiced images.

Data Products
=============

The primary data products from the GMRT survey are a set of FITS images (one for each GMRT pointing that was not discarded during the pipeline reduction process) overlapping the H-ATLAS/GAMA fields, the $5\sigma$ source catalogues and a list of the image central positions.[^3] This section briefly describes the imaging data and the format of the full catalogues.

Images
------

![image](eximage.pdf){width="\textwidth"}

An example of a uniform mosaic image output from the full pipeline is shown in Fig. \[eximage\]. In each field some of the 96 originally observed pointings had to be discarded for various reasons that have been outlined in the previous sections. The full released data set comprises 80 pointings in the 9-h field, 61 pointings in the 12-h field and 71 pointings in the 14.5-h field; in total, 76 of the 288 original pointings were rejected. In roughly 50 per cent of cases they were rejected because of the cutoff in beam size shown in Fig. \[beamsizes\], while in the other 50 per cent the $2\times7.5$-minute scans of the pointing were completely flagged due to interference or other problems with the GMRT during observing. The full imaging dataset from the survey comprises a set of mosaics like the one pictured in Fig. \[eximage\], one for each of the non-rejected pointings.
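The inverse-variance weighting used both for the USB+LSB combination and for building these mosaics can be sketched as follows. We assume here that the weighted sum is normalised by the summed weights, and the toy flux and noise values are hypothetical:

```python
import numpy as np

def coadd(images, rms_maps):
    # Inverse-variance weighted co-addition of overlapping images: each
    # input is weighted by 1/sigma**2 from its (smoothed) local rms map,
    # and the weighted sum is normalised by the total weight.
    images = [np.asarray(im, dtype=float) for im in images]
    weights = [1.0 / np.asarray(r, dtype=float) ** 2 for r in rms_maps]
    num = sum(w * im for w, im in zip(weights, images))
    den = sum(weights)
    return num / den

# Two noisy measurements of the same patch of sky; the deeper image
# (rms 1.0) dominates over the shallower one (rms 3.0).
a = np.full((4, 4), 1.0)
b = np.full((4, 4), 1.6)
combined = coadd([a, b], [np.full((4, 4), 1.0), np.full((4, 4), 3.0)])
```

For constant inputs the result is the usual weighted mean, $(1/1^2 \cdot 1.0 + 1/3^2 \cdot 1.6)/(1/1^2 + 1/3^2) = 1.06$.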
Catalogue
---------

----------- ----------- ---------------- ---------------- ------------ -------------- --------- ------------- --------- ------------- ------ ------ ------ ------------- ------------- ------------ ---------------- ----------
(1)         (2)         (3)              (4)              (5)          (6)            (7)       (8)           (9)       (10)          (11)   (12)   (13)   (14)          (15)          (16)         (17)             (18)
RA          Dec.        RA               Dec.             $\Delta$RA   $\Delta$Dec.   $A$       ${\Delta}A$   $S$       ${\Delta}S$   Maj    Min    PA     $\Delta$Maj   $\Delta$Min   $\Delta$PA   Local $\sigma$   Pointing
$^\circ$    $^\circ$    $hh$ $mm$ $ss$   $dd$ $mm$ $ss$                               mJy/bm
130.87617   -00.22886   08 43 30.28      -00 13 43.9      2.5          1.3            6.2       1.1           15.0      3.8           —-     —-     —–     —-            —-            –            1.1              PNA02
130.87746   +02.11494   08 43 30.59      +02 06 53.8      2.3          1.6            12.1      1.3           41.2      5.6           —-     —-     —–     —-            —-            –            1.4              PNA67
130.88025   +00.48630   08 43 31.26      +00 29 10.7      0.5          0.3            129.7     4.7           153.6     6.7           15.9   14.6   13.8   0.5           0.4           9            2.6              PNA51
130.88525   +00.49582   08 43 32.46      +00 29 45.0      0.5          0.4            122.2     4.6           152.1     7.0           —-     —-     —–     —-            —-            –            2.6              PNA51
130.88671   -00.24776   08 43 32.81      -00 14 51.9      0.5          0.3            106.1     3.3           171.3     5.3           21.1   15.0   80.8   0.5           0.4           1            1.0              PNA02
130.88817   -00.89953   08 43 33.16      -00 53 58.3      0.6          0.3            59.6      2.2           118.8     5.0           26.4   14.8   87.5   0.9           0.6           1            1.2              PNA03
130.89171   -00.24660   08 43 34.01      -00 14 47.8      0.5          0.4            34.6      1.5           38.5      2.2           —-     —-     —–     —-            —-            –            1.0              PNA02
130.89279   -00.12352   08 43 34.27      -00 07 24.7      1.2          1.1            4.6       0.8           4.7       1.5           —-     —-     —–     —-            —-            –            0.8              PNA35
130.89971   -00.91813   08 43 35.93      -00 55 05.3      2.4          1.0            7.9       1.5           13.9      3.9           —-     —-     —–     —-            —-            –            1.4              PNA03
130.90150   -00.01532   08 43 36.36      -00 00 55.1      0.9          0.8            6.6       0.8           6.6       1.4           —-     —-     —–     —-            —-            –            0.8              PNA03
----------- ----------- ---------------- ---------------- ------------ -------------- --------- ------------- --------- ------------- ------ ------ ------ ------------- ------------- ------------ ---------------- ----------

: Ten random lines from the output catalogue, sorted by RA; the columns are described in the text.[]{data-label="catexample"}

Final catalogues were produced from the mosaiced images using the cataloguing procedure described in Section \[catadesc\]. The catalogues from each mosaic image were then combined into three full catalogues covering the 9-h, 12-h and 14.5-h fields.
The mosaic images overlap by about 60 per cent in both RA and declination, so duplicate sources in the full list were removed by finding all matches within 15 arcsec of each other and keeping the duplicate source with the lowest local rms ($\sigma_{\rm local}$); this ensures that the catalogue is based on the best available image of each source. Removing duplicates reduced the total size of the full catalogue by about 75 per cent, due to the amount of overlap between the final mosaics. The resulting full catalogues contain 5263 sources brighter than the local $5\sigma$ limit: 2628 of these are in the 9-h field, 1620 in the 12-h field and 1015 in the 14.5-h field. Table \[catexample\] shows 10 random lines of the output catalogue sorted by RA. A short description of each of the columns of the catalogue follows:

Columns (1) and (2): The J2000 RA and declination of the source in decimal degrees (the examples given in Table \[catexample\] have reduced precision for layout reasons).

Columns (3) and (4): The J2000 RA and declination of the source in sexagesimal coordinates.

Columns (5) and (6): The errors in the quoted RA and declination in arcsec. These are calculated from the quadratic sum of the calibration uncertainty, described in Section \[positioncal\], and the fitting uncertainty, calculated using the equations given by @c97.

Columns (7) and (8): The fitted peak brightness in units of mJy beam$^{-1}$ and its associated uncertainty, calculated from the quadratic sum of the fitting uncertainty from the equations given by @c97 and the estimated 5 per cent flux calibration uncertainty of the GMRT. The raw brightness measured from the image has been increased by 0.9 mJy beam$^{-1}$ to account for the effects of clean bias (see Section \[sec:flux\]).

Columns (9) and (10): The total flux density of the source in mJy and its uncertainty, calculated from the equations given by @c97. This equals the fitted peak brightness if the source is unresolved.
Columns (11), (12) and (13): The major-axis FWHM (in arcsec), minor-axis FWHM (in arcsec) and position angle (in degrees east of north) of the fitted elliptical Gaussian. The position angle is only meaningful for sources that are resolved (i.e. when the fitted Gaussian is larger than the restoring beam for the relevant field). As discussed in Section \[sec:sizes\], fitted sizes are only quoted for sources that are moderately resolved in their minor axis.

Columns (14), (15) and (16): The fitting uncertainties in the size parameters of the fitted elliptical Gaussian, calculated using the equations from @c97.

Column (17): The *local* rms noise ($\sigma_{\rm local}$) in mJy beam$^{-1}$ at the source position, calculated as described in Section \[catadesc\]. The *local* rms is used to determine the source signal-to-noise ratio, which in turn sets the fitting uncertainties.

Column (18): The name of the GMRT mosaic image containing the source. These names consist of the letters PN; a letter A, B or C indicating the 9-, 12- or 14.5-h fields respectively; and a number between 01 and 96 which gives the pointing number within that field (see Fig. \[pointings\]).

Data Quality
============

The quality of the data over the three fields varies considerably, due in part to the different phase and flux calibration sources used for each field, and also to the variable observing conditions over the different nights' observing. In particular, on each night the data taken in the first half of the night seemed to be much more stable than those taken in the second half and early morning. Some power outages at the telescope contributed to this, as did the variation in the ionosphere, particularly at sunrise. Furthermore, as described in Section \[mosaicing\], the poor phase calibrator in the 14.5-h field has resulted in degraded resolution and sensitivity.
Image noise {#sec:noise}
-----------

![The rms noise measured in the central 1000 pixels of each image plotted against the square root of the number of visibilities. Outliers from the locus are produced by the increased noise in images around sources brighter than 1 Jy.[]{data-label="rmsnvis"}](rmsnvis.pdf){width="\linewidth"}

Fig. \[rmsnvis\] shows the distribution of the rms noise measured within a radius of 1000 pixels in the individual GMRT images immediately after the self-calibration stage of the pipeline, plotted against the number of visibilities that have contributed to the final image (this can be seen as a proxy for the effective integration time after flagging). The rms in the individual fields varies from $\sim 1$ mJy beam$^{-1}$ in those images with the most visibilities to $\sim 7$ mJy beam$^{-1}$ in the worst case, with the expected trend toward higher rms noise with decreasing number of visibilities. The scatter to higher rms from the locus is caused by residual problems in the calibration and by the presence of bright sources in the primary beam of the reduced images, which can increase the image noise in their vicinity due to the limited dynamic range of the GMRT observations ($\sim1000:1$). A bright 7 Jy source in the 12-h field and a 5 Jy source in the 14.5-h field have both contributed to the generally increased rms noise measured from some images. On average, the most visibilities have been flagged from the 14.5-h field because of the restriction we imposed on the $uv$ range of the data; this has also resulted in higher average noise in the 14.5-h field. Fig. \[noisemaps\] shows the rms noise maps covering all three fields. These were made by averaging the background rms images produced during the cataloguing of the final mosaiced images and smoothing the result with a Gaussian with a FWHM of 3 arcmin to remove edge effects between the individual background images.
The rms in the final survey is significantly lower than that measured from the individual images output from the pipeline self-calibration process, which is a consequence of the large amount of overlap between the individual GMRT pointings in our survey strategy (see Fig. \[pointings\]). The background rms is $\sim0.6-0.8$ mJy beam$^{-1}$ in the 9-h field, $\sim0.8-1.0$ mJy beam$^{-1}$ in the 12-h field and $\sim1.5-2.0$ mJy beam$^{-1}$ in the 14.5-h field. Gaps in the coverage are caused by pointings discarded due to power outages at the GMRT, by scans discarded during flagging as described in Section \[flagging\], and by pointings whose restoring beam was larger than the smoothing width used during the mosaicing process (Section \[mosaicing\]).

Flux Densities {#sec:flux}
--------------

The $2\times7.5$-min observations of the GMRT survey sample the $uv$ plane sparsely (see Fig. \[uvcoverage\]), with long radial arms which cause the dirty beam to have large radial sidelobes. These radial sidelobes can be difficult to clean properly during imaging, and when cleaning close to the noise, clean components subtracted at their positions can cause the average flux density of all point sources in the restored image to be systematically reduced. This “clean bias” is common in “snapshot” radio surveys and was found, for example, in the FIRST and NVSS surveys [@first; @nvss]. We have checked for the presence of clean bias in the GMRT data by inserting 500 point sources into the calibrated $uv$ data at random positions and then re-imaging the modified data with the same parameters as the original pipeline. We find an average difference between the imaged and input peak flux densities of $\Delta S_{\rm peak}=-0.9$ mJy beam$^{-1}$, with no significant difference between the 9-h, 12-h and 14.5-h fields. A constant offset of $0.9$ mJy beam$^{-1}$ has therefore been added to the peak flux densities of all sources in the published catalogues.
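The clean-bias measurement and correction just described reduce to a mean offset between the injected and recovered peak flux densities. A sketch with hypothetical numbers (the real test used 500 sources injected into the $uv$ data and re-imaged through the full pipeline):

```python
import numpy as np

def clean_bias(input_peaks, imaged_peaks):
    # Clean-bias estimate: mean difference between the peak flux densities
    # recovered after re-imaging and the injected values. The catalogue
    # correction is then -bias (i.e. the suppression is added back on).
    return np.mean(np.asarray(imaged_peaks) - np.asarray(input_peaks))

# Hypothetical recovered peaks, each suppressed by about 0.9 mJy/beam.
injected = np.array([5.0, 12.0, 30.0, 8.0])
recovered = np.array([4.1, 11.1, 29.1, 7.1])
bias = clean_bias(injected, recovered)
corrected = recovered - bias
```

Because the bias is an additive offset rather than a multiplicative one, it matters most near the catalogue flux limit.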
As a consistency check of the flux density scale of the survey, we can compare the measured flux densities of the phase calibrator sources with those listed in Table \[phasecals\]. Each phase calibrator was imaged using the standard imaging pipeline and its flux density measured using [sad]{} in [aips]{}. The scatter in the measurements of each phase calibrator over the observing period gives a measure of the accuracy of the flux calibration in the survey. In the 9-h field, the average measured flux density of the phase calibrator PHA00 is 9.5 Jy with an rms scatter of 0.5 Jy; in the 12-h field, the average measured flux density of PHB00 is 6.8 Jy with an rms of 0.4 Jy; and in the 14.5-h field the average measured flux density of PHC00 is 6.3 Jy with an rms of 0.5 Jy. This implies that the flux density scale of the survey is accurate to within $\sim5$ per cent; there is no evidence for any systematic offset in the flux scales. As there are no other 325-MHz data available for the region covered by the GMRT survey, it is difficult to provide any reliable external measure of the absolute quality of the flux calibration. An additional check is provided by a comparison of the spectral index distribution of sources detected in both our survey and the 1.4-GHz NVSS survey. We discuss this comparison further in Section \[sindexsec\].

Positions {#positioncal}
---------

-------- --------- -------- -------- --------
Field    median    rms      median   rms
9-h      $-0.04$   $0.52$   $0.01$   $0.31$
12-h     $-0.06$   $0.54$   $0.01$   $0.39$
14.5-h   $0.30$    $0.72$   $0.26$   $0.54$
-------- --------- -------- -------- --------

: Median and rms of position offsets between the GMRT and FIRST catalogues.[]{data-label="offsetdata"}

![The offsets in RA and declination between $>15\sigma$ point sources from the GMRT survey that are detected in the FIRST survey. The mean offsets in each pointing shown in Fig. \[posnoffsets\] have been removed.
Different point styles are used to denote the three different H-ATLAS/GAMA fields to show the effect of the variation in the resolution of the GMRT data.[]{data-label="finaloffs"}](finaloffs.pdf){width="\linewidth"}

In order to measure the positional accuracy of the survey, we have compared the positions of $>15\sigma$ GMRT point sources with sources from the FIRST survey. Bright point sources in FIRST are known to have positional accuracies of better than 0.1 arcsec in RA and declination [@first]. We select point sources using the method outlined in Section \[catadesc\]. Positions are taken from the final GMRT source catalogue, which has had the shifts described in Section \[mosaicing\] removed; the scatter in the measured shifted positions is our means of estimating the calibration accuracy of the positions. Fig. \[finaloffs\] shows the offsets in RA and declination between the GMRT catalogue and the FIRST survey, and Table \[offsetdata\] summarizes the mean offsets and their scatter in the three separate fields. As expected, the mean offset is close to zero in each case, which indicates that the initial image shifts have been correctly applied and that no additional position offsets have been introduced in the final mosaicing and cataloguing process. The scatter in the offsets is smallest in the 9-h field and largest in the 14.5-h field, which is due to the increasing size of the restoring beam. The rms values of the offsets listed in Table \[offsetdata\] give a measure of the positional calibration uncertainty of the GMRT data; these have been added in quadrature to the fitting errors to produce the position errors listed in the final catalogues.

Source Sizes {#sec:sizes}
------------

The strong sidelobes in the dirty beam shown in Fig. \[uvcoverage\] extend radially at position angles (PAs) of $40^{\circ}$, $70^{\circ}$ and $140^{\circ}$ and can be as high as 15 per cent of the central peak up to 1 arcmin from it.
Improper cleaning of these sidelobes can leave residual radial patterns with a similar structure to the dirty beam in the resulting images. Residual peaks in the dirty beam pattern can also be cleaned (see the discussion of “clean bias” in Section \[sec:flux\]), which has the effect of enhancing positive and negative peaks in the dirty beam sidelobes and leaving an imprint of the dirty beam structure in the cleaned images. This effect, coupled with the alternating pattern of positive and negative peaks in the dirty beam structure (see Fig. \[uvcoverage\]), causes sources to appear on ridges of positive flux squeezed between two negative valleys. Therefore, when fitting elliptical Gaussians, even moderately strong sources in the survey can appear spuriously extended in the direction of the ridge and narrow in the direction of the valleys.

These effects are noticeable in our GMRT images (see, for example, Fig. \[eximage\]) and in the distribution of fitted position angles of catalogued sources that appear unresolved in their minor axes (i.e. $\phi_{\rm min}-\theta_{\rm min} < \sigma_{\rm min}$, where $\phi_{\rm min}$ is the fitted minor-axis size, $\theta_{\rm min}$ is the beam minor-axis size and $\sigma_{\rm min}$ is the rms fitting error in the fitted minor-axis size) but are moderately resolved in their major axes (i.e. $\phi_{\rm maj}-\theta_{\rm maj} > 2\sigma_{\rm maj}$, defined by analogy with the above). These PAs cluster on average at $65^\circ$ in the 9-h field, $140^\circ$ in the 12-h field and $130^\circ$ in the 14.5-h field, coincident with the PAs of the radial sidelobes in the dirty beam shown in Fig. \[uvcoverage\]. The fitted PAs of sources that show some resolution in their minor axes (i.e. $\phi_{\rm min}-\theta_{\rm min} > \sigma_{\rm min}$) are randomly distributed between $0^\circ$ and $180^\circ$, as expected for the radio source population.
We therefore only quote fitted source sizes and position angles for sources with $\phi_{\rm min}-\theta_{\rm min} > \sigma_{\rm min}$ in the published catalogue.

325-MHz Source Counts {#sec:scounts}
=====================

We have made the widest and deepest survey yet carried out at 325 MHz. It is therefore interesting to see whether the behaviour of the source counts at this frequency and flux-density limit differs from extrapolations from other frequencies. We measure the source counts from our GMRT observations using both the catalogues and the rms noise map described in Section \[sec:noise\], such that the area available to a source of a given flux density and signal-to-noise ratio is calculated on an individual basis. We did not attempt to merge individual, separate components of double or multiple sources into single sources in generating the source counts; however, we note that such sources are expected to contribute very little to the overall source counts. Fig. \[fig:scounts\] shows the source counts from our GMRT survey compared to the source-count prediction from the Square Kilometre Array Design Study (SKADS) Semi-Empirical Extragalactic (SEX) Simulated Sky [@Wilman08; @Wilman10] and to the deep 325-MHz survey of the ELAIS-N1 field by [@Sirothia2009]. Our source counts agree, within the uncertainties, with those measured by [@Sirothia2009], given the expected uncertainties associated with cosmic variance over their relatively small field ($\sim 3$ degree$^{2}$), particularly at the bright end of the source counts. The SKADS simulation provides flux densities down to nJy levels at frequencies of 151 MHz, 610 MHz, 1400 MHz, 4860 MHz and 18 GHz. In order to generate the 325-MHz source counts from this simulation we therefore calculate the power-law spectral index between 151 MHz and 610 MHz and thus determine the 325-MHz flux density.
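The power-law interpolation used to place the SKADS flux densities on the 325-MHz scale can be written out explicitly; this is a sketch of the calculation rather than the simulation's own code:

```python
import numpy as np

def flux_at(freq_mhz, s151, s610):
    # Fit a power-law spectral index alpha between the 151- and 610-MHz
    # flux densities, then evaluate S(nu) = S_151 * (nu / 151)**alpha.
    alpha = np.log(s610 / s151) / np.log(610.0 / 151.0)
    return s151 * (freq_mhz / 151.0) ** alpha

# A toy source with alpha = -0.7 between 151 and 610 MHz.
s151 = 100.0
s610 = s151 * (610.0 / 151.0) ** -0.7
s325 = flux_at(325.0, s151, s610)
```

Because 325 MHz lies between the two simulated frequencies, this is an interpolation rather than an extrapolation, so the result is insensitive to any spectral curvature outside the 151-610 MHz range.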
We see that the observed source counts agree very well with the simulated source counts from SKADS, although the observed counts tend to lie slightly above the simulated curve over the 10-200 mJy flux-density range. This could be a sign that the spectral curvature prescription implemented in the simulation may be reducing the flux density at low radio frequencies in moderate-redshift sources, where there are very few constraints. In particular, the SKADS simulations do not contain any steep-spectrum ($\alpha_{325}^{1400}<-0.8$) sources, but there is clear evidence for such sources in the current sample (see the following subsection). A full investigation of this is beyond the scope of the current paper, but future observations with LOFAR should be able to confirm or rebut this explanation: we might expect the SKADS source-count predictions for LOFAR to be slightly underestimated.

![The 325-MHz source counts measured from our GMRT survey (filled squares) and from the survey of the ELAIS-N1 field by [@Sirothia2009] (open circles). The solid line shows the predicted source counts from the SKADS simulation [@Wilman08; @Wilman10].[]{data-label="fig:scounts"}](325_scounts.pdf){width="\linewidth"}

Spectral index distribution {#sindexsec}
===========================

In this section we discuss the spectral index distribution of sources in the survey by comparison with the 1.4-GHz NVSS. We do this both as a check of the flux density scale of our GMRT survey (the flux density scale of the NVSS is known to be accurate to better than 2 per cent: @nvss) and as an initial investigation into the properties of the faint 325-MHz radio source population. In all three fields the GMRT data have a smaller beam than the 45-arcsec resolution of the NVSS.
We therefore crossmatched the two surveys by taking all NVSS sources in the three H-ATLAS/GAMA fields and summing the flux densities of the catalogued GMRT radio sources that have positions within the area of the catalogued NVSS source (fitted NVSS source sizes are provided in the ‘fitted’ version of the catalogue [@nvss]). 3951 NVSS radio sources in the fields had at least one GMRT identification; of these, 3349 (85 per cent) had a single GMRT match, and the remainder had multiple GMRT matches. Of the 5263 GMRT radio sources in the survey 4746 (90 per cent) are identified with NVSS radio sources. (Some of the remainder may be spurious sources, but we expect there to be a population of genuine steep-spectrum objects which are seen in our survey but not in NVSS, particularly in the most sensitive areas of the survey, where the catalogue flux limit approaches 3 mJy.) ![The spectral index distribution between 1.4-GHz sources from the NVSS and 325-MHz GMRT sources.[]{data-label="sindex"}](si.pdf){width="\linewidth"} Fig. \[sindex\] shows the measured spectral index distribution ($\alpha$ between 325 MHz and 1.4 GHz) of radio sources from the GMRT survey that are also detected in the NVSS. The distribution has median $\alpha=-0.71$ with an rms scatter of 0.38, which is in good agreement with previously published values of the spectral index at frequencies below 1.4 GHz [@sumss; @debreuck2000; @randall12]. ([@Sirothia2009] find a steeper 325-MHz/1.4-GHz spectral index, with a mean value of $-0.83$ in the sign convention used here, in their survey of the ELAIS-N1 field, but their low-frequency flux limit is much deeper than ours, so that they probe a different source population, and it is also possible that their use of FIRST rather than NVSS biases their results towards steeper spectral indices.) The rms of the spectral index distributions we obtain increases with decreasing 325-MHz flux density; it increases from 0.36 at $S_{325}>50$ mJy to 0.40 at $S_{325}<15$ mJy.
This reflects the increasing uncertainty in flux density for fainter radio sources in both the GMRT and NVSS data. ![The distribution of the spectral index measured between 325 MHz and 1.4 GHz as a function of 1.4-GHz flux density. The solid line indicates the spectral index traced by the nominal 5 mJy limit of the 325-MHz data.[]{data-label="sindexflux"}](allsi_14.pdf){width="\linewidth"} There has been some discussion about the spectral index distribution of low-frequency radio sources, with some authors detecting a flattening of the spectral index distribution below $S_{1.4}=10$ mJy [@prandoni06; @prandoni08; @om08] and others not [@randall12; @ibar09]. It is well established that the 1.4-GHz radio source population mix changes at around 1 mJy, with classical radio-loud AGN dominating above this flux density and star-forming galaxies and fainter radio-AGN dominating below it [@condon+84; @Windhorst+85]. In particular, the AGN population below 10 mJy is known to be more flat-spectrum-core dominated [e.g. @nagar00] and it is therefore expected that some change in the spectral-index distribution should be evident. Fig. \[sindexflux\] shows the variation in 325-MHz to 1.4-GHz spectral index as a function of 1.4-GHz flux density. Our data show little to no variation in median spectral index below 10 mJy, in agreement with the results of [@randall12]. The distribution shows significant populations of steep ($\alpha < -1.3$) and flat ($\alpha > 0$) spectrum radio sources over the entire flux density range, which are potentially interesting populations of radio sources for further study (e.g. in searches for high-$z$ radio galaxies [@hzrg] or flat-spectrum quasars). Summary ======= In this paper we have described a 325-MHz radio survey made with the GMRT covering the 3 equatorial fields centered at 9, 12 and 14.5-h which form part of the sky coverage of [*Herschel*]{}-ATLAS. 
The data were taken over the period Jan 2009 – Jul 2010 and we have described the pipeline process by which they were flagged, calibrated and imaged. The final data products comprise 212 images and a source catalogue containing 5263 325-MHz radio sources. These data will be made available via the H-ATLAS (http://www.h-atlas.org/) and GAMA (http://www.gama-survey.org/) online databases. The basic data products are also available at http://gmrt-gama.extragalactic.info/ . The quality of the data varies significantly over the three surveyed fields. The 9-h field data have 14 arcsec resolution and reach a depth of better than 1 mJy beam$^{-1}$ over most of the survey area; the 12-h field data have 15 arcsec resolution and reach a depth of $\sim 1$ mJy beam$^{-1}$; and the 14.5-h data have 23.5 arcsec resolution and reach a depth of $\sim 1.5$ mJy beam$^{-1}$. Positional accuracies in the survey are usually better than 0.75 arcsec for brighter point sources, and the flux scale is believed to be accurate to better than 5 per cent. We show that the source counts are in good agreement with the prediction from the SKADS Simulated Skies [@Wilman08; @Wilman10] although there is a tendency for the observed source counts to slightly exceed the predicted counts between 10–100 mJy. This could be a result of excessive curvature in the spectra of radio sources implemented within the SKADS simulation. We have investigated the spectral index distribution of the 325-MHz radio sources by comparison with the 1.4-GHz NVSS survey. We find that the measured spectral index distribution is in broad agreement with previous determinations at frequencies below 1.4 GHz and find no variation of the median spectral index as a function of 1.4-GHz flux density. The data presented in this paper will complement the already extant multi-wavelength data over the H-ATLAS/GAMA regions and will be made publicly available.
These data will thus facilitate detailed study of the properties of sub-mm galaxies detected at sub-GHz radio frequencies in preparation for surveys by LOFAR and, in future, the SKA. Acknowledgements {#acknowledgements .unnumbered} ================ We thank the staff of the GMRT, who made these observations possible. We also thank the referee Jim Condon, whose comments have helped to improve the final version of this paper. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The [*Herschel*]{}-ATLAS is a project with [*Herschel*]{}, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. The H-ATLAS website is http://www.h-atlas.org/. This work has made use of the University of Hertfordshire Science and Technology Research Institute high-performance computing facility (http://stri-cluster.herts.ac.uk/). \[lastpage\] [^1]: E-mail: txmauch@gmail.com [^2]: $S \propto \nu^\alpha$ [^3]: Data products are available on line at http://gmrt-gama.extragalactic.info .
--- abstract: 'The $f(R)$ gravity models formulated in the Einstein conformal frame are equivalent to Einstein gravity together with a minimally coupled scalar field. We shall explore the phantom behavior of $f(R)$ models in this frame and compare the results with those of the usual notion of a phantom scalar field.' --- \ ${\bf Yousef~Bisabr}$[^1]\ \ Introduction ============ There is strong observational evidence that the expansion of the universe is accelerating. These observations are based on type Ia supernovae [@super], cosmic microwave background radiation [@cmbr], large scale structure surveys [@ls] and weak lensing [@wl]. There are two classes of models that aim to explain this phenomenon: In the first class, one modifies the laws of gravity whereby a late-time acceleration is produced. A family of these modified gravity models is obtained by replacing the Ricci scalar $R$ in the usual Einstein-Hilbert Lagrangian density with some function $f(R)$ [@carro] [@sm]. In the second class, one invokes a new matter component usually referred to as dark energy. This component is described by an equation of state parameter $\omega \equiv \frac{p}{\rho}$, namely the ratio of the homogeneous dark energy pressure to the energy density. For a cosmic speed-up, one should have $\omega < -\frac{1}{3}$, which corresponds to an exotic pressure $p<-\rho/3$. Recent analysis of the latest and most reliable dataset (the Gold dataset [@gold]) has indicated that significantly better fits are obtained by allowing a redshift-dependent equation of state parameter [@data]. In particular, these observations favor models that allow the equation of state parameter to cross the line corresponding to $\omega=-1$, the phantom divide line (PDL), in the near past.
It is therefore important to construct dynamical models that provide a redshift-dependent equation of state parameter and allow for crossing the phantom barrier.\ The simplest models of this kind employ a scalar field minimally coupled to curvature with negative kinetic energy, which is referred to as a phantom field [@ph] [@caldwell]. In contrast to these models, one may consider models which exhibit phantom behavior due to curvature corrections to the gravitational equations rather than the introduction of exotic matter systems. Recently, there have been a number of attempts to find phantom behavior in $f(R)$ gravity models. It is shown that one may realize crossing of the PDL in this framework without recourse to any extra component relating to matter degrees of freedom with exotic behavior [@o] [@n]. Following these attempts, we intend to explore phantom behavior in some $f(R)$ gravity models which have a viable cosmology, i.e. a matter-dominated epoch followed by a late-time acceleration. In contrast to [@n], we shall consider $f(R)$ gravity models in the Einstein conformal frame. It should be noted that the mathematical equivalence of the Jordan and Einstein conformal frames does not generally imply that they are also physically equivalent. In fact it is shown that some physical systems can be differently interpreted in different conformal frames [@soko] [@no]. The physical status of the two conformal frames is an open question which we are not going to address here. Our motivation to work in the Einstein conformal frame is that in this frame, $f(R)$ models consist of Einstein gravity plus an additional dynamical degree of freedom, the scalar partner of the metric tensor. This suggests that it is this scalar degree of freedom which drives the late-time acceleration in cosmologically viable $f(R)$ models. We compare this scalar degree of freedom with the usual notion of a phantom scalar field.
We shall show that the behavior of this scalar field in $f(R)$ models which allow crossing the PDL is similar to that of a quintessence field with a negative potential rather than that of a phantom with a wrong-sign kinetic term. Phantom as a Minimally Coupled Scalar Field =========================================== The simplest class of models that provides a redshift-dependent equation of state parameter is a scalar field minimally coupled to gravity whose dynamics is determined by a properly chosen potential function $V(\varphi)$. Such models are described by the Lagrangian density [^2] $$L=\frac{1}{2}\sqrt{-g}(R-\alpha ~g^{\mu\nu}\partial_{\mu}\varphi \partial_{\nu}\varphi-2V(\varphi)) \label{a1}$$ where $\alpha=+1$ for quintessence and $\alpha=-1$ for phantom. The distinguishing feature of the phantom field is that its kinetic term enters (\[a1\]) with the opposite sign in contrast to quintessence or ordinary matter. The Einstein field equations which follow from (\[a1\]) are $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=T_{\mu\nu} \label{a2}$$ with $$T_{\mu\nu}=\alpha~\partial_{\mu}\varphi \partial_{\nu}\varphi-\frac{1}{2}\alpha~g_{\mu\nu} \partial_{\gamma}\varphi \partial^{\gamma}\varphi-g_{\mu\nu} V(\varphi) \label{a3}$$ In a homogeneous and isotropic spacetime, $\varphi$ is a function of time alone. In this case, one may compare (\[a3\]) with the stress tensor of a perfect fluid with energy density $\rho_{\varphi}$ and pressure $p_{\varphi}$.
This leads to the following identifications $$\rho_{\varphi}=\frac{1}{2}\alpha \dot{\varphi}^2+V(\varphi)~,~~~~~p_{\varphi}=\frac{1}{2}\alpha \dot{\varphi}^2-V(\varphi) \label{a4}$$ The equation of state parameter is then given by $$\omega_{\varphi}=\frac{\frac{1}{2}\alpha \dot{\varphi}^2-V(\varphi)}{\frac{1}{2}\alpha \dot{\varphi}^2+V(\varphi)} \label{a5}$$ In the case of a quintessence (phantom) field with $V(\varphi)>0$ ($V(\varphi)<0$) the equation of state parameter remains in the range $-1<\omega_{\varphi}<1$. In the limit of a small kinetic term (slow-roll potentials [@slow]), it approaches $\omega_{\varphi}=-1$ but does not cross this line. The phantom barrier can be crossed by either a phantom field ($\alpha<0$) with $V(\varphi)>0$ or a quintessence field ($\alpha>0$) with $V(\varphi)<0$, when we have $2|V(\varphi)|>\dot{\varphi}^2$. This situation corresponds to $$\rho_{\varphi}>0~~~~~,~~~~~p_{\varphi}<0~~~~~,~~~~~V(\varphi)>0~~~~~~~~~~~~~~~phantom \label{a51}$$ $$\rho_{\varphi}<0~~~~~,~~~~~p_{\varphi}>0~~~~~,~~~~~V(\varphi)<0~~~~~~~~~~quintessence\label{a52}$$ Here it is assumed that the scalar field has a canonical kinetic term $\pm \frac{1}{2}\dot{\varphi}^2$. It is shown [@vik] that any minimally coupled scalar field with a generalized kinetic term (k-essence Lagrangian [@k]) cannot lead to crossing the PDL through a stable trajectory. However, there are models that employ Lagrangians containing multiple fields [@multi] or scalar fields with non-minimal coupling [@non] which in principle can achieve crossing the barrier.\ Some remarks are in order concerning the negative potential $V(\varphi)<0$ appearing in (\[a52\]). In fact, the role of negative potentials in cosmological dynamics has recently been investigated by some authors [@neg]. One of the important points about cosmological models containing such potentials is that they predict that the universe may end in a singularity even if it is not closed.
For more clarification, consider a model containing different kinds of energy densities such as matter, radiation, scalar fields and so on. The Friedmann equation in a flat universe is $H^2 \propto \rho_{t}$ with $\rho_{t}=\Sigma_{i}\rho_{i}$ being the sum of all energy densities. It is clear that the universe expands forever if $\rho_{t}>0$. However, if the contribution of some kind of energy is negative so that $\rho_{i}<0$, then it is possible to have $H^2=0$ at finite time, after which the size of the universe starts to decrease [^3]. We will return to this issue in the context of $f(R)$ gravity models in the next section.\ The possible existence of a fluid with a super-negative pressure ($\omega<-1$) leads to problems such as vacuum instability and violation of energy conditions [@carroll]. For a perfect fluid with energy density $\rho$ and pressure $p$, the weak energy condition requires that $\rho\geq 0$ and $\rho+p \geq 0$. These state that the energy density is positive and the pressure is not too large compared to the energy density. The null energy condition $\rho+p\geq 0$ is a special case of the latter and implies that the energy density can be negative only if there is a compensating positive pressure. The strong energy condition, a hallmark of general relativity, states that $\rho+p \geq 0$ and $\rho+3p\geq 0$. It implies the null energy condition and excludes excessively large negative pressures. The null dominant energy condition is the statement that $\rho\geq |p|$. The physical motivation of this condition is to prevent vacuum instability or propagation of energy outside the light cone. Applied to an equation of state $p=\omega \rho$ with a constant $\omega$, it means that $\omega \geq -1$. Violation of all these reasonable constraints by the phantom gives an unusual feature to this principal energy component of the universe. There are however some remarks concerning how these unusual features may be circumvented [@carroll] [@mc].
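The crossing conditions (\[a51\])–(\[a52\]) are easy to verify numerically from (\[a4\])–(\[a5\]). The following sketch, with purely illustrative field values (not fits to data), checks that a quintessence field ($\alpha=+1$) with a negative potential and $2|V|>\dot{\varphi}^2$ gives $\omega_{\varphi}<-1$ with $\rho_{\varphi}<0$, while a phantom ($\alpha=-1$) with a positive potential gives $\omega_{\varphi}<-1$ with $\rho_{\varphi}>0$:

```python
def fluid(alpha, phidot, V):
    """Energy density and pressure (a4) for kinetic sign alpha = +/-1."""
    rho = 0.5 * alpha * phidot**2 + V
    p = 0.5 * alpha * phidot**2 - V
    return rho, p

def eos(alpha, phidot, V):
    """Equation of state parameter (a5)."""
    rho, p = fluid(alpha, phidot, V)
    return p / rho

# quintessence (alpha = +1) with V < 0 and 2|V| > phidot^2: case (a52)
rho_q, p_q = fluid(+1, 0.5, -1.0)   # rho < 0, p > 0
w_q = eos(+1, 0.5, -1.0)            # omega < -1

# phantom (alpha = -1) with V > 0 and 2|V| > phidot^2: case (a51)
rho_p, p_p = fluid(-1, 0.5, +1.0)   # rho > 0, p < 0
w_p = eos(-1, 0.5, +1.0)            # omega < -1
```

With these symmetric choices the two cases give the same $\omega_{\varphi}<-1$, differing only in the signs of $\rho_{\varphi}$ and $p_{\varphi}$, exactly as in (\[a51\])–(\[a52\]).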
$f(R)$ Gravity ============== Let us consider an $f(R)$ gravity model described by the action $$S=\frac{1}{2} \int d^{4}x \sqrt{-g}~ f(R) + S_{m}(g_{\mu\nu}, \psi)\label{b1}$$ where $g$ is the determinant of $g_{\mu\nu}$, $f(R)$ is an unknown function of the scalar curvature $R$ and $S_{m}$ is the matter action depending on the metric $g_{\mu\nu}$ and some matter field $\psi$. It is well-known that these models are equivalent to a scalar field minimally coupled to gravity with an appropriate potential function. In fact, we may use a new set of variables $$\bar{g}_{\mu\nu} =p~ g_{\mu\nu} \label{b2}$$ $$\phi = \frac{1}{2\beta} \ln p \label{b3}$$ where $p\equiv\frac{df}{dR}=f^{'}(R)$ and $\beta=\sqrt{\frac{1}{6}}$. This is indeed a conformal transformation which transforms the above action in the Jordan frame into the Einstein frame [@soko] [@maeda] [@wands] $$S=\frac{1}{2} \int d^{4}x \sqrt{-\bar{g}}~\{ \bar{R}-\bar{g}^{\mu\nu} \partial_{\mu} \phi~ \partial_{\nu} \phi -2V(\phi)\} + S_{m}(\bar{g}_{\mu\nu} e^{2\beta \phi}, \psi) \label{b4}$$ In the Einstein frame, $\phi$ is a minimally coupled scalar field with a self-interacting potential which is given by $$V(\phi(R))=\frac{Rf'(R)-f(R)}{2f'^2(R)} \label{b5}$$ Note that the conformal transformation induces a coupling of the scalar field $\phi$ to the matter sector.
The strength of this coupling, $\beta$, is fixed to be $\sqrt{\frac{1}{6}}$ and is the same for all types of matter fields.\ Variation of the action (\[b4\]) with respect to $\bar{g}_{\mu\nu}$ gives the gravitational field equations $$\bar{G}_{\mu\nu}=T^{\phi}_{\mu\nu}+\bar{T}^{m}_{\mu\nu} \label{b6}$$ where $$\bar{T}^{m}_{\mu\nu}=\frac{-2}{\sqrt{-\bar{g}}}\frac{\delta S_{m}}{\delta \bar{g}^{\mu\nu}}\label{b7}$$ $$T^{\phi}_{\mu\nu}=\partial_{\mu} \phi~\partial_{\nu} \phi -\frac{1}{2}\bar{g}_{\mu\nu} \partial_{\gamma} \phi~\partial^{\gamma} \phi-V(\phi) \bar{g}_{\mu\nu} \label{b8}$$ Here $\bar{T}^{m}_{\mu\nu}$ and $T^{\phi}_{\mu\nu}$ are the stress tensors of the matter system and of the minimally coupled scalar field $\phi$, respectively. Comparing (\[a3\]) and (\[b8\]) indicates that $\alpha=1$ and $\phi$ appears as a normal scalar field. Thus the equation of state parameter which corresponds to $\phi$ is given by $$\omega_{\phi} \equiv \frac{p_{\phi}}{\rho_{\phi}}=\frac{\frac{1}{2} \dot{\phi}^2-V(\phi)}{\frac{1}{2} \dot{\phi}^2+V(\phi)} \label{b9}$$ Inspection of (\[b9\]) reveals that for $\omega_{\phi}<-1$, we should have $V(\phi)<0$ and $|V(\phi)|>\frac{1}{2}\dot{\phi}^2$, which corresponds to (\[a52\]). In explicit terms, crossing the PDL in this case requires that $\phi$ appear as a quintessence (rather than a phantom) field with a negative potential.\ Here the scalar field $\phi$ has a geometric nature and is related to the curvature scalar by (\[b3\]). One may therefore use (\[b3\]) and (\[b5\]) in the expression (\[b9\]) to obtain $$\omega_{\phi}=\frac{3\dot{R}^2 f''^2(R)-\frac{1}{2}(Rf'(R)-f(R))}{3\dot{R}^2 f''^2(R)+\frac{1}{2}(Rf'(R)-f(R))} \label{b10}$$ which is an expression relating $\omega_{\phi}$ to the function $f(R)$. It is now possible to use (\[b10\]) and find the functional forms of $f(R)$ that fulfill $\omega_{\phi}<-1$.
In general, to find such $f(R)$ gravity models one may start with a particular $f(R)$ function in the action (\[b1\]) and solve the corresponding field equations to find the form of $H(z)$. One can then use this function in (\[b10\]) to obtain $\omega_{\phi}(z)$. However, this approach is not efficient in view of the complexity of the field equations. An alternative approach is to start from the best-fit parametrization $H(z)$ obtained directly from data and use this $H(z)$ for a particular $f(R)$ function in (\[b10\]) to find $\omega_{\phi}(z)$. We will follow the latter approach to find $f(R)$ models that provide crossing of the phantom barrier.\ We begin with the Hubble parameter $H\equiv \frac{\dot{a}}{a}$. Its derivative with respect to cosmic time $t$ is $$\dot{H}=\frac{\ddot{a}}{a}-(\frac{\dot{a}}{a})^2 \label{b11}$$ where $a(t)$ is the scale factor of the Friedmann-Robertson-Walker (FRW) metric. Combining this with the definition of the deceleration parameter $$q(t)=-\frac{\ddot{a}}{aH^2} \label{b12}$$ gives $$\dot{H}=-(q+1)H^2 \label{b13}$$ One may use $z=\frac{a(t_{0})}{a(t)}-1$, with $z$ being the redshift, and the relation (\[b13\]) to write the Hubble parameter in its integrated form $$H(z)=H_{0}~exp~[\int_{0}^{z} (1+q(u))d\ln(1+u)] \label{b14}$$ where the subscript “0” indicates the present value of a quantity. Now if a function $q(z)$ is given, then we can find the evolution of the Hubble parameter. Here we use a two-parametric reconstruction function characterizing $q(z)$ [@wang][@q], $$q(z)=\frac{1}{2}+\frac{q_{1}z+q_{2}}{(1+z)^2} \label{b15}$$ where fitting this model to the Gold data set gives $q_{1}=1.47^{+1.89}_{-1.82}$ and $q_{2}=-1.46\pm 0.43$ [@q]. Using this in (\[b14\]) yields $$H(z)=H_{0}(1+z)^{3/2}exp[\frac{q_{2}}{2}+\frac{q_{1}z^2-q_{2}}{2(z+1)^2}] \label{b16}$$ In a spatially flat FRW spacetime $R=6(\dot{H}+2H^2)$ and therefore $\dot{R}=6(\ddot{H}+4\dot{H}H)$.
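As a consistency check (not part of the original derivation), the closed form (\[b16\]) can be compared against a direct numerical evaluation of the integral in (\[b14\]) with the best-fit values $q_{1}=1.47$, $q_{2}=-1.46$. The sketch below works in units with $H_{0}=1$:

```python
import math

q1, q2 = 1.47, -1.46   # best-fit values from the Gold dataset, eq. (b15)

def q(z):
    """Deceleration parameter parametrization (b15)."""
    return 0.5 + (q1 * z + q2) / (1.0 + z)**2

def H_closed(z, H0=1.0):
    """Closed-form Hubble parameter (b16)."""
    return H0 * (1 + z)**1.5 * math.exp(q2 / 2 + (q1 * z**2 - q2) / (2 * (1 + z)**2))

def H_numeric(z, H0=1.0, n=20000):
    """Direct evaluation of (b14) by the trapezoidal rule."""
    h = z / n
    s = 0.0
    for i in range(n + 1):
        u = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * (1 + q(u)) / (1 + u)
    return H0 * math.exp(s * h)
```

The two evaluations agree to numerical precision, confirming that (\[b16\]) is indeed the integral (\[b14\]) of the parametrization (\[b15\]).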
In terms of the deceleration parameter we have $$R=6(1-q)H^2 \label{b17}$$and $$\dot{R}=6H^3 \{2(q^2-1)-\frac{\dot{q}}{H}\} \label{b18}$$ the latter being equivalent to $$\dot{R}=6H^3 \{2(q^2-1)+(1+z)\frac{dq}{dz}\} \label{b19}$$ It is now possible to use (\[b15\]) and (\[b16\]) to find $R$ and $\dot{R}$ in terms of the redshift. Then for a given $f(R)$ function, the relation (\[b10\]) determines the evolution of the equation of state parameter $\omega_{\phi}(z)$.\ As an illustration we apply this procedure to some $f(R)$ functions. Let us first consider the model [@cap] [@A] $$f(R)=R+\lambda R^n \label{b20}$$ in which $\lambda$ and $n$ are constant parameters. In terms of the values attributed to these parameters, the model (\[b20\]) is divided into three cases [@A]. Firstly, when $n>1$ there is a stable matter-dominated era which is not followed by an asymptotically accelerated regime. In this case, $n = 2$ corresponds to Starobinsky’s inflation and the accelerated phase exists in the asymptotic past rather than in the future. Secondly, when $0<n<1$ there is a stable matter-dominated era followed by an accelerated phase only for $\lambda<0$. Finally, in the case $n<0$ there are no accelerated and matter-dominated phases for $\lambda>0$ and $\lambda<0$, respectively. Thus the model (\[b20\]) is cosmologically viable in the region of parameter space given by $\lambda<0$ and $0<n<1$.\ Due to the complexity of the resulting $\omega_{\phi}(z)$ function, we do not write it explicitly here and only plot it in Fig.1a for some parameters. As the figure shows, there is no phantom behavior and $\omega_{\phi}(z)$ remains near the line of the cosmological constant $\omega_{\phi}=-1$. We also plot $\omega_{\phi}$ in terms of $n$ and $\lambda$ for $z=1$ in Fig.1b.
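Putting (\[b15\])–(\[b19\]) into (\[b10\]) as printed, the $\omega_{\phi}(z)$ curve for the model (\[b20\]) can be reproduced schematically. The parameter choice $\lambda=-1$, $n=1/2$ (inside the viable region $\lambda<0$, $0<n<1$) and the units $H_{0}=1$ are illustrative only. Note that for $\lambda<0$ and $0<n<1$ one has $Rf'-f=\lambda(n-1)R^{n}>0$, so (\[b10\]) confines $\omega_{\phi}$ to $[-1,1]$ and the PDL is never crossed, in line with Fig.1a:

```python
import math

q1, q2 = 1.47, -1.46    # Gold-dataset fit of (b15)
lam, n = -1.0, 0.5      # illustrative point in the viable region of (b20)
H0 = 1.0                # units with H0 = 1 (illustrative only)

def q(z):
    return 0.5 + (q1 * z + q2) / (1 + z)**2

def dqdz(z):
    """Derivative of (b15) with respect to z."""
    return (q1 * (1 + z) - 2 * (q1 * z + q2)) / (1 + z)**3

def H(z):
    """Closed-form Hubble parameter (b16)."""
    return H0 * (1 + z)**1.5 * math.exp(q2 / 2 + (q1 * z**2 - q2) / (2 * (1 + z)**2))

def omega_phi(z):
    """Equation of state parameter (b10) for f(R) = R + lam*R**n."""
    R = 6 * (1 - q(z)) * H(z)**2                               # (b17)
    Rdot = 6 * H(z)**3 * (2 * (q(z)**2 - 1) + (1 + z) * dqdz(z))  # (b19)
    f = R + lam * R**n
    fp = 1 + n * lam * R**(n - 1)
    fpp = n * (n - 1) * lam * R**(n - 2)
    num = 3 * Rdot**2 * fpp**2 - 0.5 * (R * fp - f)
    den = 3 * Rdot**2 * fpp**2 + 0.5 * (R * fp - f)
    return num / den
```

For this parameter choice $\omega_{\phi}(0)$ comes out negative but above $-1$, consistent with the qualitative behavior described in the text.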
The figure shows that $\omega_{\phi}$ remains near unity except for a small region in which $-1\leq \omega_{\phi}<0$, and therefore the PDL is never crossed.\ Now we consider the models presented by Starobinsky [@star] $$f(R)=R-\gamma R_{c} \{1-[1+(\frac{R}{R_{c}})^2]^{-m}\} \label{b21}$$ and Hu-Sawicki [@hs] $$f(R)=R-\gamma R_{c}\{\frac{(\frac{R}{R_{c}})^m}{1+(\frac{R}{R_{c}})^m}\} \label{b22}$$ where $\gamma$, $m$ and $R_{c}$ are positive constants with $R_{c}$ being of the order of the presently observed effective cosmological constant. Using the same procedure, we can obtain the evolution of the equation of state parameter for both models (\[b21\]) and (\[b22\]). We plot the resulting functions in Fig.2. The figures show that while the model (\[b22\]) allows crossing the PDL for the given values of the parameters, in the model (\[b21\]) the equation of state parameter remains near $\omega_{\phi}=-1$. To explore the behavior of the models over a wider range of the parameters, we also plot $\omega_{\phi}$ at the redshift $z=1$ in Fig.3.\ It is interesting to consider violation of the energy conditions for the model (\[b22\]), which can exhibit phantom behavior. In Fig.4, we plot some expressions corresponding to the null, weak and strong energy conditions. As indicated in the figures, the model violates the weak and strong energy conditions while it respects the null energy condition during a period of the evolution of the universe. Moreover, Fig.4a indicates that $\rho_{\phi}<0$ for some parameters in terms of which the PDL is crossed. This is in accord with (\[a52\]) and (\[b9\]), which require that, in order to cross the PDL, $\phi$ should be a quintessence field with a negative potential function. Concluding Remarks ================== We have studied phantom behavior for some $f(R)$ gravity models in which the late-time acceleration of the universe is realized.
Working in the Einstein conformal frame, we separate out the scalar degree of freedom which is responsible for the late-time acceleration. Comparing this scalar field with the phantom field, we have made our first observation that the former appears as a minimally coupled quintessence whose dynamics is characterized by a negative potential. The impact of such a negative potential on cosmological dynamics is that it leads to a collapsing universe, or a big crunch [@neg]. As a consequence, the $f(R)$ gravity models in which crossing the phantom barrier is realized predict that the universe stops expanding and eventually collapses. This is in contrast to phantom scalar fields, for which the final stage of the universe has a divergence of the scale factor at a finite time, or a big rip [@ph] [@caldwell].\ We have used the reconstruction functions $q(z)$ and $H(z)$ fitted to the Gold data set to find the evolution of the equation of state parameter $\omega_{\phi}(z)$ for some cosmologically viable $f(R)$ models. We obtained the following results:\ \ 1) The model (\[b20\]) does not provide crossing of the PDL. It does however allow $\omega_{\phi}$ to be negative in a small region of the parameter space. For $n=0$, the expression (\[b20\]) appears as Einstein gravity plus a cosmological constant. This state is indicated in Fig.1b, where the equation of state parameter experiences a sharp decrease to $\omega_{\phi}=-1$.\ \ 2) We also do not observe phantom behavior in Starobinsky’s model (\[b21\]). In the region of the parameter space corresponding to $m>0.5$ the equation of state parameter decreases to $\omega_{\phi}=-1$ and the model effectively appears as $\Lambda$CDM.\ \ 3) The same analysis is carried out for Hu-Sawicki’s model (\[b22\]). This model exhibits phantom crossing in a small region of the parameter space as indicated in Fig.2b and Fig.3b. Due to crossing of the PDL in this case, we also examine the energy conditions.
We find that, in contrast to the weak and strong energy conditions, which are violated, the null energy condition holds during a period of the evolution.\ Although the properties of $\phi$ differ from those of the phantom due to the sign of its kinetic term, violation of energy conditions remains as a consequence of crossing the PDL in both cases. However, the scalar field $\phi$ in our case should not be interpreted as an exotic matter since it has a geometric nature characterized by (\[b3\]). In fact, taking $\omega_{\phi}<-1$ as a condition in (\[b10\]) just leads to some algebraic relations constraining the explicit form of the $f(R)$ function. [99]{} A. G. Riess et al., Astron. J. [**116**]{}, 1009 (1998)\ S. Perlmutter et al., Bull. Am. Astron. Soc., [**29**]{}, 1351 (1997)\ S. Perlmutter et al., Astrophys. J., [**517**]{} 565 (1997) L. Melchiorri et al., Astrophys. J. Letts., [**536**]{}, L63 (2000)\ C. B. Netterfield et al., Astrophys. J., [**571**]{}, 604 (2002)\ N. W. Halverson et al., Astrophys. J., [**568**]{}, 38 (2002)\ A. E. Lange et al., Phys. Rev. D [**63**]{}, 042001 (2001)\ A. H. Jaffe et al., Phys. Rev. Lett. [**86**]{}, 3475 (2001) M. Tegmark et al., Phys. Rev. D [**69**]{}, 103501 (2004)\ U. Seljak et al., Phys. Rev. D [**71**]{}, 103515 (2005) B. Jain and A. Taylor, Phys. Rev. Lett. [**91**]{}, 141302 (2003) S. M. Carroll, V. Duvvuri, M. Trodden, M. S. Turner, Phys. Rev. D [**70**]{}, 043528 (2004) S. M. Carroll, A. De Felice, V. Duvvuri, D. A. Easson, M. Trodden and M. S. Turner, Phys. Rev. D [**71**]{}, 063513 (2005)\ G. Allemandi, A. Browiec and M. Francaviglia, Phys. Rev. D [**70**]{}, 103503 (2004)\ X. Meng and P. Wang, Class. Quant. Grav. [**21**]{}, 951 (2004)\ M. E. Soussa and R. P. Woodard, Gen. Rel. Grav. [**36**]{}, 855 (2004)\ S. Nojiri and S. D. Odintsov, Phys. Rev. D [**68**]{}, 123512 (2003)\ P. F. Gonzalez-Diaz, Phys. Lett. B [**481**]{}, 353 (2000)\ K. A. Milton, Grav. Cosmol. [**9**]{}, 66 (2003) A. G. Riess et al., Astrophys. J.
[**607**]{}, 665 (2004) U. Alam, V. Sahni, T. D. Saini and A. A. Starobinsky, Mon. Not. Roy. Astron. Soc. [**354**]{}, 275 (2004)\ S. Nesseris and L. Perivolaropoulos, Phys. Rev. D [**70**]{}, 043531 (2004) R. R. Caldwell, Phys. Lett. B [**545**]{}, 23 (2002) R. R. Caldwell, M. Kamionkowski and N. N. Weinberg Phys. Rev. Lett. [**91**]{}, 071301 (2003) K. Bamba, C. Geng, S. Nojiri, S. D. Odintsov, Phys. Rev. D [**79**]{}, 083014 (2009) K. Nozari and T. Azizi, Phys. Lett. B [**680**]{}, 205 (2009) G. Magnano and L. M. Sokolowski, Phys. Rev. D [**50**]{}, 5039 (1994) Y. M. Cho, Class. Quantum Grav. [**14**]{}, 2963 (1997)\ E. Elizalde, S. Nojiri and S. D. Odintsov, Phys. Rev. D [**70**]{}, 043539 (2004)\ S. Nojiri and S. D. Odintsov, Phys. Rev. D [**74**]{}, 086005 (2006)\ S. Capozziello, S. Nojiri, S. D. Odintsov and A. Troisi, Phys. Lett. B [**639**]{}, 135 (2006)\ K. Bamba, C. Q. Geng, S. Nojiri and S. D. Odintsov, Phys. Rev. D [ **79**]{}, 083014 (2009)\ K. Nozari and S. D. Sadatian, Mod. Phys. Lett. A [**24**]{}, 3143 (2009) R. J. Scherrer and A. A. Sen, Phys. Rev. D [**77**]{}, 083515 (2008)\ R. J. Scherrer and A. A. Sen, Phys. Rev. D [**78**]{}, 067303 (2008)\ S. Dutta, E. N. Saridakis and R. J. Scherrer, Phys. Rev. D [**79**]{}, 103005 (2009) A. Vikman, Phys. Rev. D [**71**]{}, 023515 (2005) C. Armendariz-Picon, V. Mukhanov and P. J. Steinhardt, Phys. Rev. D [**63**]{}, 103510 (2001)\ A. Melchiorri, L. Mersini, C. J. Odman and M. Trodden, Phys. Rev. D [**68**]{}, 043509 (2003) R. R. Caldwell and M. Doran, Phys.Rev. D [**72**]{}, 043527 ( 2005)\ W. Hu, Phys. Rev. D [**71**]{}, 047301 (2005)\ Z. K. Guo, Y. S. Piao, X. M. Zhang and Y. Z. Zhang, Phys.Lett. B [**608**]{}, 177 (2005)\ B. Feng, X. L. Wang and X. M. Zhang, Phys. Lett. B [**607**]{}, 35 (2005)\ B. Feng, M. Li, Y. S. Piao and X. Zhang, Phys. Lett. B [**634**]{}, 101, (2006) L. Perivolaropoulos, JCAP 0510, 001 (2005) A. Linde, JHEP 0111, 052 (2001)\ J. Khoury, B. A. Ovrut, P. J. Steinhardt and N. 
Turok, Phys. Rev. D [**64**]{}, 123522 (2001)\ P. J. Steinhardt and N. Turok, Phys. Rev. D [**65**]{}, 126003 (2002)\ N. Felder, A.V. Frolov, L. Kofman and A. V. Linde, Phys. Rev. D [**66**]{}, 023507 (2002) A. de la Macorra and G. German, Int. J. Mod. Phys. D [ **13**]{}, 1939 (2004) S. M. Carroll, M. Hoffman and M. Trodden Phys. Rev. D [**68**]{}, 023509 (2004) B. McInnes, JHEP 0208, 029 (2002) K. Maeda, Phys. Rev. D [**39**]{}, 3159 (1989) D. Wands, Class. Quant. Grav. [**11**]{}, 269 (1994) Y.G. Gong and A. Wang, Phys. Rev. D [**73**]{}, 083506 (2006) Y. Gong and A. Wang, Phys. Rev. D [**75**]{}, 043520 (2007) S. Capozziello, V. F. Cardone, S. Carloni and A. Troisi, Int. J. Mod. Phys. D [**12**]{}, 1969 (2003) L. Amendola, R. Gannouji, D. Polarski and S. Tsujikawa, Phys. Rev. D [**75**]{}, 083504 (2007) A. A. Starobinsky, JETP. Lett. [**86**]{}, 157 (2007) W. Hu and I. Sawicki, Phys. Rev. D [**76**]{}, 064004 (2007) ![a) The plot of $\omega_{\phi}$ in terms of $z$ and for some values of the parameters $\lambda$ and $n$. There is not any phantom behavior in these cases. b) The plot of $\omega_{\phi}$ for the redshift $z=0.25$. Even though in a small region $\omega_{\phi}$ takes negative values, it does not however cross the PDL. ](fig1a.eps "fig:"){width="0.45\linewidth"} ![a) The plot of $\omega_{\phi}$ in terms of $z$ and for some values of the parameters $\lambda$ and $n$. There is not any phantom behavior in these cases. b) The plot of $\omega_{\phi}$ for the redshift $z=0.25$. Even though in a small region $\omega_{\phi}$ takes negative values, it does not however cross the PDL. ](fig1b.eps "fig:"){width="0.45\linewidth"} ![The plot of $\omega_{\phi}(z)$ for (a) Starobinsky’s and (b) Hu-Sawicki’s models. As the figures indicate, there is a phantom-like behavior in the latter.](fig2a.eps "fig:"){width="0.49\linewidth"} ![The plot of $\omega_{\phi}(z)$ for (a) Starobinsky’s and (b) Hu-Sawicki’s models. 
As the figures indicate, there is a phantom-like behavior in the latter.](fig2b.eps "fig:"){width="0.49\linewidth"} ![The plot of $\omega_{\phi}$ at the redshift $z=0.25$ for (a) Starobinsky’s and (b) Hu-Sawicki’s models. As the figures show, the PDL can be crossed in the latter in a small region of the parameter space.](fig3a.eps "fig:"){width="0.43\linewidth"} ![The plot of $\omega_{\phi}$ at the redshift $z=0.25$ for (a) Starobinsky’s and (b) Hu-Sawicki’s models. As the figures show, the PDL can be crossed in the latter in a small region of the parameter space.](fig3b.eps "fig:"){width="0.43\linewidth"} ![image](fig4a.eps){width="0.45\linewidth"} ![image](fig4b.eps){width="0.45\linewidth"} ![Variations of (a) $\rho_{\phi}$, (b) $\rho_{\phi}+p_{\phi}$ and (c) $\rho_{\phi}+3p_{\phi}$ in terms of the redshift for Hu-Sawicki’s model. The plots indicate that $\rho_{\phi}< 0$ and $\rho_{\phi}+3p_{\phi}<0$ while $\rho_{\phi}+p_{\phi}>0$ for $z<0.4$. The curves are plotted for the same values of the parameters $\gamma$ and $m$ as in Fig. 2b.](fig4c.eps){width="0.45\linewidth"} [^1]: e-mail: y-bisabr@srttu.edu. [^2]: We use the unit system $8\pi G=\hbar=c=1$ and the metric signature $(-,+,+,+)$. [^3]: For a more detailed discussion see, e.g., [@mac].
--- author: - | \ Royal Society University Research Fellow\ School of Physics & Astronomy\ The University of Birmingham\ BIRMINGHAM B15 2TT, UK\ E-mail: title: Experimental Tests of the Standard Model --- BHAM-HEP/01-02\ 31 October 2001 Introduction ============ The field of precise experimental tests of the electroweak sector of the Standard Model encompasses a wide range of experiments. The current status of these is reviewed in this report, with emphasis placed on new developments in the year preceding summer 2001. A theme common to many measurements is that theoretical and experimental uncertainties are comparable. The theoretical uncertainties, usually coming from the lack of higher-order calculations, can be at least as hard to estimate reliably as the experimental errors. At low energies, new hadronic cross-section results in e$^+$e$^-$ collisions are discussed. The new measurement of the muon anomalous magnetic moment at Brookhaven is reported and compared with recent Standard Model calculations. Results from the now complete LEP data sample are reviewed, together with recent results from the Tevatron, HERA and SLD. The synthesis of many of these results into a global test of the Standard Model via a comprehensive fit is summarised. Finally, prospects for the next few years are considered. Many results presented here are preliminary: they are not labelled explicitly for lack of space. References should be consulted for details. R and $\mathbf{\alpha(M_Z^2)}$ ============================== The BES-II detector at the BEPC electron-positron collider in Beijing, China, has been operating since 1997. Many measurements have been made in the centre-of-mass energy range $2<\sqrt{s}<5$ GeV, but of relevance to electroweak physics are those of the ratio $$R = \frac{\sigma(\Mepem\to\mathrm{hadrons})}{\sigma_0(\Mepem\to\Mmpmm)}$$ where the denominator, $\sigma_0(\Mepem\to\Mmpmm)=4\pi\alpha^2(0)/(3s)$, is the lowest-order QED prediction.
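As a numerical aside (not taken from the text), the lowest-order QED denominator $\sigma_0=4\pi\alpha^2(0)/(3s)$ can be evaluated directly. A minimal sketch in Python, using the standard conversion constant $(\hbar c)^2 \approx 389379$ nb GeV$^2$:

```python
import math

ALPHA0 = 1 / 137.03599976      # fine-structure constant at zero momentum
HBARC2_NB = 389379.0           # (hbar*c)^2 in nb*GeV^2 (standard conversion)

def sigma0_mumu_nb(sqrt_s_gev):
    """Lowest-order QED cross-section sigma_0(e+e- -> mu+mu-) in nb."""
    s = sqrt_s_gev ** 2
    return 4 * math.pi * ALPHA0 ** 2 * HBARC2_NB / (3 * s)

# This reproduces the familiar shorthand sigma_0 ~ 86.85 nb / s[GeV^2]:
print(sigma0_mumu_nb(1.0))   # pointlike cross-section at sqrt(s) = 1 GeV
print(sigma0_mumu_nb(3.0))   # in the middle of the BES-II energy range
```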
The BES measurements [@bib:besr] of R are presented in Figure \[fig:besr\], where the improvement in quality over previous, often very early, measurements is clear. Around 1000 hadronic events are used at each energy, and an average precision of 6.6% is obtained at each of the 85 energy points. The point-to-point correlated error is estimated to be 3.3%, providing a factor of 2 to 3 improvement over earlier measurements. In order to achieve such an improvement, detailed studies of the detector acceptance for hadronic events at low $\sqrt{s}$ were made, in collaboration with the Lund Monte Carlo team. The experimental acceptance for hadronic events varies in the range 50 to 87% from 2 to 4.8 GeV respectively, so the modelling at low $\sqrt{s}$ is of most concern. Good descriptions of the hadronic event data were obtained from a tuned version of the [LUARLW]{} generator, and the hadronic model-dependent uncertainty is estimated to be as low as 2-3%. At even lower energies, analysis continues of the large data sample from CMD-2 [@bib:cmd2] at the VEPP-2M collider at Novosibirsk taken over $0.36<\sqrt{s}<1.4$ GeV. Many exclusive final-states are studied, with the main contribution to the overall cross-section arising from $\pi^+\pi^-$ production. A key application of the low energy R measurements is in the prediction of the value of the electromagnetic coupling at the Z mass scale. This is modified from its zero-momentum value, $\alpha(0)=1/137.03599976(50)$, by vacuum polarisation loop corrections: $$\alpha(M_Z^2)=\frac{\alpha(0)}{1-\Delta\alpha_{e\mu\tau}(M_Z^2)- \dahad(M_Z^2)-\Delta\alpha_{top}(M_Z^2)} .$$ The contributions from leptonic and top quark loops ($\Delta\alpha_{e\mu\tau}$ and $\Delta\alpha_{top}$, respectively) are sufficiently well calculated knowing only the particle masses.
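The running formula can be checked numerically. In this sketch the leptonic and top contributions are assumed standard values not quoted in the text ($\Delta\alpha_{e\mu\tau}\approx0.031498$, $\Delta\alpha_{top}\approx-0.00007$), combined with the hadronic term discussed below:

```python
ALPHA0 = 1 / 137.03599976   # alpha(0), as quoted in the text

# Assumed standard loop contributions at s = M_Z^2 (not from the text):
D_ALPHA_LEP = 0.031498      # e, mu, tau loops
D_ALPHA_TOP = -0.00007      # top quark loop (small and negative)
D_ALPHA_HAD = 0.02761       # five-flavour hadronic term (Burkhardt-Pietrzyk value)

alpha_mz = ALPHA0 / (1 - D_ALPHA_LEP - D_ALPHA_HAD - D_ALPHA_TOP)
print(1 / alpha_mz)   # ~128.9, compared with 1/alpha(0) ~ 137.04
```

The uncertainty on $\dahad$ dominates the error on $\alpha(M_Z^2)$, which is why the low-energy R data matter.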
The $\dahad$ term contains low-energy hadronic loops, and must be calculated via a dispersion integral: $$\dahad(M_Z^2) = - \frac{\alpha M_Z^2}{3\pi} \Re\int_{4m_{\pi}^2}^\infty ds \frac{R(s)}{s(s-M_Z^2-i\epsilon)} .$$ The R data points must, at least, be interpolated to evaluate this integral. More sophisticated methods are employed by different authors, and use may also be made of $\tau$ decay spectral function data via isospin symmetry. A recent calculation [@bib:pietrzyk] using minimal assumptions has obtained $\dahad(M_Z^2)=0.02761\pm0.00036$, approximately a factor two more precise than a previous similar estimate which did not use the new BES-II data. With extra theory-driven assumptions, an error as low as $\pm0.00020$ may be obtained [@bib:martin]. Prospects for further improvements in measurements of the hadronic cross-section at low energies are good: an upgraded accelerator in Beijing should give substantially increased luminosity; CLEO proposes to run at lower centre-of-mass energies than before to examine the region from 3 to 5 GeV; DA$\Phi$NE may be able to access the low energy range with radiative events; and finally the concept of a very low energy ring to work together with the present PEP-II LER could give access to the poorly covered region between 1.4 and 2 GeV. The Muon Anomalous Magnetic Moment g-2 ====================================== The Brookhaven E821 experiment has recently reported [@bib:e821] a new measurement of the muon anomalous magnetic moment, $a_{\mu}$, by measuring the spin-precession frequency, $\Mwa$, of polarised muons in a magnetic field: $$a_{\mu} \equiv \frac{g-2}{2} = \frac{\Mwa m_{\mu}c}{e\langle B\rangle}$$ The muons circulate in a special-purpose storage ring constructed to have an extremely uniform magnetic field across its aperture. The spin-precession frequency $\Mwa$ is measured by observing the time variation of production of decay electrons above a fixed energy cut-off (2 GeV), as shown in Figure \[fig:e821\]. 
The mean bending field is measured using two sets of NMR probes: one fixed set mounted around the ring and used for continuous monitoring, and another set placed on a trolley which can be pulled right around the evacuated beam chamber. In practice, the magnetic field is re-expressed in terms of the mean proton NMR frequency, $\omega_p$, and $a_{\mu}$ extracted from: $$a_{\mu} = \frac{R}{\lambda-R}$$ where $R=\omega_a/\omega_p$ and $\lambda$ is the ratio of muon to proton magnetic moments. The latest E821 result, obtained using $0.95\times10^9$ $\mu^+$ decays, is [@bib:e821]: $$a_{\mu^+} = (11\,659\,202\pm14\pm6)\times 10^{-10}$$ The overall relative precision obtained is 1.3 parts per million: 1.2 ppm from statistics and 0.5 ppm from systematic errors. Data from a further $4\times10^9$ $\mu^+$ and $3\times10^9$ $\mu^-$ are in hand, and should result in a factor two improvement in the near future. Interpretation of this result in terms of the Standard Model and possible new physics requires detailed calculations of loop corrections to the simple QED $\mu\mu\gamma$ vertex, which gives the original $g=2$ at lowest order. The corrections may be subdivided into electromagnetic (QED), weak and hadronic parts according to the type of loops. The QED and weak terms are respectively calculated to be $a_{\mu}(QED) = (11\, 657\,470.57 \pm 0.29) \times 10^{-10}$, and $a_{\mu}(weak) = (15.2 \pm 0.4) \times 10^{-10}$. The hadronic corrections, although much smaller than the QED correction, provide the main source of uncertainty on the predicted $a_{\mu}$. To $\mathcal{O}(\alpha^3)$, the dominant corrections may be subdivided into the lowest and higher-order vacuum polarisation terms and higher-order “light-by-light” terms. The lowest-order (vacuum polarisation) term is numerically much the largest.
It can be calculated using a dispersion relation: $$a_{\mu}(had;LO) = \frac{\alpha^2(0)}{3\pi^2}\int_{4m_{\pi}^2}^{\infty} ds \frac{R(s)\hat{K}(s)}{s^2}$$ where $\hat{K}(s)$ is a known bounded function. As for $\alpha(M_Z^2)$, optional additional theory-driven assumptions may be made. Recent estimates of the lowest-order vacuum polarisation term are shown in Table \[tab:lovp\]. There is some ambiguity at the level of $\sim$5$\times10^{-10}$ about the treatment of further photon radiation in some of these calculations, as it may be included either here or as a higher-order correction, depending also on whether the input experimental data includes final-states with extra photons. The estimates agree with each other within the overall errors, which is not surprising since the data employed is mostly in common. It is notable that the best value available at the time of the E821 publication was that of Davier and Höcker (“DH(98/2)”), which is numerically the lowest of the calculations. The summed corrections are shown in figure \[fig:gminus2\] and compared with the new and previous measurements [@bib:oldgm2]. The major experimental improvement from the new E821 measurement is striking. At the time of publication, the most precise available calculation of $a_{\mu}$ led to a difference between data and theory of around 2.6 standard deviations [@bib:e821]. More recent calculations reduce that difference, in some cases to the one standard deviation level, thus also suggesting that the error on the prediction may have been too optimistic. At present there is therefore no reason to consider $a_{\mu}$ as giving evidence of physics beyond the Standard Model. The accuracy of the theoretical predictions will be even more severely challenged by an experimental measurement with a factor two smaller error, as expected in the near future. Theoretical progress is essential to obtain a maximum physics return from such a precise measurement. 
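The quoted discrepancy of around 2.6 standard deviations can be reproduced from the numbers above, taking as the Standard Model prediction the value used in the E821 publication (assumed here to be $(11\,659\,159.6\pm6.7)\times10^{-10}$; this number is not given in the text):

```python
import math

# Measurement (in units of 1e-10), statistical and systematic errors from the text
a_exp, stat, syst = 11659202.0, 14.0, 6.0
# SM prediction assumed from the E821 publication era (not quoted in the text)
a_sm, err_sm = 11659159.6, 6.7

diff = a_exp - a_sm
total_err = math.sqrt(stat**2 + syst**2 + err_sm**2)
print(diff / total_err)   # about 2.5-2.6 standard deviations
```

Shifting the hadronic term within the spread of recent calculations moves this significance substantially, which is the point made in the text.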
Recent News from the Z$^{\mathbf{0}}$ Pole {#sec:asy} ========================================== Measurements of the Z cross-section, width, and asymmetries have been available for many years from LEP and SLD data, and most results have now been finalised [@bib:lepls; @bib:alr]. Recently new results [@bib:afbbnew] have become available on the b quark forward-backward asymmetry ($\afbb$) from ALEPH and DELPHI, using inclusive lifetime-based b-tagging techniques and various quark charge indicators. Substantial improvements are obtained over earlier lifetime-tag measurements, so that this type of asymmetry measurement now has a comparable precision to that using a traditional lepton tag. The lepton and lifetime results are compatible, and together give a LEP average Z pole asymmetry of $$A_{\mathrm{FB}}^{\mathrm{0,b}} = 0.0990 \pm 0.0017 .$$ This result may be compared with other asymmetry measurements from LEP and SLD by interpreting $\afbb$ in terms of $\sintwl$. In doing this, it is effectively assumed that the b quark couplings are given by their Standard Model values. The result is shown in figure \[fig:sstw\], comparing to $\sintwl$ values derived from the leptonic forward-backward asymmetry from LEP ($A_{\mathrm{fb}}^{\mathrm{0,l}}$) [@bib:lepls]; that from the $\tau$ polarisation measurements ($P_{\tau}$) [@bib:ptau]; from the left-right polarisation asymmetry at SLD [@bib:alr]; from the charm forward-backward asymmetry [@bib:afbc]; and from inclusive hadronic event forward-backward asymmetry measurements ($Q_{\mathrm{fb}}$) [@bib:qfb]. The two most precise determinations of $\sintwl$, from $A_{\mathrm{LR}}$ and $\afbb$, differ at the level of 3.2 standard deviations. This might suggest that the b quark couplings to the Z differ from the Standard Model expectations, but such an interpretation is not compelling at present, and direct measurements via the left-right polarised forward-backward b quark asymmetry at SLD are not precise enough to help.
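The interpretation of $\afbb$ in terms of $\sintwl$ can be illustrated with the tree-level effective-coupling relations $A_{\mathrm{FB}}^{\mathrm{0,b}}=\frac{3}{4}A_eA_b$ and $A_f=2r_f/(1+r_f^2)$, with $r_f=1-4|Q_f|\sintwl$. The following is a simplified sketch only; the real analyses include further corrections:

```python
def asym(s2w, q_f):
    """Tree-level asymmetry parameter A_f from the effective mixing angle."""
    r = 1.0 - 4.0 * abs(q_f) * s2w        # ratio g_V/g_A for fermion f
    return 2.0 * r / (1.0 + r * r)

def afb_b(s2w):
    """Forward-backward b asymmetry at the Z pole: (3/4) A_e A_b."""
    return 0.75 * asym(s2w, -1.0) * asym(s2w, -1.0 / 3.0)

# Solve afb_b(s2w) = 0.0990 by bisection (A_FB falls as s2w rises)
target, lo, hi = 0.0990, 0.20, 0.25
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if afb_b(mid) > target:
        lo = mid
    else:
        hi = mid
print(0.5 * (lo + hi))   # ~0.232, in the neighbourhood of other determinations
```

The sensitivity enters almost entirely through $A_e$, which is why $\afbb$ is such a precise probe of $\sintwl$ despite the b couplings themselves being weakly constrained by it.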
Future improvements in b quark asymmetry measurements using the existing LEP data samples may help elucidate this issue, but scope for such improvement is limited. LEP-2 and Fermion-Pair Production ================================= With the completion of LEP-2 data-taking at the end of 2000, the integrated luminosity collected at energies of 161 GeV and above has reached 700 pb$^{-1}$ per experiment, giving each experiment 1 fb$^{-1}$ in total from the entire LEP programme. Following on from the measurements of the LEP-1 Z lineshape and forward-backward asymmetries, studies of fermion-pair production have continued at LEP-2. At these higher energies, fermion-pair events may be subdivided into those where the pair invariant mass has “radiatively returned” to the Z region or below, and non-radiative events with close to the full centre-of-mass energy. The cross-sections and forward-backward asymmetries for non-radiative events at the full range of LEP-2 energies are shown in Figures \[fig:fflep1\] and \[fig:fflep2\] for hadronic, muon and tau pair final states, averaged between all four LEP experiments [@bib:ffmeas]. Analogous measurements have been made for electrons, b and c quarks [@bib:ffmeas]. The Standard Model expectations describe the data well. Limits can be placed on new physics from these data [@bib:ffmeas]. As an example, limits may be placed on new Z$^\prime$ bosons which do not mix with the Z, as indicated in Table \[tab:fflim\]. Z’s and W’s at Colliders with Hadrons ===================================== Electroweak fermion-pair production has also been studied at the Tevatron, in the Drell-Yan process. Updated results on high mass electron pairs were presented at this conference [@bib:cdfdy; @bib:gerber]: both cross-sections and asymmetries are well described by the Standard Model expectations, and extend beyond the LEP-2 mass reach to around 500 GeV (see Figure \[fig:drellyan\]).
As indicated in the figure, there is some sensitivity to new physics models, and improvements on that of LEP should come with the Run 2 data. W production in p$\overline{\mathrm{p}}$ collisions provided, before LEP-2, the only direct measurements of the W mass, using reconstructed electron and muon momenta and inferred missing momentum information. The main results from CDF and D0 from Run 1 data have been available for some time [@bib:mwtev]. D0 have recently updated their Run 1 results with a new analysis making use of electrons close to calorimeter cell edges [@bib:gerber]. The main importance of the extra data is to allow a better calorimeter calibration from Z events. Measurements of the W mass from the Tevatron are summarised in Table \[tab:mwhad\]. The high tail of the distribution of the transverse mass of the lepton-missing momentum system provides information about the W width. CDF finalised their Run 1 result ($\Gamma_{\mathrm{W}}=$ 2.05$\pm0.13$ GeV) [@bib:gwcdf] some time ago. D0 presented a new measurement using all the Run 1 data, of $\Gamma_{\mathrm{W}}=$ 2.23$\pm0.17$ GeV, at this conference [@bib:gerber]. The presence of the W and Z bosons is primarily probed at HERA via t-channel exchange. The charged and neutral current differential cross-sections as a function of $Q^2$ are shown in Figures \[fig:heracc\] and \[fig:heranc\] respectively. The charged current process proceeds only by W exchange, and is sensitive to the W mass via the propagator term (and also, indirectly, via the overall normalisation). The effect of Z exchange can be seen in the high-$Q^2$ neutral current region where it gives rise to a difference between the e$^-$p and e$^+$p cross-sections. Real W production may also have been observed at HERA, by looking for events with high transverse momentum electrons or muons, missing transverse momentum, and a recoiling hadronic system.
For transverse momenta of the recoiling hadronic system above 40 GeV, H1 and ZEUS together observe 6 events compared to an expectation of 2.0$\pm$0.3, which is 90% composed of W production and decay [@bib:whera]. These events have been interpreted as possible evidence of new physics, but within the framework of the Standard Model their natural interpretation is as W production. W Physics at LEP-2 ================== Each LEP experiment now has a sample of around 12000 W-pair events from the full LEP-2 data sample. Event selections are well established, and have needed only minor optimisations for the highest energy data. Typical selection performances give efficiencies and purities in the 80-90% range for almost all channels – channels with $\tau$ decays being the most challenging. The measured W-pair cross-section [@bib:sigww] is shown in Figure \[fig:sigww\], and compared to the predictions of the RacoonWW [@bib:racoon] and YFSWW [@bib:yfsww] Monte Carlo programs. These programs incorporate full $\Oalpha$ corrections to the doubly-resonant W-pair production diagrams, and give a cross-section approximately 2% lower than earlier predictions. The agreement can be tested by comparing the experimental and predicted cross-sections as a function of centre-of-mass energy. The new calculations describe the normalisation of the data well, the old ones over-estimate it by between two and three standard deviations of the experimental error [@bib:sigww]. The selected W-pair events are also used to measure the W decay branching ratios. The combined LEP results [@bib:sigww] are shown in Table \[tab:wbr\]. The leptonic results are consistent with lepton universality, and so are combined to measure the average leptonic branching ratio, corrected to massless charged leptons. This measurement now has a better than 1% relative error, and is consistent with the Standard Model expectation of 10.83%. 
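The 10.83% expectation quoted above follows from simple decay-channel counting with a QCD correction to the quark channels. A rough numerical check (the value of $\alpha_s$ at the W mass scale is an assumption here, not from the text):

```python
import math

alpha_s = 0.120   # assumed strong coupling at the W mass scale
# W decays: 3 lepton channels; 2 quark doublets x 3 colours, QCD-corrected
n_lep = 3.0
n_had = 6.0 * (1.0 + alpha_s / math.pi)
br_lepton = 1.0 / (n_lep + n_had)
print(br_lepton)   # ~0.108, close to the 10.83% Standard Model value
```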
It is significantly more precise than a value extracted from the Tevatron W and Z cross-section data, assuming Standard Model production of W’s, which is Br(W$\to\ell\nu)=10.43\pm0.25$% [@bib:wbrtev]. The W mass and width are measured above the W-pair threshold at LEP-2 by direct reconstruction of the W decay products [@bib:mwlepex], using measured lepton momenta and jet momenta and energies. Events with two hadronically decaying W’s (“$\Wqqqq$”), or where one W decays hadronically and the other leptonically (“$\Wqqlv$”), are used by all experiments. A kinematic fit is made to the reconstructed event quantities, constraining the total energy and momentum to be that of the colliding beam particles, thus reconstructing the unobserved neutrino in mixed hadronic-leptonic decay events. This fit significantly improves the resolution on the W mass. The reconstructed mass distributions can be fitted to obtain the W mass, or the W mass and width together. Other, more complicated, techniques to extract the most W mass information from the fitted events are used by some experiments. ALEPH and OPAL also use the small amount of information contained in $\Wlvlv$ events, which has been included in the $\Wqqlv$ results quoted. After the kinematic fit, the W mass statistical sensitivity is very similar for the two event types. The systematic error sources are largely different between the two channels: the main correlated systematics come from the knowledge of the LEP beam energy, and hadronisation modelling. The W mass measurements obtained by the four LEP experiments, and averaged by channel, are shown in table \[tab:lepmw\]. There is good consistency between all the measurements, and the overall precision [@bib:mwlep] now improves significantly on the 60 MeV from hadron colliders. If the W width is also fitted, the W mass measurement is essentially unchanged, and a LEP combined value of $\Gamma_{\mathrm{W}}=2.150\pm0.091$ GeV is found. 
The 39 MeV error on the combined LEP result includes 26 MeV statistical and 30 MeV systematic contributions. Systematic errors are larger in the $\Wqqqq$ channel (see Table \[tab:lepmw\]), having the effect of deweighting that channel, to just 27%, in the average. With no systematic errors this deweighting would not occur, and the statistical error would be 22 MeV. The main systematic errors on the combined result are as follows [@bib:mwlep]: The LEP beam energy measurement contributes a highly correlated 17 MeV to all channels; hadronisation modelling uncertainties contribute another 17 MeV; “final-state interactions” (FSI) between the hadronic decay products of two W’s contribute 13 MeV; detector-related uncertainties – different for the different experiments – contribute 10 MeV; and uncertainties on photonic corrections contribute 8 MeV. The main improvements that are expected before the results are finalised lie in the areas of the LEP beam energy, where a concerted programme is in progress to reduce the error, and the final-state interactions. The basic physical problem which gives rise to the uncertainty over final-state interactions is that when two W’s in the same event both decay hadronically, the decay distance is smaller than typical hadronisation scales. The hadronisation of the two systems may therefore not be independent, and so hadronisation models tuned to Z$\to\mathrm{q}\overline{\mathrm{q}}$ decays may not properly describe them. Phenomenological models are used to study possible effects, subdividing them into “colour reconnection” in the parton-shower phase of the Monte Carlo models, and possible Bose-Einstein correlations between identical particles formed in the hadronisation process. A substantial effort has been spent in understanding the possible effects of FSI models.
Recent work, in a collaborative effort between all four LEP experiments, has focused on determining the common sensitivity to different models between different experiments, and on developing ways to measure visible effects predicted by the models. Sensitivity to the effect of colour reconnection models has been obtained by studying the particle flow between jets in $\Wqqqq$ events [@bib:cr]. This is illustrated in Figure \[fig:cr\]. The data show some sensitivity to the effects as predicted in the colour reconnection models, and work continues to combine results from the four LEP experiments to improve the sensitivity. Bose-Einstein correlations are also being studied in data [@bib:bec], in this case by comparing the two-particle correlation functions, $\rho$, for single hadronically decaying W’s in $\Wqqlv$ events ($\rho^\mathrm{W}$), and for $\Wqqqq$ events ($\rho^{\mathrm{W}\mathrm{W}}$). This may be expressed as [@bib:chekanov]: $$\rho^{\mathrm{W}\mathrm{W}}(Q) = 2 \rho^{\mathrm{W}}(Q) + \rho_{mix}^{\mathrm{W}\mathrm{W}}(Q) + \Delta\rho(Q)$$ where $\rho_{mix}^{\mathrm{W}\mathrm{W}}$ is evaluated from mixing hadronic W decays from $\Wqqlv$ decays, and $\Delta\rho$ is any extra part arising from correlations between particles from different W decays in $\Wqqqq$ events. Alternatively the ratio $D(Q)$ may be examined: $$D(Q) \equiv \frac{\rho^{\mathrm{W}\mathrm{W}}(Q)}{2 \rho^{\mathrm{W}}(Q) + \rho_{mix}^{\mathrm{W}\mathrm{W}}(Q)} .$$ An observed $D(Q)$ distribution is shown in Figure \[fig:bec\]: a deviation from unity at low $Q$ would most clearly signal the effect of Bose-Einstein correlations between particles from different W’s. As illustrated in this figure, no evidence is observed of such an effect. As for colour reconnection, work is in progress to derive combined LEP results in order better to constrain the possible effect on the W mass measurement. 
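At their simplest, the channel and experiment combinations discussed in this section are inverse-variance weighted averages (the full LEP combination also treats correlated systematics, which is what produces the deweighting described above). A minimal sketch with illustrative inputs at roughly the LEP and hadron-collider precisions; the central values here are assumed placeholders:

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of (value, error) pairs."""
    weights = [1.0 / err**2 for _, err in measurements]
    wsum = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / wsum
    return value, 1.0 / math.sqrt(wsum)

# Illustrative inputs (assumed, close to the precisions quoted in the text):
mw_lep = (80.450, 0.039)   # LEP direct reconstruction
mw_had = (80.454, 0.060)   # hadron-collider average
value, error = combine([mw_lep, mw_had])
print(value, error)   # combined error ~0.033 GeV
```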
When the LEP measurement of $\mw$, given in Table \[tab:lepmw\], is combined with that from hadron colliders as given in Table \[tab:mwhad\], a world average W mass of $80.451\pm0.033$ GeV is obtained. A similar combination of W width results gives $\Gamma_{\mathrm{W}}=2.134\pm0.069$ GeV. Tests of the Gauge Couplings of Vector Bosons ============================================= The gauge group of the Standard Model dictates the self-couplings of the vector bosons, both in form and strength. The direct measurement of these couplings therefore provides a fundamental test of the Standard Model gauge structure. Electroweak gauge couplings have been measured directly at both LEP and the Tevatron: at present constraints from LEP are more stringent. W-pair production at LEP-2 involves the triple gauge coupling vertex in two of the three lowest-order doubly-resonant diagrams. Sensitivity to possible anomalous couplings is found in the W-pair cross-section, and the W production and decay angle distributions. Measurements have been reported at previous conferences [@bib:cctgcosaka], but no combined LEP results have been released recently because [@bib:racoon; @bib:kandy] higher-order corrections, previously neglected, are thought to be comparable to the current experimental precision [@bib:villa]. Other measurements of triple gauge boson couplings are made at LEP-2 [@bib:nctgc] in the neutral vector boson processes of Z$\gamma$ and ZZ production. The cross-section measured for the latter process is shown in Figure \[fig:sigzz\] and is well-described by Standard Model predictions. Measurements of quartic gauge couplings have also been made at LEP-2, and were discussed in detail in other contributions to this conference [@bib:qgchere]. Global Electroweak Tests ======================== Many of the individual results reported in preceding sections may be used together to provide a global test of consistency with the Standard Model.
If consistency with the model is observed, it is justifiable to go on to deduce, in the framework of the Standard Model, the unknown remaining parameter, the mass of the Higgs boson, $\mh$. The LEP electroweak working group has, for a number of years, carried out such global tests via a combined fit to a large number of measurements sensitive to Standard Model parameters. These results are reported here for the data available at this conference. These global fits use the electroweak libraries ZFITTER version 6.36 [@bib:zfitter] and TOPAZ0 version 4.4 [@bib:topaz0] to provide the Standard Model predictions. Theoretical uncertainties are included following detailed studies of missing higher order electroweak corrections and their interplay with QCD corrections [@bib:precew]. The precise LEP, SLD and Tevatron electroweak data are included, as are $\sin^2\theta_W$ as measured in neutrino-nucleon (“$\nu$N”) scattering[^1] [@bib:nuN] and, new this year, atomic parity violation (“APV”) measurements in caesium [@bib:apv]. Before making the full fit, the precise electroweak data from LEP and SLD can be used together with $\alpha{(M_{\mathrm{Z}}^2)}$, the $\nu$N and APV results to predict the masses of the top quark, $\mtop$, and of the W, $\mw$. The result obtained is shown in Figure \[fig:mtopmw\] by the solid (red) contour. Also shown are the direct measurements (dotted/green contour) of $\mtop=174.3\pm5.1$ GeV from the Tevatron [@bib:mtop] and $\mw=80.451\pm0.033$ GeV obtained by combining LEP and hadron collider results; and the expected relationship between $m_{\mathrm{W}}$ and $m_{\mathrm{top}}$ in the Standard Model for different $\mh$ (shaded/yellow). It can be seen that the precise input data predict values of $\mtop$ and $\mw$ consistent with those observed – in both cases within two standard deviations – demonstrating that the electroweak corrections can correctly predict the mass of heavy particles.
For the W, the precision of the prediction via the Standard Model fit is similar to that of the direct measurement. For the top mass, the measurement is twice as precise as the prediction. It is observed in addition that both the precise input data and the direct $\mw$/$\mtop$ measurements favour a light Higgs boson rather than a heavy one. Going further, the full fit is made including also the $\mtop$ and $\mw$ measurements. The overall $\chi^2$ of the fit is 22.9 for 15 degrees of freedom, corresponding to an 8.6% probability. To provide an impression of the contributions to this $\chi^2$, the best-fit value of each input datum is compared with the actual measurement, and the pull calculated as the difference between observation and best-fit divided by the measurement error. The results are shown in Figure \[fig:pulls\]. The poorest description is of $\afbb$, which is a reflection of the same disagreement discussed earlier in Section \[sec:asy\]. The best fit value of the Higgs mass is $\mh=88_{-35}^{+53}$ GeV, where the error is asymmetric because the leading corrections depend on $\log\mh$. The variation above the minimum value of the $\chi^2$ as a function of the mass of the Higgs boson, $m_{\mathrm{H}}$, is shown in Figure \[fig:blueband\]. The darker shaded/blue band enclosing the $\chi^2$ curve provides an estimate of the theoretical uncertainty on the shape of the curve. This band is a little broader than previously estimated because of the inclusion of a new higher-order (fermionic two-loop) calculation of $\mw$ [@bib:weiglein]. This has little effect via $\mw$ but does have an impact via $\sintwl=\kappa_W(1-\mw^2/\mz^2)$. This latter effect is controversial, and may well overestimate the true theoretical uncertainty, but it is currently included as equivalent two-loop calculations for Z widths and the effective mixing angle are not available. 
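The pulls shown in Figure \[fig:pulls\] are simply the difference between measurement and best-fit value divided by the measurement error. A toy illustration (the best-fit value used here is a hypothetical placeholder near the Standard Model expectation, not the actual fit output):

```python
def pull(measured, fitted, error):
    """Pull of one input datum with respect to the global best fit."""
    return (measured - fitted) / error

# A_FB^{0,b} from the text, with an assumed illustrative best-fit value:
print(pull(0.0990, 0.1036, 0.0017))   # a pull close to -2.7
```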
The $\chi^2$ curve may be used to derive a constraint on the Standard Model Higgs boson mass, namely $m_{\mathrm{H}} < 196$ GeV at 95% C.L. Also shown in the Figure is the effect of using an alternative theory-driven estimate of the hadronic corrections to $\dahad(M_{\mathrm{Z}}^2)$ [@bib:martin] (dashed curve). The effect on the $\mh$ prediction, for example, is sizable compared to the theoretical uncertainty. The 95% C.L. upper limit on $\mh$ moves to 222 GeV with this $\dahad$ estimate. A Forward Look, and Conclusions =============================== The eleven years of data-taking by the LEP experiments, plus the contributions of SLD, have established that Standard Model radiative corrections describe precision electroweak measurements. Data analysis is close to complete on the LEP-1 data, taken from 1989-1995. Work continues to finish LEP-2 analyses, and final results can be expected over the next couple of years. Improvements can still be expected in the W mass measurement, from better understanding of final-state interaction effects in particular, and in gauge-coupling measurements where the full data sample is not yet included. At the Tevatron, Run 2 data-taking has recently begun. Although luminosities are so far low, the expectation remains of accumulating 2 fb$^{-1}$ in the next couple of years, which should allow a W mass measurement with 30 MeV precision from each experiment [@bib:tevprospects], and a top mass measured to $\pm$3 GeV. Combining the former result with the final $\mw$ results from LEP-2 should provide a world average W mass measurement error close to 20 MeV. The effect such improvements could have, for example on the global fit $\Delta\chi^2$ as a function of $\mh$, is shown in Figure \[fig:forward\] (the central value of $\mh$ employed for the future is, of course, arbitrarily selected). Further substantial improvements in precision will have to wait for the LHC and a future linear collider.
The LHC should improve the W and top mass precisions by a further factor two. The main improvement would, of course, come from a discovery of the Higgs boson, and a direct indication of whether it is the simplest Standard Model particle. In summary, precise tests of the electroweak sector of the Standard Model have been made by a wide range of experiments, from the g-2 measurement in muon decays to LEP and the Tevatron. Many of these tests have a high sensitivity to radiative corrections, and the radiative correction structure is now rather well-established. Two and three-loop calculations are essential in making sufficiently precise predictions for some processes, and more progress is still needed. A small number of measurements, for example the measurement of $\sintwl$ from the b forward-backward asymmetry at LEP, show two or three standard deviation differences from expectation which might point to possible cracks in the Standard Model description, but none are compelling at present. Further improvements in the quality of tests will arrive slowly over the next few years: in particular further elucidation of the electroweak symmetry-breaking mechanism will likely have to await an improved discovery reach for a Higgs boson. Acknowledgments {#acknowledgments .unnumbered} =============== The preparation of this talk was greatly eased by the work of the LEP electroweak working group, and cross-LEP working groups on the W mass, gauge coupling and fermion-pair measurements. In particular, I thank Martin Grünewald for his unstinting help, and Chris Hawkes for comments on this manuscript. I also benefitted from the assistance of P. Antilogus, E. Barberio, A. Bodek, D. Cavalli, G. Chiarelli, G. Cvetic, Y.S. Chung, M. Elsing, C. Gerber, F. Gianotti, R. Hawkings, G.S. Hi, J. Holt, F. Jegerlehner, M. Kuze, I. Logashenko, K. Long, W. Menges, K. Mönig, A. Moutoussi, C. Parkes, B. Pietrzyk, R. Tenchini, J. Timmermans, A. Valassi, W. Venus, H. Voss, P. Wells, F. 
Yndurain and Z.G. Zhao. [99]{} J.Z.Bai , BES Collaboration, ; J.C.Chen, these proceedings. See, for example, R.R.Akhmetshin , CMD-2 Collaboration, . H.Burkhardt and B.Pietrzyk, [preprint LAPP-EXP 2001-03](http://wwwlapp.in2p3.fr/preplapp/LAPP_EX2001_03.pdf), to appear in Physics Letters. A.D.Martin, J.Outhwaite and M.G.Ryskin, . H.N.Brown , Muon g-2 Collaboration, ; I.Logashenko, these proceedings. D.H.Brown and W.A.Worstell, . R.Alemany, M.Davier and A.Höcker, . M.Davier and A.Höcker, . M.Davier and A.Höcker, . S.Narison, . F.Jegerlehner, . J.F.de Troconiz and F.J.Yndurain, . G.Cvetic, T.Lee and I.Schmidt, . J.Bailey , ;\ R.M.Carey , Muon g-2 Collaboration, ; H.N.Brown , Muon g-2 Collaboration, . R.Barate , ALEPH Collaboration, ;\ P.Abreu , DELPHI Collaboration, ;\ M.Acciarri , L3 Collaboration, ;\ G.Abbiendi , OPAL Collaboration, ;\ M.Paganoni, these proceedings. K.Abe , SLD Collaboration, ; K.Abe , SLD Collaboration, ; V.Serbo, these proceedings. ALEPH Collaboration, ALEPH 2001-026 CONF 2001-020;\ DELPHI Collaboration, DELPHI 2001-027 CONF 468;\ P.Hansen, these proceedings. D.Buskulic , ALEPH Collaboration, ; ALEPH Collaboration, ALEPH 96-097 CONF 98-037;\ P.Abreu , DELPHI Collaboration, ;\ M.Acciarri , L3 Collaboration, ;\ G.Abbiendi , OPAL Collaboration, ;\ M.Casado, these proceedings. See, for example, P.Hansen, these proceedings. D.Buskulic , ALEPH Collaboration, ;\ P.Abreu , DELPHI Collaboration, ;\ M.Acciarri , L3 Collaboration, ;\ P.D.Acton , OPAL Collaboration, . J.Holt, these proceedings; LEPEWWG f$\overline{\mathrm{f}}$ subgroup, [note LEP2FF/01-02](http://lepewwg.web.cern.ch/LEPEWWG/lep2/summer2001/summer2001.ps). T.Affolder , CDF Collaboration, . C.Gerber, these proceedings. T.Affolder , CDF Collaboration, ;\ S.Abachi , D0 Collaboration, . T.Affolder , CDF Collaboration, .
C.Adloff , H1 Collaboration, ; H1 Collaboration, paper EPS-2001-787 submitted to this conference;\ ZEUS Collaboration, papers EPS-2001-631 and EPS-2001-633 submitted to this conference. C.Adloff , H1 Collaboration, ; H1 Collaboration, paper EPS-2001-787 submitted to this conference;\ ZEUS Collaboration, papers EPS-2001-630 and EPS-2001-632 submitted to this conference. H1 Collaboration, paper EPS-2001-802 submitted to this conference. The LEP Collaborations and the LEP WW Working Group, [note LEPEWWG/XSEC/2001-03](http://lepewwg.web.cern.ch/LEPEWWG/lepww/4f/Summer01/4f_s01_main.ps.gz); R. Chierici, these proceedings. A.Denner, S.Dittmaier, M.Roth and D.Wackeroth, ; M.Roth, these proceedings. S.Jadach , . S. Eno , note CDF/ANAL/ELECTROWEAK/CDFR/5139, D0note 3693. ALEPH Collaboration, ALEPH 2001-020 CONF 2001-017;\ DELPHI Collaboration, DELPHI 2001-103 CONF 531;\ L3 Collaboration, L3 Note 2637;\ OPAL Collaboration, OPAL Physics Notes PN422 and PN480;\ H.Ruiz, these proceedings. The LEP Collaborations and the LEP W Working Group, [note LEPEWWG/MASS/2001-02](http://lepewwg.web.cern.ch/LEPEWWG/lepww/mw/Summer01/mw_main.ps.gz); H.Ruiz, these proceedings. L3 Collaboration, L3 Note 2683. D.Duchesneau, these proceedings. DELPHI Collaboration, DELPHI 2001-060 CONF 488. O.Pooth, these proceedings. S.V.Chekanov, E.A.de Wolf and W.Kittel, . See, for example, S.Jezequel, in [*30th International Conference on High Energy Physics*]{}, Ed. by C.Lim and T.Yamanaka. S.Jadach , . S.Villa, these proceedings. The LEP Collaborations and the LEP WW Working Group, [note LEPEWWG/XSEC/2001-03](http://lepewwg.web.cern.ch/LEPEWWG/lepww/4f/Summer01/4f_s01_main.ps.gz); H.Rick, these proceedings. S.Jadach, W.Placzek and B.F.L.Ward, ;\ G.Passarino, in . A.Oh, these proceedings. F.Piccinini, these proceedings; M.Biglietti, these proceedings. D.Y.Bardin . . G.Montagna , . [CERN Yellow Report 95-03](http://www-spires.dur.ac.uk/cgi-bin/spiface/find/hep/www?rawcmd=find+rn+cern-95-03), eds. 
D.Bardin, W.Hollik and G.Passarino; D.Bardin, M.Grünewald and G.Passarino, . K.McFarland , CCFR/NuTeV Collaboration, ; K.McFarland for the NuTeV Collaboration, . G.P.Zeller , . C.S.Wood , ;\ S.C.Bennett and C.E.Wieman, ;\ A.Derevianko, ;\ M.G.Kozlov, S.G.Porsev and I.I.Tupitsyn, . L.Demortier , The Top Averaging Group for the CDF and D0 Collaborations, preprint FERMILAB-TM-2084. A.Freitas, W.Hollik, W.Walter, G.Weiglein, . See, for example, G.Chiarelli, these proceedings. [^1]: A new $\nu$N scattering result was reported by the NuTeV Collaboration [@bib:newnutev] during the final stage of preparation of this contribution. The $\sin^2\theta_{\mathrm{W}}$ result obtained differs from the expected value by three standard deviations.
--- abstract: 'We study the effects of weak columnar and point disorder on the vortex-lattice phase transitions in high temperature superconductors. The combined effect of thermal fluctuations and of quenched disorder is investigated using a simplified cage model. For columnar disorder the problem maps into a quantum particle in a harmonic + random potential. We use the variational approximation to show that columnar and point disorder have opposite effects on the position of the melting line, as observed experimentally. Replica symmetry breaking plays a role at the transition into a vortex glass at low temperatures.' address: | Department of Physics and Astronomy\ University of Pittsburgh\ Pittsburgh, PA 15260 author: - 'Yadin Y. Goldschmidt' date: 'August 4, 1996' title: ' [**Phase Transitions of the Flux Line Lattice in High-Temperature Superconductors with Weak Columnar and Point Disorder**]{} ' --- There is a lot of interest in the physics of high temperature superconductors due to their potential technological applications. In particular these materials are of type II and allow for partial magnetic flux penetration. Pinning of the magnetic flux lines (FL) by many types of disorder is essential to eliminate dissipative losses associated with flux motion. In clean materials below the superconducting temperature there exists a ’solid’ phase where the vortex lines form a triangular Abrikosov lattice [@blatter]. This solid can melt due to thermal fluctuations and the effect of impurities. In particular, known observed transitions are into a flux liquid at higher temperatures via a [*melting line*]{} (ML) [@zeldov], and into a vortex glass at low temperature [@VG; @Fisher; @BG] in the presence of disorder, the so-called [*entanglement line*]{} (EL) [@blatter]. Recently the effect of point and columnar disorder on the position of the melting transition has been measured experimentally in the high-$T_c$ material $Bi_2Sr_2CaCu_2O_8$ [@Khaykovitch].
Point disorder has been induced by electron irradiation (with 2.5 MeV electrons), whereas columnar disorder has been induced by heavy ion irradiation (1 GeV Xe or 0.9 GeV Pb). It turns out that the flux melting transition persists in the presence of either type of disorder, but its position shifts depending on the disorder type and strength. A significant difference has been observed between the effects of columnar and point disorder on the location of the ML. Weak columnar defects stabilize the solid phase with respect to the vortex liquid phase and shift the transition to [*higher*]{} fields, whereas point-like disorder destabilizes the vortex lattice and shifts the melting transition to [*lower*]{} fields. In this paper we attempt to provide an explanation of this observation. The case of point defects has been addressed in a recent paper by Ertas and Nelson [@EN] using the cage-model approach, which replaces the effect of vortex-vortex interactions by a harmonic potential felt by a single vortex. For columnar disorder the parabolic cage model was introduced by Nelson and Vinokur [@nelson]. Here we use a different approach to analyze the cage-model Hamiltonian, viz. the replica method together with the variational approximation. In the case of columnar defects our approach relies on our recent analysis of a quantum particle in a random potential [@yygold]. We compare the effect of the two types of disorder with each other and with results of recent experiments. Assume that the average magnetic field is aligned along the $z$-axis. Following EN we describe the Hamiltonian of a single FL, whose position is given by a two-component vector ${\bf r}(z)$ (overhangs are neglected), by: $$\begin{aligned} {\cal H} = \int_0^L dz \left\{ {\frac{\tilde{\epsilon} }{2}} \left({\frac{ d{\bf r }}{dz}} \right)^2 + V(z,{\bf r }) + {\frac{\mu }{2}} {\bf r }^2 \right\}.
\label{hamil}\end{aligned}$$ Here $\tilde \epsilon =\epsilon _0/\gamma ^2$ is the line tension of the FL, $\gamma ^2=m_z/m_{\perp }$ is the mass anisotropy, $\epsilon _0=(\Phi _0/4\pi \lambda )^2$ ($\lambda $ being the penetration length), and $\mu \approx \epsilon _0/a_0^2$ is the effective spring constant (setting the cage size) due to interactions with neighboring FLs, which are at a typical distance of $a_0=\sqrt{\Phi _0/B}$ apart. For the case of columnar (or correlated) disorder, $V(z,{\bf r})=V({\bf r})$ is independent of $z$, and $$\begin{aligned} \langle V({\bf r})V({\bf r^{\prime }})\rangle \equiv -2f(({\bf r}-{\bf r^{\prime }})^2/2)=g\epsilon _0^2\xi ^2\delta _\xi ^{(2)}({\bf r}-{\bf r^{\prime }}), \label{VVC}\end{aligned}$$ where $$\begin{aligned} \delta _\xi ^{(2)}({\bf r}-{\bf r^{\prime }})\approx 1/(2\pi \xi ^2)\exp (-({\bf r}-{\bf r^{\prime }})^2/2\xi ^2), \label{delta}\end{aligned}$$ and $\xi $ is the vortex core diameter. The dimensionless parameter $g$ is a measure of the strength of the disorder. On the other hand, for point disorder $V$ depends on $z$ and [@EN] $$\begin{aligned} \langle V(z,{\bf r})V(z^{\prime },{\bf r^{\prime }})\rangle =\tilde \Delta \epsilon _0^2\xi ^3\delta _\xi ^{(2)}({\bf r}-{\bf r^{\prime }})\delta (z-z^{\prime }). \label{VVP}\end{aligned}$$ The quantity that measures the transverse excursion of the FL is $$\begin{aligned} u_0^2(\ell )\equiv \langle |{\bf r}(z)-{\bf r}(z+\ell )|^2\rangle \ /2. \label{ul}\end{aligned}$$ Let us now review the connection between a quantum particle in a random potential and the behavior of a FL in a superconductor. The partition function of the former is just like the partition sum of the FL, provided one makes the identification [@nelson] $$\begin{aligned} \hbar \rightarrow T,\qquad \beta \hbar \rightarrow L, \label{corresp}\end{aligned}$$ where $T$ is the temperature of the superconductor and $L$ is the system size in the $z$-direction.
$\beta $ is the inverse temperature of the quantum particle. We are interested in large fixed $L$ as $T$ is varied, which corresponds to high $\beta $ for the quantum particle when $\hbar $ (or alternatively the mass of the particle) is varied. The variable $z$ is the so-called Trotter time. This is the picture we will be using for the case of columnar disorder. For the case of point disorder the picture we use is that of a directed polymer in the presence of a random potential plus a harmonic potential, as used by EN. The main effect of the harmonic (or cage) potential is to cap the transverse excursions of the FL beyond a confinement length $\ell ^{*}\approx a_0/\gamma $. The mean square displacement of the flux line is given by $$u^2(T)\approx u_0^2(\ell ^{*}). \label{uT}$$ The location of the melting line is determined by the Lindemann criterion $$u^2(T_m(B))=c_L^2a_0^2, \label{Lind}$$ where $c_L\approx 0.15-0.2$ is the phenomenological Lindemann constant. This means that when the transverse excursion of a section of length $\approx \ell ^{*}$ becomes comparable to a finite fraction of the interline separation $a_0$, the melting of the flux solid occurs. We consider first the case of columnar disorder. In the absence of disorder it is easily obtained from standard quantum mechanics and the correspondence (\[corresp\]) that, when $L\rightarrow \infty ,$ $$u^2(T)=\frac T{\sqrt{\widetilde{\epsilon }\mu }}\left( 1-\exp (-\ell ^{*}\sqrt{\mu /\widetilde{\epsilon }})\right) =\frac T{\sqrt{\widetilde{\epsilon }\mu }}(1-e^{-1}), \label{u2g0}$$ from which we find that $$B_m(T)\approx \frac{\Phi _0^{}}{\xi ^2}\frac{\epsilon _0^2\xi ^2c_L^4}{\gamma ^2T^2}. \label{Bmg0}$$ When we turn on disorder we have to solve the problem of a quantum particle in a random quenched potential. This problem has been recently solved using the replica method and the variational approximation [@yygold]. Let us review briefly the results of this approach.
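Before turning to the disordered problem, eqs. (\[u2g0\]) and (\[Lind\]) can be checked with a short numerical sketch (ours, not from the original paper). It assumes the rescaled units adopted later in the text (lengths in units of $\xi$, temperature in units of $\epsilon_0\xi$, field in units of $\Phi_0/\xi^2$, and $\gamma=1$, so that $\widetilde{\epsilon}=1$ and $\mu=B$); the factor $(1-e^{-1})$ is kept explicit, so the resulting melting line reproduces eq. (\[Bmg0\]) up to this $O(1)$ factor.

```python
import numpy as np

# Rescaled units: eps_tilde = 1, mu = B, a0^2 = 1/B, gamma = 1.
C_L = 0.15                          # phenomenological Lindemann constant

def u2_clean(T, B):
    """Eq. (u2g0): clean-limit mean-square displacement."""
    return (T / np.sqrt(B)) * (1.0 - np.exp(-1.0))

def B_melt(T):
    """Solve the Lindemann criterion u2_clean(T, B) = c_L^2 / B for B."""
    return C_L**4 / (T**2 * (1.0 - np.exp(-1.0))**2)

T = 0.05
B = B_melt(T)
print(np.isclose(u2_clean(T, B), C_L**2 / B))   # True on the melting line
```

As in eq. (\[Bmg0\]), the melting field falls off as $c_L^4/T^2$.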
In this approximation we choose the best quadratic Hamiltonian, parametrized by the matrix $s_{ab}(z-z^{\prime })$: $$\begin{aligned} h_n &=&\frac 12\int_0^Ldz\sum_a[\widetilde{\epsilon }{\bf \dot r}_a^2+\mu {\bf r}_a^2] \nonumber \\ &&-\frac 1{2T}\int_0^Ldz\int_0^Ldz^{\prime }\sum_{a,b}s_{ab}(z-z^{\prime }){\bf r}_a(z)\cdot {\bf r}_b(z^{\prime }). \label{hn}\end{aligned}$$ Here the replica index $a=1\ldots n$, and $n\rightarrow 0$ at the end of the calculation. This Hamiltonian is determined by stationarity of the variational free energy, which is given by $$\left\langle F\right\rangle _R/T=\left\langle H_n-h_n\right\rangle _{h_n}-\ln \int [d{\bf r}]\exp (-h_n/T), \label{FV}$$ where $H_n$ is the exact $n$-body replicated Hamiltonian. The off-diagonal elements of $s_{ab}$ can consistently be taken to be independent of $z$, whereas the diagonal elements are $z$-dependent. It is more convenient to work in frequency space, where $\omega $ is the frequency conjugate to $z$: $\omega _j=(2\pi /L)j$, with $j=0,\pm 1,\pm 2,\ldots $. Assuming replica symmetry, which is valid only for part of the temperature range, we can denote the off-diagonal elements of $\widetilde{s}_{ab}(\omega )=(1/T)\int_0^Ldz\ e^{i\omega z}\ s_{ab}(z)$ by $\widetilde{s}(\omega )=\widetilde{s}\delta _{\omega ,0}$. Denoting the diagonal elements by $\widetilde{s}_d(\omega )$, the variational equations become: $$\begin{aligned} \tilde s &=&2\frac LT\widehat{f}\ ^{\prime }\left( {\frac{2T}{\mu L}}+{\frac{2T}L}\sum_{\omega ^{\prime }\neq 0}\frac 1{\widetilde{\epsilon }\ \omega ^{\prime }\,^2+\mu -\widetilde{s}_d(\omega ^{\prime })}\right) \label{s} \\ \tilde s_d(\omega ) &=&\tilde s-{\frac 2T}\int_0^Ld\zeta \ (1-e^{i\omega \zeta })\times \nonumber \\ &&\ \ \widehat{f}\ ^{\prime }\left( {\frac{2T}L}\sum_{\omega ^{\prime }\neq 0}\ \frac{1-e^{-i\omega ^{\prime }\zeta }}{\widetilde{\epsilon \ }\omega ^{\prime }\,^2+\mu -\widetilde{s}_d(\omega ^{\prime })}\right) .
\label{sd}\end{aligned}$$ Here $\widehat{f}\ ^{\prime }(y)$ denotes the derivative of the “dressed” function $\widehat{f}(y)$, which is obtained in the variational scheme from the random potential’s correlation function $f(y)$ (see eq. (\[VVC\])), and in 2+1 dimensions is given by: $$\widehat{f}(y)=-\frac{g\epsilon _0^2\xi ^2}{4\pi }\frac 1{\xi ^2+y}. \label{f}$$ The full equations, taking into account the possibility of replica-symmetry breaking, are given in ref. [@yygold]. In terms of the variational parameters the function $u_0^2(\ell ^{*})$ is given by $$u_0^2(\ell ^{*})={\frac{2T}L}\sum_{\omega ^{\prime }\neq 0}\frac{1-\cos (\omega ^{\prime }\ell ^{*})}{\widetilde{\epsilon \ }\omega ^{\prime }\,^2+\mu -\widetilde{s}_d(\omega ^{\prime })}. \label{u2qp}$$ This quantity has not been calculated in ref. [@yygold]; there we calculated $\left\langle {\bf r}^2(0)\right\rangle $, which does not measure correlations along the $z$-direction. In the limit $L\rightarrow \infty $ we were able to solve the equations analytically to leading order in $g$. In that limit eq. (\[sd\]) becomes (for $\omega \neq 0$): $$\begin{aligned} \tilde s_d(\omega ) &=&\frac 4\mu \widehat{f}\ ^{\prime \prime }(b_0)-\frac 2T\int_0^\infty d\varsigma (1-\cos (\omega \varsigma )) \nonumber \\ &&\times (\widehat{f}\ ^{\prime }(C_0(\varsigma ))-\widehat{f}\ ^{\prime }(b_0)), \label{sdi}\end{aligned}$$ with $$C_0(\varsigma )=2T\int_{-\infty }^\infty \frac{d\omega }{2\pi }\frac{1-\cos (\omega \varsigma )}{\widetilde{\epsilon \ }\omega \,^2+\mu -\widetilde{s}_d(\omega )} \label{C0}$$ and $b_0$ given by a similar expression with the cosine term missing in the numerator of eq. (\[C0\]).
Defining $$\begin{aligned} \tau &=&T\ /\sqrt{\widetilde{\epsilon }\ \mu },\ \alpha =\tau \ /(\xi ^2+\tau ), \label{tau,al} \\ f_1(\alpha ) &=&1/(1-\alpha )-(1/\alpha )\log (1-\alpha ), \label{f1} \\ f_2(\alpha ) &=&\frac 1\alpha \sum_{k=1}^\infty (k+1)\alpha ^k/k^3, \label{f2} \\ a^2 &=&f_1(\alpha )/f_2(\alpha ),\ A=-\widehat{f}\ ^{\prime \prime }(\tau )\ f_1^2(\alpha )/f_2(\alpha )/\mu , \label{a2,A} \\ s_\infty &=&\widehat{f}\ ^{\prime \prime }(\tau )\ (4+f_1(\alpha ))/\mu , \label{sinf}\end{aligned}$$ a good representation of $\widetilde{s}_d(\omega ),\ (\omega \neq 0)$, with the correct behavior at low and high frequencies, is $$\widetilde{s}_d(\omega )=s_\infty +A\mu /(\widetilde{\epsilon \ }\omega ^2+a^2\mu ). \label{sde}$$ (notice that this function is negative for all $\omega $). Substituting in eq. (\[C0\]) and expanding the denominator to leading order in the strength of the disorder, we get: $$\begin{aligned} u_0^2(\ell ) &=&C_0(\ell )=\tau (1-A/(a^2-1)^2/\mu ) \nonumber \\ &&\ \times (1-e^{-\ell /\ell ^{*}})+\tau A/(a(a^2-1)^2\mu )\times \nonumber \\ &&(1-e^{-a\ell /\ell ^{*}})+\tau /(2\mu )\times \ (s_\infty +A/(a^2-1)) \nonumber \\ &&\times \ (1-e^{-\ell /\ell ^{*}}-(\ell /\ell ^{*})\ e^{-\ell /\ell ^{*}}). \label{u2f}\end{aligned}$$ In order to plot the results we measure all distances in units of $\xi $, we measure the temperature in units of $\epsilon _0\xi $, and the magnetic field in units of $\Phi _0/\xi ^2$. We observe that in the rescaled units the spring constant $\mu $ is given by $B$, and $a_0=1/\sqrt{B}$. We further use $\gamma =1$ for the plots. Fig. 1 shows a plot of $\sqrt{u_0^2(\ell ^{*})}/a_0$ vs. $T$ for zero disorder (curve a) as well as for $g/2\pi =0.02$ (curve b). We have chosen $B=1/900$. We see that the disorder tends to align the flux lines along the columnar defects, hence decreasing $u^2(T)$. Technically this happens since $\widetilde{s}_d(\omega )$ is negative.
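The auxiliary functions (\[f1\]) and (\[f2\]) are easy to evaluate numerically; here is a minimal sketch (ours; the series truncation `kmax` is our choice, justified since the terms decay like $\alpha^{k}/k^{2}$). Both functions tend to 2 as $\alpha\to 0$, so $a^{2}=f_{1}/f_{2}\to 1$, which is why the $(a^{2}-1)$ denominators in eq. (\[u2f\]) become delicate at very low temperature.

```python
import numpy as np

def f1(alpha):
    """Eq. (f1): 1/(1-alpha) - log(1-alpha)/alpha."""
    return 1.0 / (1.0 - alpha) - np.log(1.0 - alpha) / alpha

def f2(alpha, kmax=400):
    """Eq. (f2), truncated series; terms decay like alpha^k/k^2."""
    k = np.arange(1, kmax + 1)
    return np.sum((k + 1) * alpha**k / k**3) / alpha

for alpha in (1e-4, 0.3, 0.6):
    print(alpha, f1(alpha), f2(alpha), f1(alpha) / f2(alpha))
```

For moderate $\alpha$ (i.e., moderate $\tau$) one finds $a^{2}>1$, so the representation (\[sde\]) is well defined away from the low-temperature limit.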
The horizontal line represents a possible Lindemann constant of 0.15. In Fig. 2 we show the modified melting line $B_m(T)$ in the presence of columnar disorder. This is obtained from eq. (\[Lind\]) with $c_L=0.15$. We see that it shifts towards higher magnetic fields. For $T<T_c\approx (\epsilon _0\xi /\gamma )[g^2\epsilon _0/(16\pi ^2\mu \xi ^2)]^{1/6}$, there is a solution with RSB but we will not pursue it further in this paper. This temperature is at the bottom of the range plotted in the figures for columnar disorder. We will pursue the RSB solution only for the case of point disorder, see below. The expression (\[u2f\]) becomes negative for very low temperature. This is an artifact of the truncation of the expansion in the strength of the disorder. For the case of point defects the problem is equivalent to a directed polymer in a combination of a random potential and a fixed harmonic potential. This problem has been investigated by MP [@mp], who were mainly concerned with the limit of $\mu \rightarrow 0$. In this case the variational quadratic Hamiltonian is parametrized by: $$\begin{aligned} h_n &=&\frac 12\int_0^Ldz\sum_a[\widetilde{\epsilon }{\bf \dot r}_a^2+\mu {\bf r}_a^2] \nonumber \\ &&\ \ -\frac 12\int_0^Ldz\sum_{a,b}^{}s_{ab}\ {\bf r}_a(z)\cdot {\bf r}_b(z), \label{hnpd}\end{aligned}$$ with the elements of $s_{ab}$ all constants as opposed to the case of columnar disorder. The replica symmetric solution to the variational equations is simply given by : $$\begin{aligned} s &=&s_d=\frac{2\xi }T\widehat{f}\ ^{\prime }(\tau ) \label{s,sd} \\ u_0^2(\ell ) &=&2T\int_{-\infty }^\infty \frac{d\omega }{2\pi }\frac{1-\cos (\omega \ell )}{\widetilde{\epsilon \ }\omega \,^2+\mu } \left( 1+ \frac{s_d}{ \widetilde{\epsilon \ }\omega \,^2+\mu}\right) \label{u2p}\end{aligned}$$ and hence $$\begin{aligned} u_0^2(\ell ) &=&\tau (1-e^{-\ell /\ell ^{*}})+\tau \ s_d\ /\ (2\mu ) \nonumber \\ &&\ \ \times \ (1-e^{-\ell /\ell ^{*}}-(\ell /\ell ^{*})\ e^{-\ell /\ell ^{*}}). 
\label{u2p2}\end{aligned}$$ In eq. (\[s,sd\]) $\widehat{f}$ is the same function as defined in eq. (\[f\]), with $g$ replaced by $\widetilde{\Delta }$. As opposed to the case of columnar disorder, in this case $s_d$ is positive and independent of $\omega $, and hence the mean square displacement $u_0^2(\ell ^{*})$ is bigger than its value for zero disorder. Fig. 1 curve [*c*]{} shows a plot of $\sqrt{u_0^2(\ell ^{*})}/a_0$ vs. $T$ for $\widetilde{\Delta }/2\pi =0.8$. Again $B=1/900$. For $T<T_{cp}\approx (\epsilon _0\xi /\gamma )(\gamma \widetilde{\Delta }/2\pi )^{1/3}$ it is necessary to break replica symmetry, as shown by MP [@mp]. This means that the off-diagonal elements of the variational matrix $s_{ab}$ are not all equal to each other. MP worked out the solution in the limit of $\mu \rightarrow 0$, but it is not difficult to extend it to any value of $\mu $. We have worked out the first-stage RSB solution, which is all that is required for a random potential with short-ranged correlations. The analytical expression is not shown here for lack of space. The solution is represented by curve [*d*]{} in Fig. 1, which consists of upward triangles. The modified melting line in the presence of disorder is indicated by the curve [*c*]{} in Fig. 2 for $T>T_{cp}$. For $T<T_{cp}$ the so-called [*entanglement line*]{} is represented by curve [*d*]{} of filled squares. The value of the magnetic field $B_m(T_{cp})\approx (\Phi _0/\xi ^2)(\gamma \widetilde{\Delta }/2\pi )^{-2/3}c_L^4$ gives a reasonable agreement with the experiments. The analytical expressions given in eqs. (\[u2f\]), (\[u2p2\]), though quite simple, seem to capture the essential features required to reproduce the position of the melting line. The qualitative agreement with experimental results is remarkable, especially the opposite effects of columnar and point disorder on the position of the melting line.
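The replica-symmetric point-disorder result is simple enough to transcribe directly. The sketch below (our own, in the rescaled units of the text: $\xi=\epsilon_{0}=\gamma=1$, so $\mu=B$) implements eqs. (\[s,sd\]) and (\[u2p2\]) and checks that $s_{d}>0$ indeed enhances the wandering relative to the clean result (\[u2g0\]).

```python
import numpy as np

Delta_over_2pi = 0.8          # point-disorder strength used for curve (c)
B = 1.0 / 900.0               # mu = B in rescaled units

def f_hat_prime(y):
    """f'(y) for the dressed correlator of eq. (f), with g -> Delta-tilde."""
    return (Delta_over_2pi / 2.0) / (1.0 + y)**2

def u2_point(T, x=1.0):
    """Eq. (u2p2) evaluated at x = ell / ell*."""
    tau = T / np.sqrt(B)
    s_d = (2.0 / T) * f_hat_prime(tau)          # eq. (s,sd)
    return (tau * (1.0 - np.exp(-x))
            + tau * s_d / (2.0 * B)
              * (1.0 - np.exp(-x) - x * np.exp(-x)))

T = 0.05
clean = (T / np.sqrt(B)) * (1.0 - np.exp(-1.0))
print(u2_point(T) > clean)    # True: point disorder enhances wandering
```

Since $s_d>0$ and the bracket $(1-e^{-x}-xe^{-x})$ is positive for $x>0$, the correction is always positive, mirroring the opposite sign of the columnar case.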
The ’as grown’ experimental results correspond to a very small amount of point disorder, and are thus close to the line of no disorder in the figures. At low temperature, the entanglement transition is associated in our formalism with RSB, and is a sort of spin-glass transition in the sense that many minima of the random potential, and hence of the free energy, compete with each other. In this paper we worked out the one-step RSB for the case of point disorder. The experiments show that in the case of columnar disorder the transition into the vortex glass seems to be absent. This has to be further clarified theoretically. We have shown that the [*cage model*]{}, together with the variational approximation, reproduces the main features of the experiments. Effects of the many-body interaction between vortex lines, which are not taken into account by the effective cage model, seem to be of secondary importance. Inclusion of such effects within the variational formalism remains a task for the future. For point disorder, in the limit of an infinite cage ($\mu \rightarrow 0$), the variational approximation gives a wandering exponent of 1/2 for a random potential with short-ranged correlations [@mp], whereas simulations give a value of 5/8 [@halpin]. This discrepancy does not seem of importance with respect to the conclusions obtained in this paper. Another point to notice is that columnar disorder is much more effective in shifting the position of the melting line than point disorder in the range of parameters considered here. We have used a much weaker value of correlated disorder to achieve a similar or even larger shift of the melting line than for the case of point disorder. The fact that the random potential does not vary along the $z$-axis enhances its effect on the vortex lines. We thank David Nelson and Eli Zeldov for discussions. We thank the Weizmann Institute for a Michael Visiting Professorship, during which this research has been carried out. G.
Blatter [*et al.*]{}, Rev. Mod. Phys. [**66**]{}, 1125 (1994). E. Zeldov [*et al.*]{}, Nature [**375**]{}, 373 (1995); see also H. Pastoria [*et al.*]{}, Phys. Rev. Lett. [**72**]{}, 2951 (1994). M. Feigelman [*et al.*]{}, Phys. Rev. Lett. [**63**]{}, 2303 (1989); A. I. Larkin and V. M. Vinokur, ibid. [**75**]{}, 4666 (1995). D. S. Fisher, M. P. A. Fisher and D. A. Huse, Phys. Rev. [**B43**]{}, 130 (1990). T. Giamarchi and P. Le Doussal, Phys. Rev. Lett. [**72**]{}, 1530 (1994); Phys. Rev. [**B52**]{}, 1242 (1995); see also T. Nattermann, Phys. Rev. Lett. [**64**]{}, 2454 (1990). B. Khaykovitch [*et al.*]{}, Phys. Rev. Lett. [**76**]{}, 2555 (1996) and preprint (1996). D. Ertas and D. R. Nelson, preprint, cond-mat/9607142 (1996). D. R. Nelson, Phys. Rev. Lett. [**60**]{}, 1973 (1988); D. R. Nelson and V. M. Vinokur, Phys. Rev. [**B48**]{}, 13060 (1993). Y. Y. Goldschmidt, Phys. Rev. E [**53**]{}, 343 (1996); see also Phys. Rev. Lett. [**74**]{}, 5162 (1995). M. Mezard and G. Parisi, J. Phys. I (France) [**1**]{}, 809 (1991). T. Halpin-Healy and Y.-C. Zhang, Phys. Rep. [**254**]{}, 215 (1995) and references therein. Figure Captions: Fig. 1: Transverse fluctuations in the cage model for (a) no disorder, (b) columnar disorder, (c) point disorder, (d) RSB for point disorder. Fig. 2: Melting line for (a) no disorder, (b) columnar disorder, (c) point disorder, (d) entanglement line for point disorder.
--- abstract: 'We present a general scheme for the construction of noiseless networks detecting entanglement with the help of linear, hermiticity-preserving maps. We show how to apply the method to detect entanglement of an unknown state without its prior reconstruction. In particular, we prove that there always exists a noiseless network detecting entanglement with the help of positive, but not completely positive, maps. Then the generalization of the method to the case of entanglement detection with arbitrary, not necessarily hermiticity-preserving, linear contractions on product states is presented.' author: - Paweł Horodecki - Remigiusz Augusiak - Maciej Demianowicz title: | General construction of noiseless networks detecting entanglement\ with help of linear maps --- Introduction ============ It has been known that entanglement can be detected with the help of a special class of maps called positive maps [@sep; @Peres; @book]. In particular there is an important criterion [@sep] saying that $\varrho$ acting on a given product Hilbert space $\mathcal{H}_{A}{\otimes}\mathcal{H}_{B}$ is separable if and only if for all positive (but not completely positive) maps $\Lambda : \mathcal{B}(\mathcal{H}_{B})\rightarrow \mathcal{B}(\mathcal{H}_{A})$ [@B] the operator $$X_{\Lambda}(\varrho)=[I \otimes \Lambda](\varrho)$$ has only non-negative eigenvalues, which is usually written as $$[I \otimes \Lambda](\varrho) \geq 0 \label{PositiveMaps1}.$$ Here by $I$ we denote the identity map acting on $\mathcal{B}(\mathcal{H}_{A})$. Since any positivity-preserving map is also hermiticity-preserving, it makes sense to speak about eigenvalues of $X_{\Lambda}(\varrho)$. However, it should be emphasized that there are many such $\Lambda$s (and, equivalently, corresponding criteria), and characterizing them is a hard and still unsolved problem (see, e.g., Ref. [@Kossakowski] and references therein). For a long time the above criterion has been treated as purely mathematical.
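The simplest instance of criterion (\[PositiveMaps1\]) takes $\Lambda$ to be the transposition map $T$, which is positive but not completely positive: partial transposition of an entangled state may acquire a negative eigenvalue. A minimal numerical sketch of this "mathematical" usage for two qubits (our own illustration; all names are ours):

```python
import numpy as np

def partial_transpose(rho, d):
    """Apply I ⊗ T to a density matrix on C^d ⊗ C^d."""
    r = rho.reshape(d, d, d, d)                 # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)  # transpose subsystem B

# Maximally entangled two-qubit state |psi> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)

eigs = np.linalg.eigvalsh(partial_transpose(rho, 2))
print(eigs.min())   # -0.5: a negative eigenvalue witnesses entanglement
```

For any separable state the same spectrum stays non-negative, in accordance with the criterion.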
One used to take the matrix $\varrho$ (obtained in some [*prior*]{} state estimation procedure), put it into the formula (\[PositiveMaps1\]), calculate its spectrum, and draw the conclusion. However it can be seen that for, say, states acting on $\mathcal{H}_{A}{\otimes}\mathcal{H}_{B}\sim \mathbb{C}^{d}{\otimes}\mathbb{C}^{d}$ and maps $\Lambda : \mathcal{B}(\mathbb{C}^{d}) \rightarrow \mathcal{B}(\mathbb{C}^{d})$, the spectrum of the operator $X_{\Lambda}(\varrho)$ consists of $n_{\mathrm{spec}}=d^{2}$ elements, while full [*prior*]{} estimation of such states corresponds to $n_{\mathrm{est}}=d^{4}-1$ parameters. The question was raised [@PHAE] as to whether one can perform the test (\[PositiveMaps1\]) physically, without the necessity of [*prior*]{} tomography of the state $\varrho$, despite the fact that the map $I{\otimes}\Lambda$ is not physically realizable. The answer was [@PHAE] that one can use the structural physical approximation (SPA) $\widetilde{I \otimes \Lambda}$ of the un–physical map $I \otimes \Lambda$, which is physically realizable, while at the same time the spectrum of the state $$\tilde{X}_{\Lambda}(\varrho)=[\widetilde{I \otimes \Lambda}](\varrho)$$ is just an affine transformation of that of the (unphysical) operator $X_{\Lambda}(\varrho)$. The spectrum of $\tilde{X}_{\Lambda}(\varrho)$ can be measured with the help of the spectrum estimator [@Estimator], which requires estimation of only $d^{2}$ parameters that (because of affinity) are in one-to-one correspondence with the needed spectrum of (\[PositiveMaps1\]). Note that for $2{\otimes}2$ systems (the composite system of two qubits), similar approaches lead to a method of detection of entanglement measures (concurrence [@Concurrence] and entanglement of formation [@EoF]) without state reconstruction [@PHPRL].
The disadvantage of the above method is [@Carteret] that the realization of the SPA requires adding noise to the system (we have to add some controlled ancillas, couple them to the system, and then trace them out). In Ref. [@Carteret] the question was raised about the existence of noiseless quantum networks, i.e., those whose only input data are: (i) unknown quantum information represented by $\varrho^{{\otimes}m}$, (ii) the controlled measured qubit which gives us the spectrum moments (see Ref. [@Estimator]). It was shown that for at least one positive map (transposition $T$) a noiseless network exists [@Carteret]. Such networks for the two-qubit concurrence and the three-qubit tangle have also been designed [@Carteret2]. In the present paper we ask a general question: do noiseless networks work only for special maps (functions), or do they exist for any positive map test? In the case of a positive answer to the latter: is it possible to design a general method for constructing them? Can it be adapted to criteria other than the one defined in (\[PositiveMaps1\])? For this purpose we first show how to measure the spectrum of the matrix $\Theta(\varrho)$, where $\Theta : \mathcal{B}(\mathbb{C}^{m})\rightarrow\mathcal{B}(\mathbb{C}^{m})$ is an arbitrary linear, hermiticity-preserving map and $\varrho$ is a given density operator acting on $\mathbb{C}^{m}$, with the help of only $m$ estimated parameters instead of $m^{2}-1$. For bipartite $\varrho$, where $m=d^{2}$, this gives $d^{2}$ instead of $d^{4}-1$. This approach is consistent with previous results [@Grassl; @Leifer; @Brun] where arbitrary polynomials of the elements of a given state $\varrho$ have been considered. In these works it was shown that any polynomial of at most $k$-th degree in a density matrix $\varrho$ can be measured with the help of two collective observables on $k$ copies of $\varrho$. In fact one can treat the moments of $\Theta(\varrho)$, which we analyze below, as polynomials belonging to such a class.
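The reduction of $k$-th degree polynomials to collective measurements on $k$ copies rests on the standard identity ${\mathrm{Tr}}[V^{(k)}X^{\otimes k}]={\mathrm{Tr}}\,X^{k}$ for the cyclic shift $V^{(k)}$ used below. A small numerical check (our own sketch; all names are ours):

```python
import numpy as np
from functools import reduce

def cyclic_shift(m, k):
    """V^(k) on (C^m)^{⊗k}: |e1,...,ek> -> |ek,e1,...,e_{k-1}>."""
    V = np.zeros((m**k, m**k))
    for idx in np.ndindex(*([m] * k)):
        src = np.ravel_multi_index(idx, [m] * k)
        dst = np.ravel_multi_index((idx[-1],) + idx[:-1], [m] * k)
        V[dst, src] = 1.0
    return V

rng = np.random.default_rng(0)
m, k = 3, 4
X = rng.standard_normal((m, m))
X = X + X.T                      # any hermitian matrix will do
Xk = reduce(np.kron, [X] * k)    # X^{⊗k}

lhs = np.trace(cyclic_shift(m, k) @ Xk)
rhs = np.trace(np.linalg.matrix_power(X, k))   # sum of k-th powers of eigenvalues
print(np.isclose(lhs, rhs))      # True
```

Chaining the contractions $X[i_1,i_k]X[i_2,i_1]\cdots X[i_k,i_{k-1}]$ over the permuted indices is exactly what produces ${\mathrm{Tr}}\,X^{k}$.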
We derive the explicit form of the observables for the sake of possible future applications. Moreover, the approach presented in this paper allows for a quite natural identification of the observable that detects an arbitrary polynomial of the state $\varrho$ subjected to some transformation $\Theta$. Then we provide an immediate application to entanglement detection, showing that for suitable $\Theta$ the scheme constitutes just the right method for detecting entanglement without prior state reconstruction, with the help of either the positive map criteria (\[PositiveMaps1\]) or the linear contraction methods discussed later. General scheme for construction of noiseless network detecting spectrum of $\Theta(\varrho)$ {#general} ============================================================================================ Construction of an observable ----------------------------- Since the $m \times m$ matrix $\Theta(\varrho)$ is hermitian, its spectrum may be calculated using only the $m$ numbers $$\label{alfy} \alpha_{k}\equiv {\mathrm{Tr}}[\Theta(\varrho)]^{k}=\sum_{i=1}^{m}\lambda_{i}^{k}\qquad (k=1,\ldots,m),$$ where $\lambda_{i}$ are the eigenvalues of $\Theta(\varrho)$. We shall show that all these spectrum moments can be represented by mean values of special observables. To this aim let us consider the permutation operator $V^{(k)}$ defined by the formula $$\label{VKa} V^{(k)} |e_{1}\rangle|e_{2}\rangle \otimes ... \otimes |e_{k} \rangle= |e_{k}\rangle|e_{1}\rangle \otimes ... \otimes |e_{k-1} \rangle,$$ where $k=1,\ldots,m$ and ${|e_{i}\rangle}$ are vectors from $\mathbb{C}^{m}$. One can see that $V^{(1)}$ is just the identity operator $\mathbbm{1}_{m}$ acting on $\mathbb{C}^{m}$. Combining Eqs. (\[alfy\]) and (\[VKa\]) we infer that $\alpha_{k}$ may be expressed by the relation $$\alpha_{k}={\mathrm{Tr}}\left\{V^{(k)}[\Theta(\varrho)]^{\otimes k}\right\}, \label{alfa}$$ which is a generalization of the formula from Refs.
[@Estimator; @PHAE] where $\Theta$ was (unlike here) required to be a physical operation. At this stage a careful analysis of the right–hand side of Eq. (\[alfa\]) shows that $\alpha_{k}$ is a polynomial of at most $k$-th degree in the matrix elements of $\varrho$. This, together with the observation of Refs. [@Brun; @Grassl; @Leifer], already allows us to construct a single collective observable that detects $\alpha_{k}$. However, for the sake of possible future applications we derive the observable explicitly below. To this aim we first notice that $\alpha_{k}$ may equally be obtained using the hermitian conjugate of $V^{(k)}$, which is again a permutation operator but permutes the states ${|e_{i}\rangle}$ in the reversed order. Therefore all the numbers $\alpha_{k}$ may be expressed as $$\label{alfa2} \alpha_{k}=\frac{1}{2}{\mathrm{Tr}}\left[\left(V^{(k)}+V^{(k)\dagger}\right)\Theta(\varrho)^{\otimes k}\right].$$ Let us focus for a while on the map $\Theta$. Due to its hermiticity-preserving property it may be expressed as $$\Theta(\cdot)=\sum_{j=0}^{m^{2}-1}\eta_{j}K_{j}(\cdot)K_{j}^{\dagger}$$ with $\eta_{j}\in\mathbb{R}$ and $K_{j}$ being linearly independent $m$-by-$m$ matrices. By virtue of this fact and some well-known properties of the trace, after rather straightforward algebra we may rewrite Eq. (\[alfa2\]) as $$\label{alfa3} \alpha_{k}=\frac{1}{2}{\mathrm{Tr}}\left[\left(\Theta^{\dagger}\right)^{{\otimes}k}\left(V^{(k)}+V^{(k)\dagger}\right)\varrho^{{\otimes}k}\right],$$ where $\Theta^{\dagger}$ is the dual map to $\Theta$, given by $\Theta^{\dagger}(\cdot)=\sum_{i}\eta_{i}K_{i}^{\dagger}(\cdot)K_{i}$. Here we have applied the map $(\Theta^{\dagger})^{{\otimes}k}$ to the operator $V^{(k)}+V^{(k)\dagger}$ instead of applying $\Theta^{{\otimes}k}$ to $\varrho^{\otimes k}$. 
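The chain of identities (\[alfa\])–(\[alfa3\]) is easy to check numerically. A minimal NumPy sketch follows; the random Kraus-type decomposition of $\Theta$, the chosen dimensions, and all variable names are our own illustrative choices, not part of the scheme itself:

```python
import itertools
import numpy as np

def cyclic_shift(m, k):
    """Permutation V^(k) on (C^m)^{(x)k}: |e1>...|ek> -> |ek>|e1>...|e_{k-1}>."""
    dims = (m,) * k
    V = np.zeros((m ** k, m ** k))
    for col in range(m ** k):
        e = np.unravel_index(col, dims)
        V[np.ravel_multi_index((e[-1],) + e[:-1], dims), col] = 1.0
    return V

def kron_power(X, k):
    out = np.eye(1, dtype=complex)
    for _ in range(k):
        out = np.kron(out, X)
    return out

rng = np.random.default_rng(0)
m, k = 2, 3
# a hermiticity-preserving map Theta(.) = sum_j eta_j K_j (.) K_j^dagger
etas = rng.normal(size=2)
Ks = [rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)) for _ in range(2)]

def theta(x):
    return sum(e * K @ x @ K.conj().T for e, K in zip(etas, Ks))

G = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
rho = G @ G.conj().T
rho /= np.trace(rho)

V = cyclic_shift(m, k)
# Eq. (alfa): Tr[V^(k) Theta(rho)^{(x)k}] equals Tr[Theta(rho)^k]
alpha = np.trace(V @ kron_power(theta(rho), k))
assert np.isclose(alpha, np.trace(np.linalg.matrix_power(theta(rho), k)))

# Eq. (alfa3): move the dual map (Theta^dagger)^{(x)k} onto V^(k)+V^(k)†
W = np.zeros_like(V, dtype=complex)
for js in itertools.product(range(len(Ks)), repeat=k):
    A = np.eye(1, dtype=complex)
    for j in js:
        A = np.kron(A, Ks[j].conj().T)   # Theta^dagger(.) = sum_j eta_j K_j† (.) K_j
    W += np.prod([etas[j] for j in js]) * A @ (V + V.conj().T) @ A.conj().T
assert np.isclose(0.5 * np.trace(W @ kron_power(rho, k)), alpha.real)
```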
This apparently purely mathematical trick, together with the fact that the square brackets above contain a hermitian operator, allows us to express the numbers $\alpha_{k}$ as mean values of some observables in the state $\varrho^{{\otimes}k}$. Indeed, introducing $$\label{obs} \mathcal{O}^{(k)}_{\Theta}= \frac{1}{2}\left(\Theta^{\dagger}\right)^{{\otimes}k}\left(V^{(k)}+V^{(k)\dagger}\right)$$ we arrive at $$\alpha_{k}={\mathrm{Tr}}\left[\mathcal{O}^{(k)}_{\Theta} \varrho^{\otimes k}\right]. \label{MeanValues}$$ In general, a naive measurement of all the mean values would require estimation of many more parameters than $m$. But there is a possibility of building a unitary network that requires estimation of exactly $m$ parameters, using the idea that we recall and refine below. Finally, let us notice that the above approach generalizes measurements of polynomials of the elements of $\varrho$ in the sense that it shows explicitly how to measure polynomials of the elements of $\Theta(\varrho)$. Of course, this is only of rather conceptual importance since both issues are mathematically equivalent and have their origin in Refs. [@Grassl; @Leifer; @Brun]. Detecting the mean of an observable by a measurement on a single qubit, revisited ------------------------------------------------------------------------ Let $\mathcal{A}$ be an arbitrary observable (it may even be infinite dimensional) whose spectrum lies between finite numbers $a^{\min}_{\mathcal{A}}$ and $a^{\max}_{\mathcal{A}}$, and let $\sigma$ be a state acting on $\mathcal{H}$. In Ref. [@binpovm] it has been pointed out that the mean value $\langle \mathcal{A} \rangle_{\sigma}= {\mathrm{Tr}}\mathcal{A}\sigma$ may be estimated in a process involving the measurement of only one qubit. This fact is in good agreement with the subsequent proof that single qubits may serve as interfaces connecting quantum devices [@Lloyd]. Below we recall the mathematical details of the measurement proposed in Ref. [@binpovm]. 
At the beginning one defines the following numbers $$a^{(-)}_{\mathcal{A}}\equiv \max\{0,-a^{\min}_{\mathcal{A}}\},\qquad a^{(+)}_{\mathcal{A}} \equiv a^{(-)}_{\mathcal{A}}+a^{\max}_{\mathcal{A}},$$ and observes that the hermitian operators $$\begin{aligned} V_{0}=\sqrt{\left(a^{(-)}_{\mathcal{A}}\mathbbm{1}_{\mathcal{H}}+\mathcal{A}\right)\Big/a^{(+)}_{\mathcal{A}}}\end{aligned}$$ and $$V_{1}=\sqrt{\mathbbm{1}_{\mathcal{H}} - V_{0}^\dagger V_{0}}$$ satisfy $\sum_{i=0}^{1}V_{i}^{\dagger}V_{i}=\mathbbm{1}_{\mathcal{H}}$ [@Identity] and as such define a generalized quantum measurement, which can easily be extended to a unitary evolution (see Appendix A of Ref. [@APS] for a detailed description). Consider a partial isometry on the Hilbert space $\mathbb{C}^{2} \otimes \mathcal{H}$ defined by the formula $$\tilde{U}_{\mathcal{A}}=\sum_{i=0}^{1} |i\rangle \langle 0| \otimes V_{i}=\left( \begin{array}{cc} V_{0} & 0\\ V_{1} & 0 \end{array} \right).$$ The first Hilbert space, $\mathbb{C}^{2}$, represents the qubit which shall be measured in order to estimate the mean value $\langle \mathcal{A} \rangle_{\sigma}$. 
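The completeness relation and the partial-isometry property are easy to verify numerically; a small NumPy sketch with a random bounded observable (dimension and names are our illustrative choices):

```python
import numpy as np

def herm_sqrt(M):
    """Positive square root of a positive semidefinite hermitian matrix."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

rng = np.random.default_rng(1)
d = 4
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (G + G.conj().T) / 2                      # observable with bounded spectrum
w = np.linalg.eigvalsh(A)
a_minus = max(0.0, -w.min())
a_plus = a_minus + w.max()

V0 = herm_sqrt((a_minus * np.eye(d) + A) / a_plus)
V1 = herm_sqrt(np.eye(d) - V0.conj().T @ V0)

# completeness of the generalized measurement {V0, V1}
assert np.allclose(V0.conj().T @ V0 + V1.conj().T @ V1, np.eye(d))

# the partial isometry U~ = sum_i |i><0| (x) V_i satisfies U~† U~ = |0><0| (x) 1
ket0bra0 = np.array([[1.0, 0.0], [0.0, 0.0]])
ket1bra0 = np.array([[0.0, 0.0], [1.0, 0.0]])
Ut = np.kron(ket0bra0, V0) + np.kron(ket1bra0, V1)
assert np.allclose(Ut.conj().T @ Ut, np.kron(ket0bra0, np.eye(d)))
```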
The partial isometry can always be extended to a unitary $U_{\mathcal{A}}$ such that if it acts on $|0 \rangle \langle 0| \otimes \sigma$ then the final measurement of the observable $\sigma_{z}$ [@Pauli] on the first (qubit) system gives the probabilities of “spin-up” (of finding it in the state $|0\rangle$) and “spin-down” (of finding it in the state $|1\rangle$), respectively, of the form $$p_{0}={\mathrm{Tr}}\left(V_{0}^\dagger V_{0}\sigma\right), \qquad p_{1}={\mathrm{Tr}}\left(V_{1}^\dagger V_{1}\sigma\right)=1-p_{0}.$$ One of the possible extensions of $\tilde{U}_{\mathcal{A}}$ to a unitary on $\mathbb{C}^{2}{\otimes}\mathcal{H}$ is the following: $$\label{isometry} U_{\mathcal{A}}=\left( \begin{array}{cc} V_{0} & -V_{1}\\ V_{1} & V_{0} \end{array} \right)=\mathbbm{1}_{2}{\otimes}V_{0}-i\sigma_{y}{\otimes}V_{1}.$$ The unitarity of $U_{\mathcal{A}}$ follows from the fact that the operators $V_{0}$ and $V_{1}$ commute. For practical reasons, instead of the unitary operation representing the POVM $\{V_{0},V_{1}\}$ we shall consider $$\label{Udet} U^{\mathrm{det}}(\mathcal{A},U'_{\mathcal{H}})=\left(\mathbbm{1}_{2} {\otimes}U_{\mathcal{H}}'\right)U_{\mathcal{A}}\left(\mathbbm{1}_{2}{\otimes}U_{\mathcal{H}}'\right)^{\dagger},$$ where $\mathbbm{1}_{2}$ is the identity operator on the one-qubit Hilbert space $\mathbb{C}^{2}$ and $U_{\mathcal{H}}'$ is an arbitrary unitary operation that acts on ${\cal H}$ and simplifies the decomposition of $U_{\mathcal{A}}$ into elementary gates. 
Now, if we define the mean value of a measurement of $\sigma_{z}$ on the first qubit after the action of the network (a quantity which is sometimes called the visibility): $$v_{\mathcal{A}}={\mathrm{Tr}}\left[ \left(\sigma_{z}\otimes \mathbbm{1}_{\mathcal{H}}\right) \left(\mathbbm{1}_{2} \otimes U_{\mathcal{H}}'\right) U_{\mathcal{A}} \mathcal{P}_{0}{\otimes}\sigma U^{\dagger}_{\mathcal{A}}\left(\mathbbm{1}_{2} \otimes U_{\mathcal{H}}'\right)^{\dagger}\right], \label{vis}$$ where $\mathcal{P}_{0}$ is the projector onto the state ${|0\rangle}$, i.e., $\mathcal{P}_{0}={{|0\rangle}{\langle0|}}$, then we have an easy formula for the mean value of the initial observable $\mathcal{A}$: $$\label{meanA} \langle \mathcal{A}\rangle_{\sigma}=a^{(+)}_{\mathcal{A}}p_{0}- a^{(-)}_{\mathcal{A}}=a^{(+)}_{\mathcal{A}}\frac{v_{\mathcal{A}}+1}{2}-a^{(-)}_{\mathcal{A}}.$$ A general scheme of a network estimating the mean value (\[meanA\]) is provided in Fig. \[Fig1\]. ![General scheme of a network for estimating the mean value of an observable $\mathcal{A}$, with a bounded spectrum, in a given state $\sigma$. Both $U_{\mathcal{H}}'$ and its conjugate $U_{\mathcal{H}}'^{\dagger}$ standing before $U_{\mathcal{A}}$ can obviously be removed, as together they give rise to the identity; the last unitary on the bottom wire can be removed as it does not affect the measurement statistics on the top qubit. However, they have been included to simplify the subsequent network structure.[]{data-label="Fig1"}](network1.eps){width="8cm"} We have put an additional unitary operation on the bottom wire after the unitary $U_{\mathcal{A}}$ (which does not change the statistics of the measurement on the control qubit) and divided the identity operator into two unitaries acting on that wire, which explicitly shows how the simplification introduced in Eq. (\[Udet\]) works in practice. 
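An end-to-end numerical check of Eqs. (\[isometry\])–(\[meanA\]) is straightforward; the sketch below takes $U_{\mathcal{H}}'=\mathbbm{1}$ and a random observable and state (all names and dimensions are our illustrative choices):

```python
import numpy as np

def herm_sqrt(M):
    """Positive square root of a positive semidefinite hermitian matrix."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

rng = np.random.default_rng(2)
d = 4
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (G + G.conj().T) / 2
w = np.linalg.eigvalsh(A)
a_minus, a_plus = max(0.0, -w.min()), max(0.0, -w.min()) + w.max()
V0 = herm_sqrt((a_minus * np.eye(d) + A) / a_plus)
V1 = herm_sqrt(np.eye(d) - V0 @ V0)

UA = np.block([[V0, -V1], [V1, V0]])          # Eq. (isometry)
assert np.allclose(UA.conj().T @ UA, np.eye(2 * d))

H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
sigma = H @ H.conj().T
sigma /= np.trace(sigma)

P0 = np.array([[1.0, 0.0], [0.0, 0.0]])       # projector |0><0| on the control qubit
out = UA @ np.kron(P0, sigma) @ UA.conj().T
sz = np.kron(np.diag([1.0, -1.0]), np.eye(d))
v = np.trace(sz @ out).real                   # visibility, Eq. (vis) with U'_H = 1
mean_rec = a_plus * (v + 1) / 2 - a_minus     # Eq. (meanA)
assert np.isclose(mean_rec, np.trace(A @ sigma).real)
```

The last assertion confirms that the mean value of $\mathcal{A}$ in $\sigma$ is recovered exactly from a single-qubit measurement statistic.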
Now one may ask if the mean value $\langle\mathcal{A}\rangle_{\sigma}$ belongs to some fixed interval, i.e., $$\label{int} c_{1}\le\langle\mathcal{A}\rangle_{\sigma}\le c_{2},$$ where $c_{1}$ and $c_{2}$ are real numbers from the interval $[a_{\mathcal{A}}^{\mathrm{min}},a_{\mathcal{A}}^{\mathrm{max}}]$ covering the spectrum of $\mathcal{A}$ (e.g., if $\mathcal{A}$ is an entanglement witness and we want to check the entanglement of a state $\sigma$ then we can put $c_{1}=0$ and $c_{2}=a_{\mathcal{A}}^{\mathrm{max}}$, and condition (\[int\]) reduces to $\langle\mathcal{A}\rangle_{\sigma}\ge 0$). Then one easily infers that the condition (\[int\]) rewritten in terms of the visibility is $$2\frac{c_{1}+a_{\mathcal{A}}^{(-)}}{a_{\mathcal{A}}^{(+)}}-1\le v_{\mathcal{A}}\le 2\frac{c_{2}+a_{\mathcal{A}}^{(-)}}{a_{\mathcal{A}}^{(+)}}-1.$$ Having the general network estimating $v_{\mathcal{A}}$, one needs to decompose the isometry $U_{\mathcal{A}}$ into elementary gates. One of the possible ways to achieve this goal is, as we shall see below, to diagonalize the operator $V_{0}$. Hence we may choose $U_{\mathcal{H}}'$ (see Eq. (\[Udet\])) to be $$U_{\mathcal{H}}'=\sum_{\bold{k}} {|{\bold{k}}\rangle}{\langle\phi_{\bold{k}}|}$$ with ${|\phi _{\bold{k}}\rangle}$ being normalized eigenvectors of $V_{0}$ indexed by a binary number with length $2^k$. Since $V_{0}$ and $V_{1}$ commute, this operation diagonalizes $V_{1}$ as well. By virtue of these facts, Eq. (\[Udet\]) reduces to $$U^{\mathrm{det}}(\mathcal{A},U_{\mathcal{H}}')=\sum_{\bold{k}} U_{\bold{k}}\otimes{{|\bold{k}\rangle}{\langle\bold{k}|}},$$ with unitaries (as previously, indexed by a binary number) $$U_{\bold{k}}=\sqrt{\lambda_{\bold{k}}}\mathbbm{1}_{2}-i\sqrt{1-\lambda_{\bold{k}}}\sigma_y,$$ where $\lambda_{\bold{k}}$ are the eigenvalues of $V_{0}^{\dagger}V_{0}$. So in fact we have a combination of operations on the first qubit controlled by $2^k$ wires. All this combined gives us the network shown in Fig. \[Fig2\]. 
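The diagonalization-based decomposition can be checked numerically. In the sketch below (our conventions and names) we take $\lambda_{\bold{k}}$ to be the eigenvalues of $V_{0}^{\dagger}V_{0}$, so that $V_{0}$ has eigenvalues $\sqrt{\lambda_{\bold{k}}}$:

```python
import numpy as np

def herm_sqrt(M):
    """Positive square root of a positive semidefinite hermitian matrix."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

rng = np.random.default_rng(3)
d = 4
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (G + G.conj().T) / 2
w = np.linalg.eigvalsh(A)
a_minus, a_plus = max(0.0, -w.min()), max(0.0, -w.min()) + w.max()
V0 = herm_sqrt((a_minus * np.eye(d) + A) / a_plus)
V1 = herm_sqrt(np.eye(d) - V0 @ V0)
UA = np.block([[V0, -V1], [V1, V0]])

s, P = np.linalg.eigh(V0)              # V0 = P diag(s) P†, with lambda_k = s_k^2
Uprime = P.conj().T                    # U'_H = sum_k |k><phi_k|
conj = np.kron(np.eye(2), Uprime)
Udet = conj @ UA @ conj.conj().T       # Eq. (Udet)

# target: sum_k U_k (x) |k><k| with U_k = sqrt(lam) 1 - i sqrt(1 - lam) sigma_y
sy = np.array([[0.0, -1j], [1j, 0.0]])
target = np.zeros((2 * d, 2 * d), dtype=complex)
for k in range(d):
    lam = s[k] ** 2
    Uk = np.sqrt(lam) * np.eye(2) - 1j * np.sqrt(max(0.0, 1.0 - lam)) * sy
    proj = np.zeros((d, d))
    proj[k, k] = 1.0
    target += np.kron(Uk, proj)        # controlled single-qubit rotations
assert np.allclose(Udet, target)
```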
![Noiseless network for estimating the moments of $\Theta(\varrho)$ with $\varrho$ being a bipartite mixed state, i.e., a density matrix acting on $\mathbb{C}^{2}{\otimes}\mathbb{C}^{2}$.[]{data-label="Fig2"}](figure2.eps){width="8.5cm"} Now we are in the position to combine all the elements presented so far and show how, put together, they provide the general scheme for constructing a noiseless network for the spectrum of $\Theta(\varrho)$ for a given quantum state $\varrho$. For the sake of clarity, below we itemize all the steps necessary to obtain the spectrum of $\Theta(\varrho)$: (i) : Take all the observables $\mathcal{O}^{(k)}_{\Theta}$ $(k=1,\ldots,m)$ defined by Eq. (\[obs\]). (ii) : Construct the unitary operations $U_{\mathcal{O}^{(k)}}$ according to the given prescription. Consider the unitary operation $U^{\mathrm{det}}(\mathcal{A},U'_{\mathcal{H}})$ ($U'_{\mathcal{H}}$ arbitrary). Find a decomposition of the operation into elementary quantum gates and minimize the number of gates in the decomposition with respect to $U_{\mathcal{H}}'$. Build the (optimal) network found in this way. (iii) : Act with the network on the initial state $\mathcal{P}_{0} \otimes \varrho^{\otimes k}$. (iv) : Measure the “visibilities” $v_{\mathcal{O}^{(k)}_{\Theta}}$ $(k=1,\ldots,m)$ according to (\[vis\]). (v) : Using Eq. (\[meanA\]), calculate the values of $\alpha_{k}$ $(k=1,\ldots,m)$ representing the moments of $\Theta(\varrho)$. Detecting entanglement with networks: example --------------------------------------------- The first obvious application of the presented scheme is entanglement detection [*via*]{} positive but not completely positive maps. In fact, for any bipartite state $\varrho\in\mathcal{B}(\mathcal{H}_{A}{\otimes}\mathcal{H}_{B})$ we only need to substitute for $\Theta$ the map $\mathbbm{1}_{A} \otimes \Lambda_{B}$, with $\Lambda_{B}$ being some positive map. Then application of the above scheme immediately reproduces all the results of the schemes from Ref. 
[@PHAE] but without the additional noise added (the presence of which required more precision in the measurement of the visibility). As an illustrative example consider $\Lambda_{B}=T$, i.e., $\Theta$ is the partial transposition on the second subsystem (usually denoted by $T_B$ or by $\Gamma$), in $2\otimes 2$ systems. Due to the fact that partial transposition is trace–preserving we need only three numbers $\alpha _{k}$, ($k=2,3,4$), measurable [*via*]{} the observables $$\mathcal{O}^{(2)}_{T}=V_1^{(2)}\otimes V_2^{(2)}$$ and $$\mathcal{O}^{(3,4)}_{T}=\frac{1}{2}\left( V_1^{(3,4)}\otimes V_2^{(3,4)\dagger}+V_1^{(3,4)\dagger}\otimes V_2^{(3,4)}\right),$$ where the subscripts indicate that the permutation acts on the first and the second subsystems, respectively. The hermitian conjugation in the above may be replaced by transposition since the permutation operators have real entries. For simplicity we show only the network measuring the second moment of $\varrho^{T_{B}}$. The general scheme from Fig. \[Fig2\] then reduces to the scheme from Fig. \[Fig3\]. ![Network estimating the second moment of a partially transposed two-qubit density matrix $\varrho$. $U_{\mathcal{H}}'$ is decomposed into single qubit gates; here $\displaystyle U=(1/\sqrt{2})(\mathbbm{1}_{2}+i\sigma_{y})$. []{data-label="Fig3"}](figure3.eps){width="8.5cm"} Note that the network can also be regarded as one measuring the purity of a state, as ${\mathrm{Tr}}(\varrho ^{T_{B}})^2={\mathrm{Tr}}\varrho ^2$. Note also that this network is not optimal, since an alternative network [@Estimator] measuring ${\mathrm{Tr}}\varrho^{2}$ requires two controlled swaps. Extension to linear contractions criteria ========================================= The above approach may be generalized to the so-called [*linear contractions criteria*]{}. To see this let us recall that the powerful criterion called computable cross norm (CCN) or matrix realignment criterion has recently been introduced [@CCN; @CCN1]. 
This criterion is easy to apply (it involves a simple permutation of matrix elements) and has been shown [@CCN; @CCN1] to be independent of the positive partial transposition (PPT) test [@Peres]. It has been further generalized to the [*linear contractions criterion*]{} [@PartialCCN], which we recall below. If by $\varrho_{A_{i}}\;(i=1,\ldots,n)$ we denote density matrices acting on Hilbert spaces $\mathcal{H}_{A_{i}}$ and by $\tilde{\mathcal{H}}$ a certain Hilbert space, then for a linear map $\mathcal{R} : \mathcal{B}(\mathcal{H}_{A_{1}}{\otimes}\ldots{\otimes}\mathcal{H}_{A_{n}})\rightarrow \mathcal{B}(\tilde{\mathcal{H}})$ we have the following [**Theorem**]{} [@PartialCCN]. [*If ${\cal R}$ satisfies $$\label{Theorem} \left|\left|{\mathcal {R}}\left(\varrho_{A_1} {\otimes}\varrho_{A_2} {\otimes}\ldots {\otimes}\varrho_{A_n}\right)\right|\right|_{{\mathrm{Tr}}}\leq 1,$$ then for any separable state $\varrho_{A_1A_2 \ldots A_n}\in\mathcal{B}(\mathcal{H}_{A_{1}}{\otimes}\ldots{\otimes}\mathcal{H}_{A_{n}})$ one has*]{} $$\label{Theorem2} ||\mathcal {R}(\varrho_{A_1A_2 \ldots A_n})||_{{\mathrm{Tr}}}\leq 1.$$ The maps $\mathcal{R}$ satisfying (\[Theorem\]) are linear contractions on product states and hereafter they shall be called, in brief, linear contractions. In particular, the separability condition (\[Theorem2\]) comprises the generalization of the realignment test to the permutation criteria [@PartialCCN; @Chen] (see also Ref. [@Fan]). A noisy network for entanglement detection with the help of the latter has been proposed in Ref. [@PHPLA2003]. Here we improve this result in two ways, namely, by taking into account all maps $\mathcal{R}$ of type (\[Theorem\]) (not only permutation maps) and by introducing the corresponding noiseless networks instead of noisy ones. For these purposes we need to generalize the lemma from Ref. [@PHPLA2003], formulated previously only for real maps $\mathcal{S} : \mathcal{B}(\mathcal{H})\rightarrow\mathcal{B}(\mathcal{H})$. 
We represent the action of $\mathcal{S}$ on any $\varrho\in \mathcal{B}(\mathcal{H})$ as $${\mathcal{S}}(\varrho)=\sum_{ij,kl}{\mathcal{S}}_{ij,kl}{\mathrm{Tr}}(\varrho P_{ij}) P_{kl},$$ where in Dirac notation $P_{xy}=|x\rangle \langle y|$. Let us define the complex conjugate of the map $\mathcal{S}$ [*via*]{} complex conjugation of its elements, i.e., $${\mathcal{S}}^{*}(\varrho)=\sum_{ij,kl}{\mathcal{S}}_{ij,kl}^{*}{\mathrm{Tr}}(\varrho P_{ij}) P_{kl},$$ where the asterisk stands for complex conjugation. Then we have the following lemma, which is easy to prove by inspection: [**Lemma.**]{} [*Let ${\mathcal{S}}$ be an arbitrary linear map on $\mathcal{B}(\mathcal{H})$. Then the map ${\mathcal{S}}' \equiv [T \circ {\mathcal{S}}^{*} \circ T]$ satisfies ${\mathcal{S}}' (\varrho)=[{\mathcal{S}}(\varrho) ]^{\dagger}$.*]{} Now let us come back to the initial problem of this section. Suppose then that we have $\mathcal{R}$ satisfying Eq. (\[Theorem\]) and a given physical source producing copies of a system in a state $\varrho$ for which we would like to check Eq. (\[Theorem2\]). Let us observe that $$\label{24} ||\mathcal{R}(\varrho)||_{\mathrm{Tr}}=\sum_{i}\sqrt{\gamma_{i}},$$ where $\{\gamma_{i}\}$ are the eigenvalues of the operator $X_{\mathcal{R}}(\varrho)=\mathcal{R}(\varrho)\mathcal{R}(\varrho)^{\dagger}.$ Below we show how to find the spectrum $\{\gamma_{i}\}$. We need to apply our previous scheme from Sec. \[general\] to this special case. Let us define the map $L_{\mathcal{R}}=\mathcal{R} \otimes {\cal R}'$, where ${\cal R}$ is our linear contraction and $\mathcal{R}'$ is defined according to the prescription given in the Lemma above, i.e., $\mathcal{R}'=[T\circ \mathcal{R}^{*}\circ T]$. Let us also put $\varrho'=\varrho^{{\otimes}2}$ and apply the scheme presented above to detect the spectrum of $L_{\cal R}(\varrho')$. 
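As a concrete sanity check, one may take $\mathcal{R}$ to be the realignment map and verify both the Lemma and Eq. (\[24\]) on the two-qubit Bell state. The sketch below uses one common index convention for realignment (conventions differ only by index ordering, which leaves the singular values unchanged); all names are ours:

```python
import numpy as np

def realign(rho, d):
    """One common realignment convention: R(rho)_{(m n),(mu nu)} = rho_{(m mu),(n nu)}."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

d = 2
bell = np.zeros((d * d, 1))
bell[0] = bell[3] = 1 / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
rho = bell @ bell.conj().T

R = realign(rho, d)
X = R @ R.conj().T                     # X_R(rho) = R(rho) R(rho)†
gam = np.clip(np.linalg.eigvalsh(X), 0.0, None)

# the moments sum_i gamma_i^k are exactly what the network above estimates
assert np.isclose(np.trace(X @ X).real, (gam ** 2).sum())
# ||R(rho)||_Tr = sum_i sqrt(gamma_i) = 2 > 1: the Bell state violates (Theorem2)
assert np.isclose(np.sqrt(gam).sum(), 2.0)

# Lemma check on a random hermitian state: with R real (so R* = R),
# R'(tau) = [R(tau^T)]^T must equal [R(tau)]†
rng = np.random.default_rng(6)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
tau = G @ G.conj().T
tau /= np.trace(tau)
assert np.allclose(realign(tau.T, d).T, realign(tau, d).conj().T)
```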
It is easy to see that the moments detected in that way are $${\mathrm{Tr}}[L_{\mathcal{R}}(\varrho')]^{k}= {\mathrm{Tr}}\left[\mathcal{R}(\varrho)\mathcal{R}(\varrho)^{\dagger}\right]^{k}=\sum_{i}\gamma_{i}^{k}.$$ From the moments one easily reconstructs $\{\gamma_{i}\}$ and may check for violation of Eq. (\[Theorem2\]). Summary {#Summary} ======= We have shown how to detect the spectrum of the operator $\Theta(\varrho)$ for an arbitrary linear hermiticity-preserving map $\Theta$, given a source producing copies of the system in a state $\varrho$. The network involved in the measurement is noiseless in the sense of [@Carteret] and a measurement is required only on the control qubit. Further, we have shown how to apply the method to provide a general noiseless network scheme for detecting entanglement with the help of criteria belonging to either of two classes, namely, those involving positive maps and those applying linear contractions on product states. The structure of the proposed networks is not optimal and needs further investigation. Here, however, we have been interested in a quite fundamental question which is interesting in itself: [*Is it possible to get noiseless network schemes for any criterion from one of the above classes?*]{} Up to now their existence was known [*only*]{} for the special case of the positive partial transpose (cf. [@Carteret2]). Here we have provided a positive answer to the question. Finally, let us note that the above approach can be viewed as an application of collective observables \[see Eq. (\[MeanValues\])\]. The general paradigm initiated in Refs. [@PHPRL; @PHPRA2003] has recently been fruitfully applied in the context of general concurrence estimates [@AolitaMintert; @MintertBuchleitner], which have even been preliminarily illustrated experimentally. Moreover, recently a universal collective observable detecting any two-qubit entanglement has been constructed [@My]. 
It seems that the present approach needs further analysis from the point of view of collective observables, including especially collective entanglement witnesses (see [@PHPRA2003; @MintertBuchleitner]). P. H. thanks Artur Ekert for valuable discussions. The work is supported by the Polish Ministry of Science and Education under the grant No. 1 P03B 095 29, EU project QPRODIS (IST-2001-38877) and IP project SCALA. Figures were prepared with the help of the QCircuit package. [0]{} M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Lett. A [**223**]{}, 1 (1996). A. Peres, Phys. Rev. Lett. [**77**]{}, 1413 (1996). G. Alber [*et al.*]{}, [*Quantum Information: An Introduction To Basic Theoretical Concepts And Experiments, Springer Tracts in Modern Physics*]{} [**173**]{}, Springer, Berlin (2003). Here $\mathcal{B}(\mathcal{H}_{i})$ $(i=A,B)$ denotes bounded operators acting on $\mathcal{H}_{i}$. A. Kossakowski, Open Sys. Inf. Dyn. [**10**]{}, 221 (2003); G. Kimura and A. Kossakowski, [*ibid.*]{} [**11**]{}, 343 (2004). P. Horodecki and A. K. Ekert, Phys. Rev. Lett. [**89**]{}, 127902 (2002). A. K. Ekert, C. M. Alves, D. K. L. Oi, M. Horodecki, P. Horodecki, and L. C. Kwek, Phys. Rev. Lett. [**88**]{}, 217901 (2002). S. Hill and W. K. Wootters, Phys. Rev. Lett. [**78**]{}, 5022 (1997); W. K. Wootters, [*ibid.*]{} [**80**]{}, 2245 (1998). C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A [**54**]{}, 3824 (1996). P. Horodecki, Phys. Rev. Lett. [**90**]{}, 167901 (2003). H. A. Carteret, Phys. Rev. Lett. [**94**]{}, 040502 (2005). H. A. Carteret, quant-ph/0309212. T. A. Brun, Quant. Inf. Comp. [**4**]{}, 401 (2004). M. S. Leifer, N. Linden, and A. Winter, Phys. Rev. A [**69**]{}, 052304 (2004). M. Grassl, M. Roetteler, and T. Beth, Phys. Rev. A [**58**]{}, 1833-1839 (1998). P. Horodecki, Phys. Rev. A [**67**]{}, 060101(R) (2003). S. Lloyd, A. J. Landahl, and J.-J. E. Slotine, Phys. Rev. A [**69**]{}, 012305 (2004). 
By $\mathbbm{1}_{\mathcal{H}}$ we denote an identity operator on $\mathcal{H}$. We use standard Pauli matrices, i.e., $\sigma_{x}={|0\rangle}{\langle1|}+{|1\rangle}{\langle0|},\sigma_{y}= -i{|0\rangle}{\langle1|}+i{|1\rangle}{\langle0|}, \sigma_{z}={{|0\rangle}{\langle0|}}-{{|1\rangle}{\langle1|}}$. P. Horodecki, Acta Phys. Pol. A [**101**]{}, 399 (2002). O. Rudolph, quant-ph/0202121. K. Chen and L. A. Wu, Quant. Inf. Comp. [**3**]{}, 193 (2003). M. Horodecki, P. Horodecki, and R. Horodecki, Open Syst. Inf. Dyn. [**13**]{}, 103 (2006); quant-ph/0206008. K. Chen and L. A. Wu, Phys. Lett. A [**306**]{}, 14 (2002). H. Fan, quant-ph/0210168; P. Wocjan and M. Horodecki, Open Sys. Inf. Dyn. [**12**]{}, 331 (2005). P. Horodecki, Phys. Lett. A [**319**]{}, 1 (2003). P. Horodecki, Phys. Rev. A [**68**]{}, 052101 (2003). L. Aolita and F. Mintert, Phys. Rev. Lett. [**97**]{}, 050501 (2006). F. Mintert and A. Buchleitner, quant-ph/0605250. R. Augusiak, P. Horodecki, and M. Demianowicz, quant-ph/0604109.
--- abstract: 'This issue of *Statistical Science* draws its inspiration from the work of James M. Robins. Jon Wellner, the Editor at the time, asked the two of us to edit a special issue that would highlight the research topics studied by Robins and the breadth and depth of Robins’ contributions. Between the two of us, we have collaborated closely with Jamie for nearly 40 years. We agreed to edit this issue because we recognized that we were among the few in a position to relate the trajectory of his research career to date.' address: - 'Thomas S. Richardson is Professor and Chair, Department of Statistics, University of Washington, Box 354322, Seattle, Washington 98195, USA .' - 'Andrea Rotnitzky is Professor, Department of Economics, Universidad Torcuato Di Tella & CONICET, Av. Figueroa Alcorta 7350, Sáenz Valiente 1010, Buenos Aires, Argentina .' author: - - title: 'Causal Etiology of the Research of James M. Robins' --- Many readers may be unfamiliar with Robins’ singular career trajectory and in particular how his early practical experience motivated many of the inferential problems with which he was subsequently involved. Robins majored in mathematics at Harvard College, but then, in the spirit of the times, left college to pursue more activist social and political goals. Several years later, Robins enrolled in Medical School at Washington University in St. Louis, graduating in 1976. His M.D. degree remains his only degree, other than his high school diploma. After graduating, he interned in medicine at Harlem Hospital in New York. After completing the internship, Robins spent a year working as a primary care physician in a community clinic in the Roxbury neighborhood of Boston. During that year, he helped organize a vertical Service Employees International Union affiliate that included all salaried personnel, from maintenance to physicians, working at the health center. 
In retaliation, he was dismissed by the director of the clinic and found that he was somewhat unwelcome at the other Boston community clinics. Unable to find a job and with his unemployment insurance running out, he surprisingly was able to obtain a prestigious residency in Internal Medicine at Yale University, a testament, he says with some irony, to the enduring strength of one’s Ivy League connections. At Yale, Robins and his college friend Mark Cullen, now head of General Medicine at Stanford Medical School, founded an occupational health clinic, with the goal of working with trade unions in promoting occupational health and safety. When testifying in workers’ compensation cases, Robins was regularly asked whether it was “more probable than not that a worker’s death or illness was *caused* by exposure to chemicals in the workplace.” Robins’ lifelong interest in causal inference began with his need to provide an answer. As the relevant scientific papers consisted of epidemiologic studies and biostatistical analyses, Robins enrolled in biostatistics and epidemiology classes at Yale. He was dismayed to learn that the one question he needed to answer was the one question excluded from formal discussion in the mainstream biostatistical literature.[^1] At the time, most biostatisticians insisted that evidence for causation could only be obtained through randomized controlled trials; since, for ethical reasons, potentially harmful chemicals could not be randomly assigned, it followed that statistics could play little role in disentangling causation from spurious correlation. Confounding =========== In his classes, Robins was struck by the gap present between the informal, yet insightful, language of epidemiologists such as @miettinen:1981 ([-@miettinen:1981]) expressed in terms of “confounding, comparability, and bias,” and the technical language of mathematical statistics in which these terms either did not have analogs or had other meanings. 
Robins’ first major paper “The foundations of confounding in Epidemiology” written in 1982, though only published in [-@robins:foundation:1987], was an attempt to bridge this gap. As one example, he offered a precise mathematical definition for the informal epidemiologic concept of a “confounding variable” that has apparently stood the test of time (see @vanderweele2013, [-@vanderweele2013]). As a second example, @efron:hinkley:78 ([-@efron:hinkley:78]) had formally considered inference accurate to order $n^{-3/2}$ in variance conditional on exact or approximate ancillary statistics. Robins showed, surprisingly, that long before their paper, epidemiologists had been intuitively and informally referring to an estimator as “unbiased” just when it was asymptotically unbiased conditional on either exact or approximate ancillary statistics; furthermore, they intuitively required that the associated conditional Wald confidence interval be accurate to $O(n^{-3/2})$ in variance. As a third example, he solved the problem of constructing the tightest Wald-type intervals guaranteed to have conservative coverage for the average causal effect among the $n$ study subjects participating in a completely randomized experiment with a binary response variable; he showed that this interval can be strictly narrower than the usual binomial interval even under the Neyman null hypothesis of no average causal effect. To do so, he constructed an estimator of the variance of the empirical difference in treatment means that improved on a variance estimator earlier proposed by @neyman:sur:1923 ([-@neyman:sur:1923]). @aronow2014 ([-@aronow2014]) have recently generalized this result in several directions including to nonbinary responses. 
Time-dependent Confounding and the g-formula {#sec:time-dependent} =========================================== It was also in 1982 that Robins turned his attention to the subject that would become his grail: causal inference from complex longitudinal data with time-varying treatments, that eventually culminated in his revolutionary papers @robins:1986 ([-@robins:1986; -@robins:1987:addendum]). His interest in this topic was sparked by (i) a paper of @gilbert:1982 ([-@gilbert:1982])[^2] on the healthy worker survivor effect in occupational epidemiology, wherein the author raised a number of questions Robins answered in these papers and (ii) his medical experience of trying to optimally adjust a patient’s treatments in response to the evolution of the patient’s clinical and laboratory data. Overview -------- Robins’ career from this point on became a “quest” to solve this problem, and thereby provide methods that would address central epidemiological questions, for example, *is a given long-term exposure harmful or a treatment beneficial?* *If beneficial, what interventions, that is, treatment strategies, are optimal or near optimal?* In the process, Robins created a “bestiary” of causal models and analytic methods.[^3] There are the basic “phyla” consisting of the g-formula, marginal structural models and structural nested models. These phyla then contain “species,” for example, structural nested failure time models, structural nested distribution models, structural nested (multiplicative, additive and logistic) mean models and yet further “subspecies”: direct-effect structural nested models and optimal-regime structural nested models. Each subsequent model in this taxonomy was developed to help answer particular causal questions in specific contexts that the “older siblings” were not quite up to. 
Thus, for example, Robins’ creation of structural nested and marginal structural models was driven by the so-called null paradox, which could lead to falsely finding a treatment effect where none existed, and was a serious nonrobustness of the estimated g-formula, his then current methodology. Similarly, his research on higher-order influence function estimators was motivated by a concern that, in the presence of confounding by continuous, high dimensional confounders, even doubly robust methods might fail to adequately control for confounding bias. This variety also reflects Robins’ belief that the best analytic approach varies with the causal question to be answered, and, even more importantly, that confidence in one’s substantive findings only comes when multiple, nearly orthogonal, modeling strategies lead to the same conclusion. Causally Interpreted Structured Tree Graphs {#sec:tree-graph} ------------------------------------------- Suppose one wishes to estimate from longitudinal data the causal effect of time-varying treatment or exposure, say cigarette smoking, on a failure time outcome such as all-cause mortality. In this setting, a time-dependent confounder is a time-varying covariate (e.g., presence of emphysema) that is a predictor of both future exposure and of failure. In 1982, the standard analytic approach was to model the conditional probability (i.e., the hazard) of failure time $t$ as a function of past exposure history using a time-dependent Cox proportional hazards model. Robins formally showed that, even when confounding by unmeasured factors and model specification are absent, this approach may result in estimates of effect that may fail to have a causal interpretation, regardless of whether or not one also adjusts for the measured time-dependent confounders in the analysis. 
In fact, if previous exposure also predicts the subsequent evolution of the time-dependent confounders (e.g., since smoking is a cause of emphysema, it predicts this disease) then the standard approach can find an artifactual exposure effect even under the sharp null hypothesis of no net, direct or indirect effect of exposure on the failure time of any subject. Prior to @robins:1986 ([-@robins:1986]), although informal discussions of net, direct and indirect (i.e., mediated) effects of time varying exposures were to be found in the discussion sections of most epidemiologic papers, no formal mathematical definitions existed. To address this, @robins:1986 ([-@robins:1986]) introduced a new counterfactual model, the *finest fully randomized causally interpreted structured tree graph* (FFRCISTG)[^4] model that extended the point treatment counterfactual model of @neyman:sur:1923 ([-@neyman:sur:1923]) and @rubin:estimating:1974 ([-@rubin:estimating:1974; -@Rubi:baye:1978])[^5] to longitudinal studies with time-varying treatments, direct and indirect effects and feedback of one cause on another. Due to his lack of formal statistical training, the notation and formalisms in @robins:1986 ([-@robins:1986]) differ from those found in the mainstream literature; as a consequence the paper can be a difficult read.[^6] @richardson:robins:2013 ([-@richardson:robins:2013], Appendix C) present the FFRCISTG model using a more familiar notation.[^7] ![Causal tree graph depicting a simple scenario with treatments at two times $A_1$, $A_2$, a response $L$ measured prior to $A_2$, and a final response $Y$. Blue circles indicate evolution of the process determined by Nature; red dots indicate potential treatment choices.[]{data-label="fig:event-tree"}](505f01.eps) We illustrate the basic ideas using a simplified example. Suppose that we obtain data from an observational or randomized study in which $n$ patients are treated at two times. Let $A_{1}$ and $A_{2}$ denote the treatments. 
Let $L$ be a measurement taken just prior to the second treatment and let $Y$ be a final outcome, higher values of which are desirable. To simplify matters, for now we will suppose that all of the treatments and responses are binary. As a concrete example, consider a study of HIV infected subjects with $(A_{1},L,A_{2},Y)$, respectively, being binary indicators of anti-retroviral treatment at time $1$, high CD4 count just before time $2$, anti-retroviral therapy at time $2$, and survival at time $3$ (where for simplicity we assume no deaths prior to assignment of $A_2$). There are $2^{4}=16$ possible observed data sequences for $(A_{1},L,A_{2},Y)$; these may be depicted as an event tree as in Figure \[fig:event-tree\].[^8] @robins:1986 ([-@robins:1986]) referred to such event trees as “structured tree graphs.” We wish to assess the effect of the two treatments $(a_1, a_2)$ on $Y$. In more detail, for a given subject we suppose the existence of four potential outcomes $Y(a_{1},a_{2})$ for $a_{1},a_{2}\in\{0,1\}$, which are the outcomes a patient would have if (possibly counter-to-fact) they were to receive the treatments $a_{1}$ and $a_{2}$. Then $E[Y(a_{1},a_{2})]$ is the mean outcome (e.g., the survival probability) if everyone in the population were to receive the specified level of the two treatments. The particular instance of this regime under which everyone is treated at both times, so $a_{1}=a_{2}=1$, is depicted in Figure \[fig:event-tree-reg\](a). We are interested in estimation of these four means since the regime $(a_{1},a_{2})$ that maximizes $E[Y(a_{1},a_{2})]$ is the regime a new patient exchangeable with the $n$ study subjects should follow. There are two extreme scenarios: If, in an observational study, the treatments are assigned, for example, by doctors based on additional unmeasured predictors $U$ of $Y$, then $E[Y(a_{1},a_{2})]$ is not identified since those receiving $(a_{1},a_{2})$ within the study are not representative of the population as a whole.
At the other extreme, if the data come from a completely randomized clinical trial (RCT) in which treatment is assigned independently at each time by the flip of a coin, then it is simple to see that the counterfactual $Y( a_{1},a_{2}) $ is independent of the treatments $ (A_{1},A_{2} ) $ and that the average potential outcomes are identified since those receiving $(a_{1},a_{2})$ in the study are a simple random sample of the whole population. Thus, $$\begin{aligned} Y( a_{1},a_{2}) &{\protect\mathpalette{\protect\independenT}{\perp}}& \{ A_{1},A_{2} \} , \label{eq:full-rand} \\ E\bigl[Y(a_{1},a_{2})\bigr] &=& E[ Y\mid A_{1} = a_{1},A_{2} = a_{2}], \label{eq:asscaus}\end{aligned}$$ where the right-hand side of (\[eq:asscaus\]) is a function of the observed data distribution. In a completely randomized experiment, association is causation: the associational quantity on the right-hand side of (\[eq:asscaus\]) equals the causal quantity on the left-hand side. Robins, however, considered an intermediate trial design in which both treatments are randomized, but the probability of receiving $A_{2}$ is dependent on both the treatment received initially ($A_{1}$) and the observed response ($L$); a scenario now termed a *sequential randomized trial*. Robins viewed his analysis as also applicable to observational data as follows. In an observational study, the role of an epidemiologist is to use subject matter knowledge to try to collect in $L$ sufficient data to eliminate confounding by unmeasured factors, and thus to have the study mimic a sequential RCT.
If successful, the only difference between an actual sequential randomized trial and an observational study is that in the former the randomization probabilities $\Pr(A_{2}=1 \mid L,A_{1})$ are known by design while in the latter they must be estimated from the data.[^9] Robins viewed the sequential randomized trial as a collection of five trials in total: the original trial at $t=1$, plus a set of four randomized trials at $t=2$ nested within the original trial.[^10] Let the counterfactual $L( a_{1}) $ be the outcome $L$ when $A_{1}$ is set to $a_{1}$. Since the counterfactuals $Y(a_{1},a_{2})$ and $L( a_{1}) $ do not depend on the actual treatment received, they can be viewed, like a subject’s genetic make-up, as a fixed (possibly unobserved) characteristic of a subject and therefore independent of the randomly assigned treatment conditional on pre-randomization covariates. That is, for each $(a_{1},a_{2})$ and $l$: $$\begin{aligned} \bigl\{ Y(a_{1},a_{2}),L(a_{1}) \bigr\} & {\protect\mathpalette{\protect\independenT}{\perp}}& A_{1}, \label{eq:ind1} \\ Y(a_{1},a_{2})& {\protect\mathpalette{\protect\independenT}{\perp}}& A_{2} \mid A_{1} = a_{1},\quad L = l. \label{eq:ind2}\end{aligned}$$ These independences suffice to identify the joint density $f_{Y(a_{1},a_{2}),L(a_{1})}(y,l)$ of $(Y(a_{1},a_{2}), L(a_{1}))$ from the distribution of the factual variables by the “g-computation algorithm formula” (or simply *g-formula*) density $$f_{a_{1},a_{2}}^{\ast}(y,l)\equiv f(y \mid a_{1},l,a_{2})f(l \mid a_{1})$$ provided the conditional probabilities on the right-hand side are well-defined (@robins:1986, [-@robins:1986], page 1423). Note that $f_{a_{1},a_{2}}^{\ast}(y,l)$ is obtained from the joint density of the factuals by removing the treatment terms $f(a_{2} \mid a_{1},l)f(a_{1})$. This is in line with the intuition that $A_{1}$ and $A_{2}$ cease to be random since, under the regime, they are set by intervention to constants $a_{1}$ and $a_{2}$.
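To make the identification concrete, the following minimal sketch (all probabilities invented for illustration) enumerates a four-variable sequentially randomized trial in which a binary $H$ is an unmeasured common cause of $L$ and $Y$, and checks that the g-formula, computed from the factual conditionals alone, reproduces the interventional mean $E[Y(a_{1},a_{2})]$:

```python
from itertools import product

# Hypothetical sequentially randomized trial (all numbers invented): H is an
# unmeasured common cause of L and Y; A1 is randomized; A2 is randomized given
# only the observed past (A1, L).
def bern(p, v): return p if v == 1 else 1 - p
pl  = lambda a1, h: 0.2 + 0.3*a1 + 0.3*h                           # P(L=1 | a1, h)
pa2 = lambda a1, l: 0.2 + 0.2*a1 + 0.4*l                           # P(A2=1 | a1, l)
py  = lambda h, a1, l, a2: 0.1 + 0.2*a1 + 0.2*a2 + 0.2*l + 0.2*h  # P(Y=1 | ...)

# Observed joint over (a1, l, a2, y): the analyst never sees H.
J = {k: sum(0.5 * 0.5 * bern(pl(k[0], h), k[1]) * bern(pa2(k[0], k[1]), k[2])
            * bern(py(h, k[0], k[1], k[2]), k[3]) for h in (0, 1))
     for k in product((0, 1), repeat=4)}

def fy(a1, l, a2):   # factual conditional f(Y=1 | a1, l, a2)
    return J[(a1, l, a2, 1)] / (J[(a1, l, a2, 0)] + J[(a1, l, a2, 1)])

def fl(l, a1):       # factual conditional f(l | a1)
    m = [sum(J[(a1, lp, a2, y)] for a2 in (0, 1) for y in (0, 1)) for lp in (0, 1)]
    return m[l] / sum(m)

def g_formula(a1, a2):   # E*[Y] = sum_l f(Y=1 | a1, l, a2) f(l | a1)
    return sum(fy(a1, l, a2) * fl(l, a1) for l in (0, 1))

def true_mean(a1, a2):   # E[Y(a1, a2)] by intervening in the structural model
    return sum(0.5 * bern(pl(a1, h), l) * py(h, a1, l, a2)
               for h in (0, 1) for l in (0, 1))

for a1, a2 in product((0, 1), (0, 1)):
    assert abs(g_formula(a1, a2) - true_mean(a1, a2)) < 1e-9
```

The check works because $A_{2}$ is assigned using only the observed $(A_{1},L)$, so the posterior of $H$ given $(a_{1},l,a_{2})$ does not depend on $a_{2}$, which is exactly what makes the sum over $l$ collapse to the interventional mean.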
The g-formula was later referred to as the “manipulated density” by @cps93 ([-@cps93]) and the “truncated factorization” by @pearl:2000 ([-@pearl:2000]). @robins:1987:addendum ([-@robins:1987:addendum]) showed that under the weaker condition that replaces (\[eq:ind1\]) and (\[eq:ind2\]) with $$\begin{aligned} \label{eq:statrand} Y(a_{1},a_{2}) &{\protect\mathpalette{\protect\independenT}{\perp}}& A_{1} \quad\hbox{and} \nonumber \\[-8pt] \\[-8pt] Y(a_{1},a_{2}) &{\protect\mathpalette{\protect\independenT}{\perp}}& A_{2} \mid A_{1} = a_{1}, \quad L = l, \nonumber\end{aligned}$$ the marginal density of $Y(a_{1},a_{2})$ is still identified by $$\label{eq:g-formula-for-y} f_{a_{1},a_{2}}^{\ast}(y)=\sum _{l}f(y \mid a_{1},l,a_{2})f(l \mid a_{1}),$$ the marginal under $f_{a_{1},a_{2}}^{\ast}(y,l)$.[^11] Robins called (\[eq:statrand\]) *randomization w.r.t. $Y$*.[^12] Furthermore, he provided substantive examples of observational studies in which only the weaker assumption would be expected to hold. It is much easier to describe these studies using graphical representations of causal systems, namely Directed Acyclic Graphs and Single World Intervention Graphs, neither of which existed when ([-@robins:1987:addendum]) was written. ![image](505f02.eps) Causal DAGs and Single World Intervention Graphs (SWIGs) {#sec:dags} -------------------------------------------------------- Causal DAGs were first introduced in the seminal work of @cps93 ([-@cps93]); the theory was subsequently developed and extended by @pearl:biom ([-@pearl:biom; -@pearl:2000]) among others. A causal DAG with random variables $V_{1},\ldots,V_{M}$ as nodes is a graph in which (1) the lack of an arrow from node $V_{j}$ to $V_{m}$ can be interpreted as the absence of a direct causal effect of $V_{j}$ on $V_{m}$ (relative to the other variables on the graph), (2) all common causes, even if unmeasured, of any pair of variables on the graph are themselves on the graph, and (3) the Causal Markov Assumption (CMA) holds.
The CMA links the causal structure represented by the Directed Acyclic Graph (DAG) to the statistical data obtained in a study. It states that the distribution of the factual variables factors according to the DAG. A distribution factors according to the DAG if nondescendants of a given variable $V_{j}$ are independent of $V_{j}$ conditional on $\mathrm{pa}_{j}$, the parents of $V_{j}$. The CMA is mathematically equivalent to the statement that the density $f(v_{1},\ldots,v_{M})$ of the variables on the causal DAG $\mathcal{G}$ satisfies the Markov factorization $$\label{eq:dag-factor} f(v_{1},\ldots,v_{M})=\prod _{j=1}^{M}f(v_{j}\mid\mathrm{pa}_{j}).$$ A graphical criterion, called d-separation ([-@pearl:1988]), characterizes all the marginal and conditional independences that hold in every distribution obeying the Markov factorization (\[eq:dag-factor\]). Causal DAGs may also be used to represent the joint distribution of the observed data under the counterfactual FFRCISTG model of @robins:1986 ([-@robins:1986]). This follows because an FFRCISTG model over the variables $\{V_{1},\ldots,V_{M}\}$ induces a distribution that factors as (\[eq:dag-factor\]). Figure \[fig:seq-rand\](a) shows a causal DAG corresponding to the sequentially randomized experiment described above: vertex $H$ represents an unmeasured common cause (e.g., immune function) of CD4 count $L$ and survival $Y$. Randomization of treatment implies $A_{1}$ has no parents and $A_{2}$ has only the observed variables $A_{1}$ and $L$ as parents. ![image](505f03.eps) Single-World Intervention Graphs (SWIGs), introduced in ([-@richardson:robins:2013]), provide a simple way to derive the counterfactual independence relations implied by an FFRCISTG model. SWIGs were designed to unify the graphical and potential outcome approaches to causality. The nodes on a SWIG are the counterfactual random variables associated with a specific hypothetical intervention on the treatment variables.
The SWIG in Figure \[fig:seq-rand\](b) is derived from the causal DAG in Figure \[fig:seq-rand\](a) corresponding to a sequentially randomized experiment. The SWIG represents the counterfactual world in which $A_{1}$ and $A_{2}$ have been set to $(a_{1},a_{2})$, respectively. @richardson:robins:2013 ([-@richardson:robins:2013]) show that under the (naturally associated) FFRCISTG model the distribution of the counterfactual variables on the SWIG factors according to the graph. Applying Pearl’s d-separation criterion to the SWIG we obtain the independences (\[eq:ind1\]) and (\[eq:ind2\]).[^13] @robins:1987:addendum ([-@robins:1987:addendum]) in one of the aforementioned substantive examples described an observational study of the effect of formaldehyde exposure on the mortality of rubber workers which can be represented by the causal graph in Figure \[fig:seq-rand-variant\](a). (This graph cannot represent a sequential RCT because the treatment variable $A_{1}$ and the response $L$ have an unmeasured common cause.) Follow-up begins at time of hire, time $1$ on the graph. The vertices $H_{1}$, $A_{1}$, $H_{2}$, $L_{2}$, $A_{2}$, $Y$ are indicators of sensitivity to eye irritants, formaldehyde exposure at time $1$, lung cancer, current employment, formaldehyde exposure at time $2$ and survival. Data on eye-sensitivity and lung cancer were not collected. Formaldehyde is a known eye-irritant. The presence of an arrow from $H_{1}$ to $A_{1}$ but not from $H_{1}$ to $A_{2}$ reflects the fact that subjects who believe their eyes to be sensitive to formaldehyde are given the discretion to choose a job without formaldehyde exposure at time of hire but not later. The arrow from $H_{1}$ to $L$ reflects the fact that eye sensitivity causes some subjects to leave employment. The arrows from $H_{2}$ to $L_{2}$ and $Y$ reflect the fact that lung cancer causes both death and loss of employment.
The fact that $H_{1}$ and $H_{2}$ are independent reflects the fact that eye sensitivity is unrelated to the risk of lung cancer. From the SWIG in Figure \[fig:seq-rand-variant\](b), we can see that (\[eq:statrand\]) holds so we have randomization with respect to $Y$ but $L( a_{1}) $ is not independent of $A_{1}$. It follows that the g-formula $f_{a_{1},a_{2}}^{\ast}(y)$ equals the density of $Y(a_{1},a_{2})$ even though (i) the distribution of $L( a_{1}) $ is not identified and (ii) neither of the individual terms $f(l \mid a_{1})$ and $f(y \mid a_{1},l,a_{2})$ occurring in the g-formula has a causal interpretation.[^14] Subsequently, @tian02general ([-@tian02general]) developed a graphical algorithm for nonparametric identification that is “complete” in the sense that if the algorithm fails to derive an identifying formula, then the causal quantity is not identified (@shpitser06id, [-@shpitser06id]; @huang06do, [-@huang06do]). This algorithm strictly extends the set of causal identification results obtained by Robins for static regimes. Dynamic Regimes {#sec:dynamic-regimes} --------------- The “g” in “g-formula” and elsewhere in Robins’ work refers to generalized treatment regimes $g$. The set $\mathbb{G}$ of all such regimes includes *dynamic* regimes in which a subject’s treatment at time $2$ depends on the response $L$ to the treatment at time $1$. An example of a dynamic regime is the regime in which all subjects receive anti-retroviral treatment at time $1$, but continue to receive treatment at time $2$ only if their CD4 count at time $2$ is low, indicating that they have not yet responded to anti-retroviral treatment. In our study with no baseline covariates and $A_{1}$ and $A_{2}$ binary, a dynamic regime $g$ can be written as $g= (a_{1},g_{2}( l) ) $ where the function $g_{2}(l)$ specifies the treatment to be given at time $2$. The dynamic regime above has $(a_{1} = 1,g_{2}(l) = 1-l)$ and is highlighted in Figure \[fig:event-tree-reg\]. 
If $L$ is binary, then $\mathbb{G}$ consists of $8$ regimes comprising the $4$ earlier static regimes $ (a_{1},a_{2} ) $ and $4$ *dynamic* regimes. The *g-formula* density associated with a regime $g= ( a_{1},g_{2}(l) ) $ is $$f_{g}^{\ast}(y,l)\equiv f( l \mid a_{1})f\bigl(y \mid A_{1}=a_{1},L=l,A_{2}=g_{2}( l) \bigr).$$ Letting $Y(g)$ be a subject’s counterfactual outcome under regime $g$, @robins:1987:addendum ([-@robins:1987:addendum]) proves that if both of the following hold: $$\begin{aligned} \label{eq:indg} Y(g)& {\protect\mathpalette{\protect\independenT}{\perp}}& A_{1}, \nonumber \\[-8pt] \\[-8pt] Y(g)& {\protect\mathpalette{\protect\independenT}{\perp}}& A_{2} \mid A_{1} = a_{1}, \quad L = l \nonumber\end{aligned}$$ then $f_{Y(g)}(y)$ is identified by the g-formula density for $Y$: $$\begin{aligned} f_{g}^{\ast}(y) &=&\sum_{l}f_{g}^{\ast}(y,l) \\ &=&\sum_{l}f\bigl(y \mid A_{1}=a_{1},L=l,A_{2}=g_{2}(l) \bigr) \\ &&\hspace*{15pt}{} \cdot f(l \mid a_{1}).\end{aligned}$$ @robins:1987:addendum ([-@robins:1987:addendum]) refers to (\[eq:indg\]) as the assumption that regime $g$ *is randomized with respect to $Y$*. Given a causal DAG, Dynamic SWIGs (dSWIGs) can be used to check whether (\[eq:indg\]) holds. @tian:dynamic:2008 ([-@tian:dynamic:2008]) gives a complete graphical algorithm for identification of the effect of dynamic regimes based on DAGs. ![image](505f04.eps) Independences (\[eq:ind1\]) and (\[eq:ind2\]) imply that (\[eq:indg\]) is true for all $g\in\mathbb{G}$. For a drug treatment, for which, say, higher outcome values are better, the optimal regime $g_{\mathrm{opt}}$ maximizing $E[ Y(g)] $ over $g\in\mathbb{G}$ is almost always a dynamic regime, as treatment must be discontinued when toxicity, a component of $L$, develops.
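The same kind of enumeration verifies the g-formula for a dynamic regime. The sketch below (all probabilities invented for illustration) uses the regime $g=(a_{1}=1, g_{2}(l)=1-l)$, i.e., treat at time $1$ and treat at time $2$ only if CD4 count is low, and checks that $f_{g}^{\ast}$ recovers $E[Y(g)]$ despite the unmeasured common cause $H$ of $L$ and $Y$:

```python
from itertools import product

# Hypothetical structural model (all numbers invented for illustration).
def bern(p, v): return p if v == 1 else 1 - p
pl  = lambda a1, h: 0.2 + 0.3*a1 + 0.3*h                           # P(L=1 | a1, h)
pa2 = lambda a1, l: 0.2 + 0.2*a1 + 0.4*l                           # P(A2=1 | a1, l)
py  = lambda h, a1, l, a2: 0.1 + 0.2*a1 + 0.2*a2 + 0.2*l + 0.2*h  # P(Y=1 | ...)

# Observed joint over (a1, l, a2, y), with H marginalized out (P(H=1)=P(A1=1)=0.5).
J = {k: sum(0.5 * 0.5 * bern(pl(k[0], h), k[1]) * bern(pa2(k[0], k[1]), k[2])
            * bern(py(h, k[0], k[1], k[2]), k[3]) for h in (0, 1))
     for k in product((0, 1), repeat=4)}

def fy(a1, l, a2):   # f(Y=1 | a1, l, a2)
    return J[(a1, l, a2, 1)] / (J[(a1, l, a2, 0)] + J[(a1, l, a2, 1)])

def fl(l, a1):       # f(l | a1)
    m = [sum(J[(a1, lp, a2, y)] for a2 in (0, 1) for y in (0, 1)) for lp in (0, 1)]
    return m[l] / sum(m)

g2 = lambda l: 1 - l   # dynamic rule: treat at time 2 only if CD4 is low

# g-formula for the regime g = (1, g2) versus direct intervention in the model.
gform = sum(fl(l, 1) * fy(1, l, g2(l)) for l in (0, 1))
truth = sum(0.5 * bern(pl(1, h), l) * py(h, 1, l, g2(l))
            for h in (0, 1) for l in (0, 1))
assert abs(gform - truth) < 1e-9
```

The only change relative to the static case is that the second-stage treatment inside the sum over $l$ is evaluated at $g_{2}(l)$ rather than at a fixed $a_{2}$.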
@robins:iv:1989 ([-@robins:iv:1989; -@robins:1986], page 1423) used the g-notation $f(y \mid g)$ as a shorthand for $f_{Y(g)}(y)$ in order to emphasize that this was the density of $Y$ *had intervention $g$ been applied to the population*. In the special case of static regimes $ ( a_{1},a_{2} ) $, he wrote $f(y \mid g=(a_{1},a_{2}))$.[^15] Statistical Limitations of the Estimated g-Formulae --------------------------------------------------- Consider a sequentially randomized experiment. In this context, randomization probabilities $f(a_{1})$ and $f(a_{2} \mid a_{1},l)$ are known by design; however, the densities $f(y \mid a_{1},a_{2},l)$ and $f(l \mid a_{1})$ are not known and, therefore, they must be replaced by estimates $\widehat{f}(y \mid a_{1},a_{2},l_{2})$ and $\widehat{f}(l \mid a_{1})$ in the g-formula. If the sample size is moderate and $l$ is high dimensional, these estimates must come from fitting dimension-reducing models. Model misspecification will then lead to biased estimators of the mean of $Y(a_{1},a_{2})$. @robins:1986 ([-@robins:1986]) and @robins97estimation ([-@robins97estimation]) described a serious nonrobustness of the g-formula: the so-called “null paradox”: In biomedical trials, it is frequently of interest to consider the possibility that the sharp causal null hypothesis of no effect of either $A_{1}$ or $A_{2}$ on $Y$ holds. 
Under this null, the causal DAG generating the data is as in Figure \[fig:seq-rand\] except without the arrows from $A_{1}$, $A_{2}$ and $L$ into $Y$.[^16] Then, under this null, although $f_{a_{1},a_{2}}^{\ast}(y)=\sum_{l}f(y \mid a_{1},l,a_{2})f(l \mid a_{1})$ does not depend on $ ( a_{1},a_{2} )$, nonetheless both $f(y \mid a_{1},l,a_{2})$ and $f(l \mid a_{1})$ will, in general, depend on $a_{1}$ (as may be seen via d-connection).[^17] In general, if $L$ has discrete components, it is not possible for standard nonsaturated parametric models (e.g., logistic regression models) for both $f(y\mid a_{1},a_{2},l_{2})$ and $f(l_{2} \mid a_{1})$ to be correctly specified, and thus depend on $a_{1}$ and yet for $f_{a_{1},a_{2}}^{\ast}(y)$ not to depend on $a_{1}$.[^18] As a consequence, inference based on the estimated g-formula must result in the sharp null hypothesis being falsely rejected with probability going to $1$, as the trial size increases, even when it is true. Structural Nested Models {#sec:snm} ------------------------ To overcome the null paradox, @robins:iv:1989 ([-@robins:iv:1989]) and @robins:pneumocystis:carinii:1992 ([-@robins:pneumocystis:carinii:1992]) introduced the semiparametric structural nested distribution model (SNDMs) for continuous outcomes $Y$ and structural nested failure time models (SNFTMs) for time to event outcomes. See @robins:snftm ([-@robins:longdata; -@robins:snftm]) for additional details. @robins:1986 ([-@robins:1986], Section 6) defined the *$g$-null hypothesis* as $$\begin{aligned} \label{g-null} && H_{0}\dvtx \mbox{the distribution of }Y(g) \nonumber \\[-8pt] \\[-8pt] && \hspace*{20pt} \mbox{is the same for all }g\in\mathbb{G}. \nonumber\end{aligned}$$ This hypothesis is implied by the sharp null hypothesis of no effect of $A_{1}$ or $A_{2}$ on any subject’s $Y$. 
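The mechanism behind the null paradox can be seen numerically. In the sketch below (a hypothetical discrete distribution, all numbers invented), $Y$ depends only on the unmeasured $H$, so the sharp null holds; nevertheless both g-formula ingredients $f(y \mid a_{1},l,a_{2})$ and $f(l \mid a_{1})$ vary with $a_{1}$, and only their sum over $l$ is constant. A nonsaturated parametric model for the two ingredients will typically fail to respect this exact cancellation:

```python
from itertools import product

# Hypothetical distribution under the sharp null: Y depends only on the
# unmeasured H; L depends on (A1, H); A2 depends on (A1, L). Numbers invented.
def bern(p, v): return p if v == 1 else 1 - p
pl  = lambda a1, h: 0.2 + 0.3*a1 + 0.4*h   # P(L=1 | a1, h)
pa2 = lambda a1, l: 0.2 + 0.2*a1 + 0.4*l   # P(A2=1 | a1, l)
py  = lambda h: 0.2 + 0.6*h                # P(Y=1 | h): no effect of A1, L, A2

J = {k: sum(0.5 * 0.5 * bern(pl(k[0], h), k[1]) * bern(pa2(k[0], k[1]), k[2])
            * bern(py(h), k[3]) for h in (0, 1))
     for k in product((0, 1), repeat=4)}   # observed joint over (a1, l, a2, y)

def fy(a1, l, a2):
    return J[(a1, l, a2, 1)] / (J[(a1, l, a2, 0)] + J[(a1, l, a2, 1)])

def fl(l, a1):
    m = [sum(J[(a1, lp, a2, y)] for a2 in (0, 1) for y in (0, 1)) for lp in (0, 1)]
    return m[l] / sum(m)

def gform(a1, a2):
    return sum(fl(l, a1) * fy(a1, l, a2) for l in (0, 1))

# Both ingredients of the g-formula depend on a1 ...
assert abs(fy(0, 1, 0) - fy(1, 1, 0)) > 0.01
assert abs(fl(1, 0) - fl(1, 1)) > 0.01
# ... yet the g-formula marginal is constant, as the sharp null requires.
for a1, a2 in product((0, 1), (0, 1)):
    assert abs(gform(a1, a2) - 0.5) < 1e-9
```

Here the constant value $0.5$ is just $E[Y]=\sum_h P(h)\,P(Y=1\mid h)$ in this invented model; misspecified models for the two factors generally make the estimated $f^{\ast}_{a_1,a_2}(y)$ depend spuriously on $(a_{1},a_{2})$.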
If (\[eq:indg\]) holds for all $g\in\mathbb{G}$, then the $g$-null hypothesis is equivalent to any one of the following assertions: (i) $f_{g}^{\ast}(y)$ equals the factual density $f( y) $ for all $g\in\mathbb{G}$; (ii) $Y{\protect\mathpalette{\protect\independenT}{\perp}}A_1$ and $Y{\protect\mathpalette{\protect\independenT}{\perp}}A_2 \mid L,A_1$; (iii) $f_{a_{1},a_{2}}^{\ast}(y)$ does not depend on $ (a_{1},a_{2} ) $ and $Y{\protect\mathpalette{\protect\independenT}{\perp}}A_{2} \mid L,A_1$; see @robins:1986 ([-@robins:1986], Section 6). In addition, any one of these assertions exhausts all restrictions on the observed data distribution implied by the sharp null hypothesis. Robins’ goal was to construct a causal model indexed by a parameter $\psi^{\ast}$ such that in a sequentially randomized trial (i) $\psi^{\ast}=0$ if and only if the $g$-null hypothesis (\[g-null\]) was true and (ii) if known, one could use the randomization probabilities to both construct an unbiased estimating function for $\psi^{\ast}$ and to construct tests of $\psi^{\ast}=0$ that were guaranteed (asymptotically) to reject under the null at the nominal level. The SNDMs and SNFTMs accomplish this goal for continuous and failure time outcomes $Y$. @robins:iv:1989 ([-@robins:iv:1989]) and @robins:cis:correcting:1994 ([-@robins:cis:correcting:1994]) also constructed additive and multiplicative structural nested mean models (SNMMs) which satisfied the above properties except with the hypothesis replaced by the *$g$-null mean hypothesis*: $$\label{g-null-mean} H_{0}\dvtx E\bigl[Y(g)\bigr]=E[Y]\quad\mbox{for all }g \in\mathbb{G}.$$ As an example, we consider an additive structural nested mean model.
Define $$\begin{aligned} && \gamma(a_{1},l,a_{2}) \\ &&\quad= E\bigl[ Y(a_{1},a_{2})-Y(a_{1},0) \mid L=l, A_{1} = a_{1}, \\ &&\hspace*{168pt} A_{2} = a_{2}\bigr]\end{aligned}$$ and $$\gamma(a_{1})=E\bigl[ Y(a_{1},0)-Y(0,0) \mid A_{1} = a_{1}\bigr] .$$ Note $\gamma(a_{1},l,a_{2})$ is the effect of the last blip of treatment $a_{2}$ at time $2$ among subjects with observed history $ (a_{1},l,a_{2} ) $, while $\gamma( a_{1})$ is the effect of the last blip of treatment $a_{1}$ at time $1$ among subjects with history $a_{1}$. An additive SNMM specifies parametric models $\gamma(a_{1},l,a_{2};\psi_{2})$ and $\gamma(a_{1};\psi_{1})$ for these blip functions with $\gamma(a_{1};0)=\gamma(a_{1},l,a_{2};0)=0$. Under the independence assumptions (\[eq:indg\]), $H_{2}(\psi_{2})\, d(L,A_1) \{ A_{2}-E[ A_{2} \mid L,A_{1}] \} $ and $H_{1}( \psi) \{ A_{1}-E[ A_{1}] \} $ are unbiased estimating functions for the true $\psi^{\ast}$, where $H_{2}( \psi_{2}) =Y-\gamma(A_{1},L,A_{2};\psi_{2})$, $H_{1}(\psi)=H_{2}( \psi_{2}) -\gamma(A_{1};\psi_{1})$, and $d(L,A_1)$ is a user-supplied function of the same dimension as $\psi_2$. Under the $g$-null mean hypothesis (\[g-null-mean\]), the SNMM is guaranteed to be correctly specified with $\psi^{\ast}=0$. Thus, these estimating functions, when evaluated at $\psi^{\ast}=0$, can be used in the construction of an asymptotically $\alpha$-level test of the $g$-null mean hypothesis when $f(a_{1})$ and $f(a_{2} \mid a_{1},l)$ are known (or are consistently estimated).[^19] When $L$ is a high-dimensional vector, the parametric blip models may well be misspecified when the $g$-null mean hypothesis is false. However, because the functions $\gamma(a_{1},l,a_{2})$ and $\gamma(a_{1})$ are nonparametrically identified under assumptions (\[eq:indg\]), one can construct consistent tests of the correctness of the blip models $\gamma(a_{1},l,a_{2};\psi_{2})$ and $\gamma(a_{1};\psi_{1})$.
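A minimal simulated sketch of g-estimation, under invented modeling choices: take $d(L,A_1)=1$ and constant blip models $\gamma(a_{1},l,a_{2};\psi_{2})=\psi_{2}a_{2}$ and $\gamma(a_{1};\psi_{1})=\psi_{1}a_{1}$, so both estimating equations are linear in $\psi$ and solve in closed form. The data-generating process (all numbers hypothetical) has an unmeasured prognostic factor $H$ that confounds $A_{2}$ through $L$, so a naive regression would be biased, while g-estimation with the known randomization probabilities is consistent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
psi1, psi2 = 1.0, -0.5            # true blip parameters (chosen for illustration)

H = rng.normal(size=n)            # unmeasured prognostic factor
A1 = rng.binomial(1, 0.5, n)      # randomized at time 1
L = (H + 0.5*A1 + rng.normal(size=n) > 0).astype(float)  # time-varying confounder
p2 = 0.2 + 0.2*A1 + 0.4*L         # known randomization probabilities P(A2=1 | A1, L)
A2 = rng.binomial(1, p2)
U = H + rng.normal(size=n)        # baseline counterfactual outcome Y(0, 0)
Y = U + psi1*A1 + psi2*A2         # constant blips make the models correct here

# g-estimation: solve Pn[(Y - psi2*A2)(A2 - p2)] = 0 for psi2, then
# Pn[(Y - psi2*A2 - psi1*A1)(A1 - 1/2)] = 0 for psi1 (both linear in psi).
psi2_hat = np.sum(Y*(A2 - p2)) / np.sum(A2*(A2 - p2))
H2 = Y - psi2_hat*A2
psi1_hat = np.sum(H2*(A1 - 0.5)) / np.sum(A1*(A1 - 0.5))

assert abs(psi2_hat - psi2) < 0.05 and abs(psi1_hat - psi1) < 0.05
```

The unbiasedness of the first equation rests only on $A_{2}$ being assigned from $(A_{1},L)$: conditional on $(A_{1},L)$, the residual $A_{2}-p_{2}$ is mean zero and independent of $Y-\psi_{2}^{\ast}A_{2}=U+\psi_{1}^{\ast}A_{1}$.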
Furthermore, one can also estimate the blip functions using cross-validation ([-@robins:optimal:2004]) and/or flexible machine learning methods in lieu of a prespecified parametric model ([-@van2011targeted]). A recent modification of a multiplicative SNMM, the structural nested cumulative failure time model, designed for censored time to event outcomes, has computational advantages compared to a SNFTM because, in contrast to a SNFTM, parameters are estimated using an unbiased estimating function that is differentiable in the model parameters; see @picciotto2012structural ([-@picciotto2012structural]). @robins:optimal:2004 ([-@robins:optimal:2004]) also introduced optimal-regime SNMMs drawing on the seminal work of @Murp:opti:2003 ([-@Murp:opti:2003]) on semiparametric methods for the estimation of optimal treatment strategies. Optimal-regime SNMM estimation, called A-learning in computer science, can be viewed as a semiparametric implementation of dynamic programming (@bellman:1957, [-@bellman:1957]).[^20] Optimal-regime SNMMs differ from standard SNMMs only in that $\gamma(a_{1})$ is redefined to be $$\begin{aligned} \gamma(a_{1}) &=& E\bigl[Y\bigl(a_{1},g_{2,\mathrm{opt}} \bigl(a_{1},L(a_{1})\bigr)\bigr) \\ &&\hspace*{11pt}{} - Y\bigl(0,g_{2,\mathrm{opt}}\bigl(0,L(0)\bigr)\bigr) \mid A_{1} =a_{1}\bigr],\end{aligned}$$ where $g_{2,\mathrm{opt}}(a_{1},l)=\arg\max_{a_{2}}\gamma(a_{1},l,a_{2})$ is the optimal treatment at time $2$ given past history $ ( a_{1},l ) $. The overall optimal treatment strategy $g_{\mathrm{opt}}$ is then $ (a_{1,\mathrm{opt}},g_{2,\mathrm{opt}}( a_{1},l) ) $ where $a_{1,\mathrm{opt}}=\arg\max_{a_{1}}\gamma(a_{1})$. More on the estimation of optimal treatment regimes can be found in @schulte:2014 ([-@schulte:2014]) in this volume.
Instrumental Variables and Bounds for the Average Treatment Effect ------------------------------------------------------------------ @robins:iv:1989 ([-@robins:iv:1989; -@robins1993analytic]) also noted that structural nested models can be used to estimate treatment effects when assumptions (\[eq:indg\]) do not hold but data are available on a time dependent instrumental variable. As an example, patients sometimes fail to fill their prescriptions and thus do not comply with their prescribed treatment. In that case, we can take $A_{j}=( A_{j}^{p},A_{j}^{d}) $ for each time $j$, where $A_{j}^{p}$ denotes the treatment *prescribed* and $A_{j}^{d}$ denotes the *dose* of treatment actually received at time $j$. Robins defined $A_{j}^{p}$ to be *an instrumental variable* if (\[eq:indg\]) still holds after replacing $A_{j}$ by $A_{j}^{p}$ and for all subjects $Y( a_{1},a_{2})$ depends on $a_{j}=( a_{j}^{p},a_{j}^{d}) $ only through the actual dose $a_{j}^{d}$. Robins noted that unlike the case of full compliance (i.e., $A_{j}^{p}=A_{j}^{d}$ with probability $1)$ discussed earlier, the treatment effect functions $\gamma$ are not nonparametrically identified. Consequently, identification can only be achieved by correctly specifying (sufficiently restrictive) parametric models for $\gamma$. If we are unwilling to rely on such parametric assumptions, then the observed data distribution only implies bounds for the $\gamma$’s. In particular, in the setting of a point treatment randomized trial with noncompliance and the instrument $A_{1}^{p}$ being the assigned treatment, @robins:iv:1989 ([-@robins:iv:1989]) obtained bounds on the average causal effect $E[Y(a_{d}=1)-Y(a_{d}=0)]$ of the received treatment $A_{d}$. To the best of our knowledge, this paper was the first to derive bounds for nonidentified causal effects defined through potential outcomes.[^21] The study of such bounds has become an active area of research. 
Other early papers include @manski:1990 ([-@manski:1990]) and @balke:pearl:1994 ([-@balke:pearl:1994]).[^22] See @richardson:hudgens:2014 ([-@richardson:hudgens:2014]) in this volume for a survey of recent research on bounds. Limitations of Structural Nested Models {#sec:limits-of-snms} --------------------------------------- @robins00marginal ([-@robins00marginal]) noted that there exist causal questions for which SNMs are not altogether satisfactory. As an example, for $Y$ binary, @robins00marginal ([-@robins00marginal]) proposed a structural nested logistic model in order to ensure estimates of the counterfactual mean of $Y$ were between zero and one. However, he noted that knowledge of the randomization probabilities did not allow one to construct an unbiased estimating function for its parameter $\psi^{\ast}$. More importantly, SNMs do not directly model the final object of public health interest—the distribution or mean of the outcome $Y$ as a function of the regimes $g$—as these distributions are generally functions not only of the parameters of the SNM but also of the conditional law of the time dependent covariates $L$ given the past history. In addition, SNMs constitute a rather large conceptual leap from standard associational regression models familiar to most statisticians. @Robi:marg:1997 ([-@Robi:marg:1997; -@robins00marginal]) introduced a new class of causal models, marginal structural models, that overcame these particular difficulties. Robins also pointed out that MSMs have their own shortcomings, which we discuss below. @robins00marginal ([-@robins00marginal]) concluded that the best causal model to use will vary with the causal question of interest. Dependent Censoring and Inverse Probability Weighting {#sec:censoring} ----------------------------------------------------- Marginal Structural Models grew out of Robins’ work on censoring and *inverse probability of censoring weighted* (IPCW) estimators.
Robins’ work on dependent censoring was motivated by the familiar clinical observation that patients who did not return to the clinic and were thus censored differed from other patients on important risk factors, for example, measures of cardio-pulmonary reserve. In the 1970s and 1980s, the analysis of right censored data was a major area of statistical research, driven by the introduction of the proportional hazards model ([-@cox:1972:jrssb]; [-@Kalb:Pren:stat:1980]) and by martingale methods for their analysis ([-@Aalen:counting:1978]; [-@andersen:borgan:gill:keiding:1992]; [-@Flem:Harr:coun:1991]). This research, however, was focused on independent censoring. An important insight in @robins:1986 ([-@robins:1986]) was the recognition that, by reframing the problem of censoring as a causal inference problem as we will now explain, it was possible to adjust for dependent censoring with the g-formula. @Rubi:baye:1978 ([-@Rubi:baye:1978]) had pointed out previously that counterfactual causal inference could be viewed as a missing data problem. @robins:1986 ([-@robins:1986], page 1491) recognized that the converse was indeed also true: a missing data problem could be viewed as a problem in counterfactual causal inference.[^23] Robins conceptualized right censoring as just another time dependent “treatment” $A_{t}$ and one’s inferential goal as the estimation of the outcome $Y$ under the static regime $g$ “never censored.” Inference based on the g-formula was then licensed provided that censoring was explainable in the sense that (\[eq:statrand\]) holds. This approach to dependent censoring subsumed independent censoring as the latter is a special case of the former. Robins, however, recognized once again that inference based on the estimated g-formula could be nonrobust. To overcome this difficulty, [@robins:rotnitzky:recovery:1992] introduced IPCW tests and estimators whose properties are easiest to explain in the context of a two-armed RCT of a single treatment ($A_{1}$).
The standard Intention-to-Treat (ITT) analysis for comparing the survival distributions in the two arms is a log-rank test. However, data are often collected on covariates, both pre- and post-randomization, that are predictive of the outcome as well as (possibly) of censoring. An ITT analysis that tries to adjust for dependent censoring by IPCW uses estimates of the arm-specific hazards of censoring as functions of past covariate history. The proposed IPCW tests have the following two advantages compared to the log-rank test. First, if censoring is dependent but explainable by the covariates, the log-rank test is not asymptotically valid. In contrast, IPCW tests asymptotically reject at their nominal level provided the arm-specific hazard estimators are consistent. Second, when censoring is independent, although both the IPCW tests and the log-rank test asymptotically reject at their nominal level, the IPCW tests, by making use of covariates, can be more powerful than the log-rank test even against proportional-hazards alternatives. Even under independent censoring, tests based on the estimated g-formula are not guaranteed to be asymptotically $\alpha$-level, and hence are not robust. To illustrate, we consider here an RCT with $A_{1}$ being the randomization indicator, $L$ a post-randomization covariate, $A_{2}$ the indicator of censoring and $Y$ the indicator of survival. For simplicity, we assume that any censoring occurs at time $2$ and that there are no failures prior to time $2$. The IPCW estimator $\widehat{\beta}$ of the ITT effect $\beta^{\ast}=E[ Y \mid A=1] -E[ Y \mid A=0] $ is defined as the solution to $$\mathbb{P}_{n}\bigl[ I ( A_{2}=0 ) U(\beta)/\widehat{ \Pr}(A_{2}=0 \mid L,A_{1})\bigr] =0, \hspace*{-25pt}$$ where $U(\beta)= ( Y-\beta A_{1} ) ( A_{1}-1/2 ) $; throughout, $\mathbb{P}_{n}$ denotes the empirical mean operator and $\widehat{\Pr}( A_{2}=0 \mid L,A_{1}) $ is an estimator of the arm-specific conditional probability of being uncensored.
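The following sketch simulates a version of this setup (with a continuous $Y$ rather than a survival indicator, and all data-generating numbers invented) in which censoring depends on a post-randomization covariate $L$ that carries information about the unmeasured prognosis $U$. Because $U(\beta)$ is linear in $\beta$, the IPCW estimating equation solves in closed form, and with the censoring probabilities known the estimator recovers the ITT effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta = 1.0                               # true ITT effect (illustrative)

U = rng.normal(size=n)                   # unmeasured prognostic factor
A1 = rng.binomial(1, 0.5, n)             # randomized treatment arm
L = (U + rng.normal(size=n) > 0).astype(float)   # post-randomization covariate
p_cens = 0.1 + 0.3*L + 0.1*A1            # P(A2=1 | L, A1): dependent censoring
A2 = rng.binomial(1, p_cens)
Y = U + beta*A1 + rng.normal(size=n)     # observed only when A2 == 0

# IPCW: solve Pn[ I(A2=0) (Y - beta*A1)(A1 - 1/2) / P(A2=0 | L, A1) ] = 0.
# Each uncensored subject is up-weighted by the inverse probability of
# remaining uncensored given (L, A1).
w = (A2 == 0) / (1 - p_cens)
beta_hat = np.sum(w*Y*(A1 - 0.5)) / np.sum(w*A1*(A1 - 0.5))
assert abs(beta_hat - beta) < 0.05
```

The key condition is that, given $(L,A_{1})$, censoring is independent of the outcome, so reweighting the uncensored restores the arm comparison that a complete-case analysis would distort.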
When first introduced in 1992, IPCW estimators, even when taking the form of simple Horvitz–Thompson estimators, were met with both surprise and suspicion as they violated the then widely held belief that one should never adjust for a post-randomization variable affected by treatment in an RCT. Marginal Structural Models {#sec:msm} -------------------------- @robins1993analytic ([-@robins1993analytic], Remark A1.3, pages 257–258) noted that, for any treatment regime $g$, if randomization w.r.t. $Y$, that is, (\[eq:indg\]), holds, $\Pr\{Y(g)>y\}$ can be estimated by IPCW if one defines a person’s censoring time as the first time he/she fails to take the treatment specified by the regime. In this setting, he referred to IPCW as *inverse probability of treatment weighted* (IPTW). In actual longitudinal data in which either (i) treatment $A_{k}$ is measured at many times $k$ or (ii) the $A_{k}$ are discrete with many levels or continuous, one often finds that few study subjects follow any particular regime. In response, @Robi:marg:1997 ([-@Robi:marg:1997; -@robins00marginal]) introduced MSMs. These models address the aforementioned difficulty by borrowing information across regimes. Additionally, MSMs represent another response to the $g$-null paradox, complementary to Structural Nested Models. To illustrate, suppose that in our example of Section \[sec:time-dependent\], $A_{1}$ and $A_{2}$ now have many levels. An instance of an MSM for the counterfactual means $E[ Y(a_{1},a_{2})]$ is a model that specifies that $$\Phi^{-1}\bigl\{E\bigl[Y(a_{1},a_{2})\bigr]\bigr \}=\beta_{0}^{\ast} + \gamma\bigl(a_{1},a_{2}; \beta_{1}^{\ast}\bigr),$$ where $\Phi^{-1}$ is a given link function such as the logit, log, or identity link and $\gamma( a_{1},a_{2};\beta_{1}) $ is a known function satisfying $\gamma( a_{1},a_{2};0) =0$.
In this model, $\beta_{1}=0$ encodes the *static-regime mean null hypothesis* that $$\label{static null} H_{0}\dvtx E\bigl[ Y(a_{1},a_{2})\bigr] \mbox{ is the same for all } (a_{1},a_{2} ) . \hspace*{-15pt}$$ @Robi:marg:1997 ([-@Robi:marg:1997]) proposed IPTW estimators $( \widehat{\beta}_{0},\widehat{\beta}_{1}) $ of $ ( \beta_{0}^{\ast},\beta_{1}^{\ast} )$. When the treatment probabilities are known, these estimators are defined as the solution to $$\begin{aligned} \label{eq:msm2} \qquad && \mathbb{P}_{n}\bigl[ \vphantom{\hat{P}}Wv(A_{1},A_{2}) \bigl( Y-\Phi\bigl\{\beta_{0}+\gamma(A_1,A_2; \beta_{1})\bigr\} \bigr) \bigr] \nonumber\\[-8pt] \\[-8pt] &&\quad=0 \nonumber\end{aligned}$$ for a user-supplied vector function $v(A_{1},A_{2})$ of the dimension of $ ( \beta_{0}^{\ast},\beta_{1}^{\ast} ) $, where $$W=1/ \bigl\{ f( A_{1}) f(A_{2} \mid A_{1},L) \bigr \}.$$ Informally, the product $f(A_{1})f(A_{2} \mid A_{1},L)$ is the “probability that a subject had the treatment history he did indeed have.”[^24] When the treatment probabilities are unknown, they are replaced by estimators. Intuitively, the reason why the estimating function of (\[eq:msm2\]) has mean zero at $ ( \beta_{0}^{\ast},\beta_{1}^{\ast} ) $ is as follows: Suppose the data had been generated from a sequentially randomized trial represented by the DAG in Figure \[fig:seq-rand\]. We may create a pseudo-population by making $1/\{f(A_{1})f( A_{2}\mid A_{1},L)\}$ copies of each study subject. It can be shown that in the resulting pseudo-population $A_{2}{\protect\mathpalette{\protect\independenT}{\perp}}\{ L,A_{1} \}$, and thus the pseudo-population is represented by the DAG in Figure \[fig:seq-rand\], except with both arrows into $A_{2}$ removed. In the pseudo-population, treatment is completely randomized (i.e., there is no confounding by either measured or unmeasured variables), and hence causation is association. Further, the mean of $Y(a_{1},a_{2})$ takes the same value in the pseudo-population as in the actual population.
Thus if, for example, $\gamma(a_{1},a_{2};\beta_{1}) =\beta_{1,1}a_{1}+\beta_{1,2}a_{2}$ and $\Phi^{-1} $ is the identity link, we can estimate $ ( \beta_{0}^{\ast},\beta_{1}^{\ast} ) $ by OLS in the pseudo-population. However, OLS in the pseudo-population is precisely weighted least squares in the actual study population with weights $1/\{f(A_{1})f( A_{2}\mid A_{1},L)\}$.[^25] @robins00marginal ([-@robins00marginal], Section 4.3) also noted that the weights $W$ can be replaced by the so-called stabilized weights $SW= \{ f( A_{1}) f(A_{2} \mid A_{1}) \} / \{ f( A_{1}) f(A_{2}\mid A_{1},L) \}$, and described settings where, for efficiency reasons, using $SW$ is preferable to using $W$. MSMs are not restricted to models for the dependence of the mean of $Y(a_{1},a_{2})$ on $ ( a_{1},a_{2} ) $. Indeed, one can consider MSMs for the dependence of any functional of the law of $Y(a_{1},a_{2})$ on $ ( a_{1},a_{2} )$, such as a quantile or the hazard function if $Y$ is a time-to-event variable. If the study is fully randomized, that is, (\[eq:full-rand\]) holds, then an MSM model for a given functional of the law of $Y( a_{1},a_{2}) $ is tantamount to an associational model for the same functional of the law of $Y$ conditional on $A_{1}=a_{1}$ and $A_{2}=a_{2}$. Thus, under (\[eq:full-rand\]), the MSM model can be estimated using standard methods for estimating the corresponding associational model. If the study is only sequentially randomized, that is, (\[eq:statrand\]) holds but (\[eq:full-rand\]) does not, then the model can still be estimated by the same standard methods but weighting each subject by $W$ or $SW$. @robins00marginal ([-@robins00marginal]) discussed disadvantages of MSMs compared to SNMs. Here, we summarize some of the main drawbacks. Suppose (\[eq:indg\]) holds for all $g \in\mathbb{G}$. 
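The IPTW fit just described — OLS in the pseudo-population, equivalently weighted least squares in the study population — can be sketched as follows for the identity link and binary $A_1$, $A_2$, $L$. Estimating every treatment probability by a raw empirical frequency, as done here, is purely illustrative:

```python
import numpy as np

def iptw_msm_fit(y, a1, a2, l, stabilize=False):
    """Weighted least squares for the linear MSM
        E[Y(a1, a2)] = b0 + b1*a1 + b2*a2
    with weights W = 1/{f(A1) f(A2|A1,L)} or SW = f(A2|A1)/f(A2|A1,L)."""
    n = len(y)
    f_a1 = np.where(a1 == 1, np.mean(a1), 1 - np.mean(a1))
    f_a2_den = np.empty(n)                      # f(A2 | A1, L)
    for va1 in (0, 1):
        for vl in (0, 1):
            cell = (a1 == va1) & (l == vl)
            if cell.any():
                p1 = np.mean(a2[cell])
                f_a2_den[cell] = np.where(a2[cell] == 1, p1, 1 - p1)
    if stabilize:
        f_a2_num = np.empty(n)                  # stabilizing numerator f(A2 | A1)
        for va1 in (0, 1):
            arm = a1 == va1
            p1 = np.mean(a2[arm])
            f_a2_num[arm] = np.where(a2[arm] == 1, p1, 1 - p1)
        w = f_a2_num / f_a2_den
    else:
        w = 1.0 / (f_a1 * f_a2_den)
    X = np.column_stack([np.ones(n), a1, a2])
    return np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))

# Deterministic check: y = a1 + 2*a2 exactly, so the fit recovers
# (b0, b1, b2) = (0, 1, 2) under either weighting.
a1 = np.array([0, 0, 0, 0, 1, 1, 1, 1])
l  = np.array([0, 0, 1, 1, 0, 0, 1, 1])
a2 = np.array([0, 1, 0, 1, 0, 1, 0, 1])
y  = a1 + 2.0 * a2
beta_w  = iptw_msm_fit(y, a1, a2, l)
beta_sw = iptw_msm_fit(y, a1, a2, l, stabilize=True)
```

In a real analysis the weight denominators would be modeled (e.g., by logistic regression), not estimated cell by cell.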
If the $g$-null hypothesis (\[g-null\]) is false but the static regime null hypothesis that the law of $Y( a_{1},a_{2}) $ is the same for all $ ( a_{1},a_{2} ) $ is true, then by (iii) of Section \[sec:snm\], $f( y \mid A_{1}=a_{1},A_{2}=a_{2},L=l) $ will depend on $a_{2}$ for some stratum $ ( a_{1}, l )$, thus implying a causal effect of $A_{2}$ in that stratum; estimation of an SNM model would, but estimation of an MSM model would not, detect this effect. A second drawback is that estimation of MSM models suffers from marked instability and finite-sample bias in the presence of weights $W$ that are highly variable and skewed. This is not generally an issue in SNM estimation. A third limitation of MSMs is that when (\[eq:statrand\]) fails but an instrumental variable is available, one can still consistently estimate the parameters of an SNM but not of an MSM.[^26] An advantage of MSMs over SNMs that was not discussed in Section \[sec:limits-of-snms\] is the following. MSMs can be constructed that are indexed by easily interpretable parameters that quantify the overall effects of a subset of all possible dynamic regimes ([-@Miguel:Robins:dominique:2005]; [-@van:Pete:caus:2007]; [-@orellana2010dynamic]; [-@orellana2010proof]). As an example, consider a longitudinal study of HIV-infected patients with baseline CD4 counts exceeding 600 in which we wish to determine the optimal CD4 count at which to begin anti-retroviral treatment. Let $g_{x}$ denote the dynamic regime that specifies treatment is to be initiated the first time a subject’s CD4 count falls below $x$, $x\in \{1,2,\ldots, 600 \} $. Let $Y(g_{x})$ be the associated counterfactual response and suppose few study subjects follow any given regime. If we assume $E[Y(g_{x})]$ varies smoothly with $x$, we can specify and fit (by IPTW) a dynamic regime MSM model $E[Y(g_{x})]=\beta_{0}^{\ast}+\beta_{1}^{\ast T}h(x)$ where, say, $h(x)$ is a vector of appropriate spline functions.
Direct Effects ============== Robins’ analysis of sequential regimes leads immediately to the consideration of direct effects. Thus, perhaps not surprisingly, all three of the distinct direct effect concepts that are now an integral part of the causal literature are to be found in his early papers. Intuitively, all the notions of direct effect consider whether “the outcome ($Y$) would have been different had cause ($A_{1}$) been different, but the level of ($A_{2}$) remained unchanged.” The notions differ regarding the precise meaning of $A_2$ “remained unchanged.” ![image](505f05.eps) Controlled Direct Effects {#sec:cde} ------------------------- In a setting in which there are temporally ordered treatments $A_{1}$ and $A_{2}$, it is natural to wonder whether the first treatment has any effect on the final outcome were everyone to receive the second treatment. Formally, we wish to compare the potential outcomes $Y(a_{1} = 1,a_{2} =1)$ and $Y(a_{1} = 0,a_{2} = 1)$. @robins:1986 ([-@robins:1986], Section 8) considered such contrasts, which are now referred to as *controlled direct effects*. More generally, the *average controlled direct effect of $A_{1}$ on $Y$ when $A_{2}$ is set to $a_{2}$* is defined to be $$\label{eq:acde} \hspace*{23pt} \mathrm{CDE}(a_{2})\equiv E\bigl[Y(a_{1}=1,a_{2})-Y(a_{1}=0,a_{2})\bigr] ,$$ where $Y(a_{1}=1,a_{2})-Y(a_{1}=0,a_{2})$ is the individual level direct effect. Thus, if $A_{2}$ takes $k$ levels, then there are $k$ such contrasts. Under the causal graph shown in Figure \[fig:no-confound\](a), in contrast to Figures \[fig:seq-rand\] and \[fig:seq-rand-variant\], the effect of $A_{2}$ on $Y$ is unconfounded by either measured or unmeasured variables; association is causation and thus, under the associated FFRCISTG model: $$\begin{aligned} \mathrm{CDE}(a_{2}) &=& E[ Y \mid A_{1}=1,A_{2}=a_{2}] \\ &&{} - E[Y \mid A_{1}=0,A_{2}=a_{2}] .\end{aligned}$$ The CDE can be identified even in the presence of time-dependent confounding.
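Both identifying formulas for the CDE — the stratum-specific associational contrast when the effect of $A_2$ on $Y$ is unconfounded, and the difference of g-formula standardized means under time-dependent confounding — are simple plug-in functionals of the observed law. A minimal sketch for binary $L$ (toy data, illustrative names):

```python
import numpy as np

def cde_unconfounded(y, a1, a2, va2):
    """CDE(a2) as the associational contrast
    E[Y|A1=1,A2=a2] - E[Y|A1=0,A2=a2] (valid when A2-Y is unconfounded)."""
    s = a2 == va2
    return y[(a1 == 1) & s].mean() - y[(a1 == 0) & s].mean()

def gformula_mean(y, a1, a2, l, va1, va2):
    """E[Y] under f*_{a1,a2}(y) = sum_l f(y | a1, l, a2) f(l | a1)."""
    total = 0.0
    for vl in (0, 1):
        p_l = np.mean(l[a1 == va1] == vl)
        total += p_l * y[(a1 == va1) & (l == vl) & (a2 == va2)].mean()
    return total

def cde_gformula(y, a1, a2, l, va2):
    """CDE(a2) under time-dependent confounding: the difference of the
    g-formula standardized means at a1 = 1 and a1 = 0."""
    return (gformula_mean(y, a1, a2, l, 1, va2)
            - gformula_mean(y, a1, a2, l, 0, va2))

# toy data: every (a1, l) cell is observed at a2 = 1
a1 = np.array([0, 0, 1, 1])
l  = np.array([0, 1, 0, 1])
a2 = np.array([1, 1, 1, 1])
y  = np.array([0.0, 1.0, 1.0, 1.0])
cde_g = cde_gformula(y, a1, a2, l, 1)
```

The g-formula version standardizes the stratum-specific means over $f(l \mid a_1)$ rather than conditioning on $L$, which is what licenses it under sequential randomization.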
For example, in the context of the FFRCISTG associated with either of the causal DAGs shown in Figures \[fig:seq-rand\] and \[fig:seq-rand-variant\], the $\operatorname{CDE}(a_2)$ will be identified via the difference in the expectations of $Y$ under the g-formula densities $f_{a_1 = 1,a_2}^*(y)$ and $f_{a_1 = 0,a_2}^*(y)$.[^27] The CDE requires that the potential outcomes $Y(a_{1},\allowbreak a_{2})$ be well-defined for all values of $a_{1}$ and $a_{2}$. This is because the CDE treats both $A_{2}$ and $A_{1}$ as causes, and interprets “$A_{2}$ remained unchanged” to mean “had there been an intervention on $A_2$ fixing it to $a_2$.” This clearly requires that the analyst be able to describe a well-defined intervention on the mediating variable $A_{2}$. There are many contexts in which there is no clear well-defined intervention on $A_{2}$ and thus it is not meaningful to refer to $Y(a_{1},a_{2})$. The CDE is not applicable in such contexts. Principal Stratum Direct Effects (PSDE) {#sec:psde} --------------------------------------- @robins:1986 ([-@robins:1986]) considered causal contrasts in the situation described in Section \[sec:censoring\] in which death from a disease of interest, for example, a heart attack, may be censored by death from other diseases. To describe these contrasts, we suppose $A_{1}$ is a treatment of interest, $Y=1$ is the indicator of death from the disease of interest (in a short interval subsequent to a given fixed time $t$) and $A_{2}=0$ is the “at risk indicator” denoting the absence of death either from other diseases or the disease of interest prior to time $t$. Earlier, @Kalb:Pren:stat:1980 ([-@Kalb:Pren:stat:1980]) had argued that if $A_{2}=1$, so that the subject does not survive to time $t$, then the question of whether the subject would have died of heart disease subsequent to $t$ had death before $t$ been prevented is meaningless.
In the language of counterfactuals, they were saying (i) that if $A_{1}=a_{1}$ and $A_{2}\equiv A_{2}(a_{1}) =1$, the counterfactual $Y(a_{1},a_{2}=0)$ is not well-defined and (ii) the counterfactual $Y(a_{1},a_{2}=1)$ is never well-defined. @robins:1986 ([-@robins:1986], Section 12.2) observed that if one accepts this then the only direct effect contrast that is well-defined is $Y(a_{1} =1,a_{2}=0)- Y(a_{1} = 0,a_{2}=0)$ and that is well-defined only for those subjects who would survive to $t$ regardless of whether they received $a_{1} = 0$ or $a_{1} = 1$. In other words, even though $Y(a_{1},a_{2})$ may not be well-defined for all subjects and all $a_{1}$, $a_{2}$, the contrast: $$\begin{aligned} \label{eq:psde-contrast} && E\bigl[ Y(a_{1} = 0,a_{2}) -Y(a_{1}= 1,a_{2}) \mid \nonumber\\[-8pt] \\[-8pt] &&\hspace*{16pt}{}A_{2}(a_{1} =1)=A_{2}(a_{1} = 0)=a_2\bigr] \nonumber\end{aligned}$$ is still well-defined when $a_{2}=0$. As noted by Robins, this could provide a solution to the problem of defining the causal effect of the treatment $A_{1}$ on the outcome $Y$ in the context of censoring by death due to other diseases. @Rubi:more:1998 ([-@Rubi:more:1998]) and @Fran:Rubi:addr:1999 ([-@Fran:Rubi:addr:1999; -@Fran:prin:2002]) later used this same contrast to solve precisely the same problem of “censoring by death.”[^28] In the terminology of @Fran:prin:2002 ([-@Fran:prin:2002]) for a subject with $A_{2}(a_{1} = 1)=A_{2}(a_{1} = 0)=a_{2}$, the *individual principal stratum direct effect* is defined to be:[^29] $$Y(a_{1}=1,a_{2}) - Y(a_{1}=0,a_{2})$$ (here, $A_{1}$ is assumed to be binary). 
The *average PSDE in principal stratum $a_{2}$* is then defined to be $$\begin{aligned} \label{eq:psde2} \qquad \operatorname{PSDE}(a_{2}) &\equiv & E\bigl[Y(a_{1} = 1,a_{2}) -Y(a_{1} = 0,a_{2})\mid \nonumber\\ &&\hspace*{16pt} A_{2}(a_{1} = 1)=A_{2}(a_{1} = 0)=a_{2}\bigr] \nonumber\\[-8pt] \\[-8pt] & =& E\bigl[ Y(a_{1} = 1)-Y(a_{1} = 0)\mid \nonumber\\ &&\hspace*{13pt} A_{2}(a_{1} = 1)=A_{2}(a_{1} = 0)=a_{2}\bigr], \nonumber\end{aligned}$$ where the second equality follows since $Y(a_{1},\allowbreak A_{2}(a_{1}))=Y(a_{1})$.[^30] In contrast to the CDE, the PSDE has the advantage that it may be defined, via (\[eq:psde2\]), without reference to potential outcomes involving intervention on $a_{2}$. Whereas the CDE views $A_{2}$ as a treatment, the PSDE treats $A_{2}$ as a response. Equivalently, this contrast interprets “had $A_2$ remained unchanged” to mean “we restrict attention to those people whose value of $A_{2}$ would still have been $a_{2}$, even under an intervention that set $A_{1}$ to a different value.” Although the PSDE is an interesting parameter in many settings ([-@gilbert:bosch:hudgens:biometrics:2003]), it has drawbacks beyond the obvious (but perhaps less important) ones that neither the parameter itself nor the subgroup conditioned on is nonparametrically identified. In fact, having just defined the PSDE parameter, @robins:1986 ([-@robins:1986]) criticized it for its lack of transitivity when there is a non-null direct effect of $A_1$ and $A_{1}$ has more than two levels; that is, for a given $a_2$, the PSDEs comparing $a_{1}=0$ with $a_{1}=1$ and $a_{1}=1$ with $a_{1}=2$ may both be positive but the PSDE comparing $a_{1}=0$ with $a_{1}=2$ may be negative. @Robi:Rotn:Vans:disc:2007 ([-@Robi:Rotn:Vans:disc:2007]) noted that the PSDE is undefined when $A_{1}$ has an effect on every subject’s $A_{2}$, a situation that can easily occur if $A_2$ is continuous. In that event, a natural strategy would be to, say, dichotomize $A_{2}$.
However, @Robi:Rotn:Vans:disc:2007 ([-@Robi:Rotn:Vans:disc:2007]) showed that the PSDE in principal stratum $a_{2}^{\ast}$ of the dichotomized variable may fail to retain any meaningful substantive interpretation. Pure Direct Effects (PDE) {#sec:pde} ------------------------- Once it has been established that a treatment $A_{1}$ has a causal effect on a response $Y$, it is natural to ask what “fraction” of the total effect may be attributed to a given causal pathway. As an example, consider an RCT in nonhypertensive smokers of the effect of an anti-smoking intervention ($A_{1}$) on the outcome myocardial infarction (MI) at 2 years ($Y$). For simplicity, assume everyone in the intervention arm and no one in the placebo arm quit cigarettes, that all subjects were tested for new-onset hypertension $A_{2}$ at the end of the first year, and no subject suffered an MI in the first year. Hence, $A_{1}$, $A_{2}$ and $Y$ occur in that order. Suppose the trial showed smoking cessation had a beneficial effect on both hypertension and MI. It is natural to consider the query: “What fraction of the total effect of smoking cessation $A_{1}$ on MI $Y$ is through a pathway that does not involve hypertension $A_{2}$?” @Robi:Gree:iden:1992 formalized this question via the following counterfactual contrast, which they termed the “pure direct effect”: $$Y\bigl\{a_1 = 1,A_2(a_1 = 0)\bigr\}-Y \bigl\{a_1 = 0,A_2(a_1 = 0)\bigr\}.$$ The second term here is simply $Y(a_{1} = 0)$.[^31] The contrast is thus the difference between two quantities: first, the outcome $Y$ that would result if we set $a_{1}$ to $1$, while “holding fixed” $a_{2}$ at the value $A_{2}(a_{1} = 0)$ that it would have taken had $a_{1}$ been $0$; second, the outcome $Y$ that would result from simply setting $a_{1}$ to $0$ \[and thus having $A_{2}$ again take the value $A_{2}(a_{1} = 0)$\].
Thus, the Pure Direct Effect interprets “$A_{2}$ remained unchanged” to mean “had (somehow) $A_{2}$ taken the value that it would have taken had we fixed $A_{1}$ to $0$.” The contrast thus represents the effect of $A_{1}$ on $Y$ had the effect of $A_{1}$ on hypertension $A_{2}$ been blocked. As with the CDE, the potential outcomes $Y(a_{1},a_{2})$ must be well-defined for the PDE to be well-defined. As a summary measure of the direct effect of (a binary variable) $A_{1}$ on $Y$, the PDE has the advantage (relative to the CDE and PSDE) that it is a single number. The average pure direct effect is defined as[^32] $$\begin{aligned} \operatorname{PDE} &=& E\bigl[ Y\bigl\{a_{1} = 1,A_{2}(a_{1} = 0)\bigr\}\bigr] \\ &&{} -E\bigl[Y\bigl(a_{1} = 0,A_{2}(a_{1} = 0) \bigr)\bigr] .\end{aligned}$$ Thus, the ratio of the PDE to the total effect $E[ Y\{a_{1} = 1\}] -E[Y\{a_{1} = 0\}] $ is the fraction of the total effect that is through a pathway that does not involve hypertension ($A_2$). Unlike the PSDE, the PDE is an average over the full population. However, unlike the CDE, the PDE is not nonparametrically identified under the FFRCISTG model associated with the simple DAG shown in Figure \[fig:no-confound\](a). @robins:mcm:2011 ([-@robins:mcm:2011], App. C) computed bounds for the PDE under the FFRCISTG associated with this DAG.
@pearl:indirect:01 ([-@pearl:indirect:01]) obtains identification of the PDE under the DAG in Figure \[fig:no-confound\](a) by imposing stronger counterfactual independence assumptions, via a Nonparametric Structural Equation Model with Independent Errors (NPSEM-IE).[^33] Under these assumptions, @pearl:indirect:01 ([-@pearl:indirect:01]) obtains the following identifying formula: $$\begin{aligned} \label{eq:mediation} && \sum_{a_{2}} \bigl\{ E[Y\mid A_{1} = 1,A_{2} = a_{2}] \nonumber\\ &&\hspace*{19pt}{} -E[Y\mid A_{1} = 0,A_{2} = a_{2}] \bigr\} \\ &&\hspace*{14pt}{} \cdot P(A_{2} = a_{2}\mid A_{1} = 0), \nonumber\end{aligned}$$ which he calls the “Mediation Formula.” @robins:mcm:2011 ([-@robins:mcm:2011]) noted that the additional assumptions made by the NPSEM-IE are not testable, even in principle, via a randomized experiment. Consequently, this formula represents a departure from the principle, originating with @neyman:sur:1923 ([-@neyman:sur:1923]), that causation be reducible to experimental interventions, often expressed in the slogan “no causation without manipulation.”[^34] @robins:mcm:2011 ([-@robins:mcm:2011]) achieved a rapprochement between these opposing positions by showing that the formula (\[eq:mediation\]) is equal to the g-formula associated with an intervention on two treatment variables not appearing on the graph (but having deterministic relations with $A_{1}$) under the assumption that one of the variables has no direct effect on $A_{2} $ and the other has no direct effect on $Y$.
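The Mediation Formula is a plug-in functional of the observed law; a minimal sketch for binary $A_1$ and $A_2$ (toy data):

```python
import numpy as np

def mediation_formula(y, a1, a2):
    """sum_{a2} { E[Y|A1=1,A2=a2] - E[Y|A1=0,A2=a2] } * P(A2=a2 | A1=0)."""
    pde = 0.0
    for v in (0, 1):
        diff = (y[(a1 == 1) & (a2 == v)].mean()
                - y[(a1 == 0) & (a2 == v)].mean())
        pde += diff * np.mean(a2[a1 == 0] == v)
    return pde

a1 = np.array([0, 0, 1, 1])
a2 = np.array([0, 1, 0, 1])
y  = np.array([0.0, 1.0, 1.0, 1.0])
pde = mediation_formula(y, a1, a2)
```

Note that the formula standardizes the stratum-specific $A_1$ contrasts over the mediator distribution in the $A_1 = 0$ arm; its causal reading as the PDE rests on the NPSEM-IE assumptions just discussed.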
Hence, under this assumption and in the absence of confounding, the effect of this intervention on $Y$ is point identified by (\[eq:mediation\]).[^35] Although there was a literature on direct effects in linear structural equation models (see, e.g., [-@blalock1971causal]) that preceded @robins:1986 ([-@robins:1986]) and @Robi:Gree:iden:1992 ([-@Robi:Gree:iden:1992]), the distinction between the CDE and PDE did not arise since in linear models these notions are equivalent.[^36] ![image](505f06.eps) The Direct Effect Null {#sec:direct-null} ---------------------- @robins:1986 ([-@robins:1986], Section 8) considered the null hypothesis that $Y(a_{1},a_{2}) $ does not depend on $a_{1}$ for all $a_{2}$, which we term the *sharp null-hypothesis of no direct effect of $A_{1}$ on $Y$* (*relative to $A_{2}$*) or more simply as the “sharp direct effect null.” In the context of our running example with data $ (A_{1},L,A_{2},Y )$, under (\[eq:statrand\]) the sharp direct effect null implies the following constraint on the observed data distribution: $$\label{eq:verma-constraint} \quad f_{a_{1},a_{2}}^{\ast}(y) \quad\mbox{is not a function of } a_{1} \mbox{ for all }a_{2}.$$ @robins:1986 ([-@robins:1986], Sections 8 and 9) noted that this constraint (\[eq:verma-constraint\]) is *not* a conditional independence. This is in contrast to the $g$-null hypothesis which we have seen is equivalent to the independencies in (ii) of Section \[sec:snm\] \[when equation (\[eq:indg\]) holds for all $g\in\mathbb{G}$\].[^37] He concluded that, in contrast to the $g$-null hypothesis, the constraint (\[eq:verma-constraint\]), and thus the sharp direct effect null, cannot be tested using case control data with unknown case and control sampling fractions.[^38] This constraint (\[eq:verma-constraint\]) was later independently discovered by @verma:pearl:equivalence:1990 ([-@verma:pearl:equivalence:1990]) and for this reason is called the “Verma constraint” in the Computer Science literature. 
@robins:1999 ([-@robins:1999]) noted that, though (\[eq:verma-constraint\]) is not a conditional independence in the observed data distribution, it does correspond to a conditional independence, but in a weighted distribution with weights proportional to $1/f(A_{2}\mid A_{1},L)$.[^39] This can be understood from the informal discussion following equation (\[eq:msm2\]) in the previous section: there it was noted that given the FFRCISTG corresponding to the DAG in Figure \[fig:seq-rand\], reweighting by $1/f(A_{2}\mid A_{1},L)$ corresponds to removing both edges into $A_{2}$. Hence, if the edges $A_{1}\rightarrow Y$ and $L\rightarrow Y$ are not present, so that the sharp direct effect null holds, as in Figure \[fig:seq-rand2\](a), then the reweighted population is described by the DAG in Figure \[fig:seq-rand2\](b). It then follows from the d-separation relations on this DAG that $Y {\protect\mathpalette{\protect\independenT}{\perp}}A_{1}\mid A_{2}$ in the reweighted distribution. This fact can also be seen as follows. If, in our running example from Section \[sec:tree-graph\], $A_{1}$, $A_{2}$, $Y$ are all binary, the sharp direct effect null implies that $\beta_{1}^{\ast}=\beta_{3}^{\ast}=0$ in the saturated MSM with $$\Phi^{-1}\bigl\{E\bigl[Y(a_{1},a_{2})\bigr]\bigr \}=\beta_{0}^{\ast} + \beta_{1}^{\ast}a_{1}+ \beta_{2}^{\ast}a_{2}+\beta_{3}^{\ast}a_{1}a_{2}.$$ Since $\beta_{1}^{\ast}$ and $\beta_{3}^{\ast}$ are the associational parameters of the weighted distribution, their being zero implies the conditional independence $Y {\protect\mathpalette{\protect\independenT}{\perp}}A_{1}\mid A_{2}$ under this weighted distribution. In more complex longitudinal settings, with the number of treatment times $k$ exceeding $2$, all the parameters multiplying terms containing a particular treatment variable in a MSM may be zero, yet there may still be evidence in the data that the sharp direct effect null for that variable is false. 
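The reweighting claim can be verified by exact enumeration on a small discrete law. The sketch below is entirely illustrative: it posits a hidden $H$ with $L = H$ that affects $Y$, lets $A_2$ depend on $(A_1, L)$, and imposes the sharp direct effect null (no $A_1 \rightarrow Y$ or $L \rightarrow Y$ edge); all probabilities are invented numbers:

```python
import itertools
import numpy as np

def cond_p_y1(weighted):
    """P(Y=1 | A1, A2), by exact enumeration, under the observed law or
    under the law reweighted by 1/f(A2 | A1, L).

    Invented toy law: H ~ Bern(.5), L = H, A1 ~ Bern(.5),
    P(A2=1 | A1, L) = .2 + .3*A1 + .3*L, and, encoding the sharp direct
    effect null, P(Y=1 | H, A2) = .2 + .3*H + .3*A2."""
    table = np.zeros((2, 2, 2))                 # mass at (a1, a2, y)
    for h, a1, a2, y in itertools.product((0, 1), repeat=4):
        l = h
        p_a2 = 0.2 + 0.3 * a1 + 0.3 * l
        f_a2 = p_a2 if a2 == 1 else 1 - p_a2    # f(a2 | a1, l)
        p_y1 = 0.2 + 0.3 * h + 0.3 * a2
        f_y = p_y1 if y == 1 else 1 - p_y1
        mass = 0.25 * f_a2 * f_y                # P(h) P(a1) f(a2|.) f(y|.)
        if weighted:
            mass /= f_a2                        # reweight by 1/f(A2|A1,L)
        table[a1, a2, y] += mass
    return table[:, :, 1] / table.sum(axis=2)

unweighted = cond_p_y1(False)   # varies with A1 given A2
reweighted = cond_p_y1(True)    # free of A1 given A2
```

The two rows of the reweighted table (the two values of $A_1$) coincide, while those of the observed-law table do not: the Verma constraint has become an ordinary conditional independence in the weighted distribution.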
This is directly analogous to the limitation of MSMs relative to SNMs with regard to the sharp null hypothesis (\[g-null\]) of no effect of any treatment that we noted at the end of Section \[sec:msm\]. To overcome this problem, @robins:1999 ([-@robins:1999]) introduced direct effect structural nested models. In these models, which involve treatment at $k$ time points, if all parameters multiplying a given $a_j$ take the value $0$, then we can conclude that the distribution of the observables does not refute the natural extension of (\[eq:verma-constraint\]) to $k$ times. The latter is implied by the sharp direct effect null that $a_j$ has no direct effect on $Y$ holding $a_{j+1},\ldots,a_k$ fixed. The Foundations of Statistics and Bayesian Inference ==================================================== @Robins:Ritov:toward:1997 and @Robi:Wass:cond:2000 recognized that the lack of robustness of estimators based on the g-formula in a sequential randomized trial with known randomization probabilities had implications for the foundations of statistics and for Bayesian inference. To make their argument transparent, we will assume in our running example (from Section \[sec:tree-graph\]) that the density of $L$ is known and that $A_{1}=1$ with probability $1$ (hence we drop $A_{1}$ from the notation). We will further assume the observed data are $n$ i.i.d. copies of a random vector $ ( L,A_{2},Y ) $ with $A_{2}$ and $Y$ binary and $L$ a $d\times1$ continuous vector with support on the unit cube $ ( 0,1 )^{d}$. We consider a model for the law of $ ( L,A_{2},Y )$ that assumes that the density $f^{\ast}( l ) $ of $L$ is known, that the treatment probability $\pi^{\ast}( l)\equiv\Pr( A_{2}=1 \mid L=l)$ lies in the interval $ ( c,1-c ) $ for some known $c>0$ and that $b^{\ast}( l,a_{2}) \equiv E[Y \mid L=l,A_{2}=a_{2}] $ is continuous in $l$.
Under this model, the likelihood function is $$\mathcal{L}( b, \pi) = \mathcal{L}_{1}( b) \mathcal{L}_{2}( \pi) ,$$ where $$\begin{aligned} \mathcal{L}_{1}( b) &=& \prod_{i=1}^{n}f^{\ast}( L_{i}) b(L_{i},A_{2,i}) ^{Y_{i}} \nonumber \\[-8pt] \\[-8pt] &&\hspace*{14pt} {} \cdot \bigl\{ 1-b( L_{i},A_{2,i}) \bigr \} ^{1-Y_{i}}, \nonumber \\ \qquad \mathcal{L}_{2}( \pi) &=& \prod_{i=1}^{n} \pi( L_{i})^{A_{2,i}} \bigl\{ 1-\pi( L_{i}) \bigr\} ^{1-A_{2,i}},\end{aligned}$$ and $ ( b,\pi ) \in\mathcal{B}\times\bolds{\Pi}$. Here $\mathcal{B}$ is the set of continuous functions from $ ( 0,1 )^{d}\times \{ 0,1 \} $ to $ ( 0,1 ) $ and $\bolds{\Pi}$ is the set of functions from $ ( 0,1 ) ^{d}$ to $ ( c,1-c ) $. We assume the goal is inference about $\mu( b) $ where $\mu( b) =\int b(l,1) f^{\ast}( l) \,{dl}$. Under randomization, that is (\[eq:ind1\]) and (\[eq:ind2\]), $\mu( b^{\ast})$ is the counterfactual mean of $Y$ when treatment is given at both times. When $\pi^{\ast}$ is unknown, @Robins:Ritov:toward:1997 ([-@Robins:Ritov:toward:1997]) showed that no estimator of $\mu( b^{\ast}) $ exists that is uniformly consistent over all $\mathcal{B}\times\bolds{\Pi}$. They also showed that even if $\pi^{\ast}$ is known, any estimator that does not use knowledge of $\pi^{\ast}$ cannot be uniformly consistent over $\mathcal{B}\times \{ \pi^{\ast} \} $ for all $\pi^{\ast}$. However, there do exist estimators that depend on $\pi^{\ast}$ that are uniformly $\sqrt{n} $-consistent for $\mu(b^{\ast}) $ over $\mathcal{B}\times \{ \pi^{\ast} \} $ for all $\pi^{\ast}$. The Horvitz–Thompson estimator $\mathbb{P}_{n}\{A_{2}Y/\pi^{\ast}( L) \} $ is a simple example. @Robins:Ritov:toward:1997 ([-@Robins:Ritov:toward:1997]) concluded that, in this example, any method of estimation that obeys the likelihood principle such as maximum likelihood or Bayesian estimation with independent priors on $b$ and $\pi$ must fail to be uniformly consistent.
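A minimal sketch of the Horvitz–Thompson estimator (toy data; the known $\pi^{\ast}(L)$ is passed in directly):

```python
import numpy as np

def horvitz_thompson(y, a2, pi_star):
    """P_n{ A2*Y / pi*(L) }: unbiased for mu(b*) when the treatment
    probabilities pi*(L) are known, whatever b* is."""
    return np.mean(a2 * y / pi_star)

y  = np.array([1.0, 0.0, 1.0, 0.0])
a2 = np.array([1, 0, 1, 1])
mu_ht = horvitz_thompson(y, a2, np.full(4, 0.5))
```

The estimator depends on the data only through $(A_2, Y, \pi^{\ast}(L))$; it makes essential use of $\pi^{\ast}$, which is exactly what a likelihood-principle-obeying procedure cannot do.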
This is because any procedure that obeys the likelihood principle must result in the same inference for $\mu(b^{\ast})$ regardless of $\pi^{\ast}$, even when $\pi^{\ast}$ becomes known. @Robi:Wass:cond:2000 ([-@Robi:Wass:cond:2000]) noted that this example illustrates that the likelihood principle and frequentist performance can be in severe conflict in that any procedure with good frequentist properties must violate the likelihood principle.[^40] @ritov:2014 ([-@ritov:2014]) in this volume extends this discussion in many directions. Semiparametric Efficiency and Double Robustness in Missing Data and Causal Inference Models {#sec:semipar-eff} =========================================================================================== @robins:rotnitzky:recovery:1992 ([-@robins:rotnitzky:recovery:1992]) recognized that the inferential problem of estimation of the mean $E[ Y(g)] $ (when identified by the g-formula) of a response $Y$ under a regime $g$ is a special case of the *general problem* of estimating the parameters of an arbitrary semi-parametric model in the presence of data that had been coarsened at random ([-@Heitjan:Rubin:1991]).[^41] This viewpoint led them to recognize that the IPCW and IPTW estimators described earlier were not fully efficient. To obtain efficient estimators, @robins:rotnitzky:recovery:1992 and @robins:rotnitzky:zhao:1994 used the theory of semiparametric efficiency bounds ([-@bickel:klaasen:ritov:wellner:1993]; [-@van:on:1991]) to derive representations for the efficient score, the efficient influence function, the semiparametric variance bound, and the influence function of any asymptotically linear estimator in this *general* problem. The books by @tsiatis:2006 ([-@tsiatis:2006]) and by @vdL:robins:2003 ([-@vdL:robins:2003]) provide thorough treatments. 
The generality of these results allowed Robins and his principal collaborators Mark van der Laan and Andrea Rotnitzky to solve many open problems in the analysis of semiparametric models. For example, they used the efficient score representation theorem to derive locally efficient semiparametric estimators in many models of importance in biostatistics. Some examples include conditional mean models with missing regressors and/or responses ([-@robins:rotnitzky:zhao:1994]; [-@Rotn:Robi:semi:1995]), bivariate survival ([-@quale2006locally]) and multivariate survival models with explainable dependent censoring ([-@van2002locally]).[^42] In coarsened at random data models, whether missing data or causal inference models, locally efficient semiparametric estimators are also doubly robust ([-@Scha:Rotn:Robi:adju:1999], pages 1141–1144) and ([-@Robins:Rotnitzky:comment:on:bickel:2001]). See the book ([-@vdL:robins:2003]) for details and for many examples of doubly robust estimators. Doubly robust estimators had been discovered earlier in special cases. In fact, @Firth:Bennet:1998 ([-@Firth:Bennet:1998]) note that the so-called model-assisted regression estimator of a finite population mean of @Cass:Srnd:Wret:some:1976 ([-@Cass:Srnd:Wret:some:1976]) is design consistent which is tantamount to being doubly robust. See @Robins:Rotnitzky:comment:on:bickel:2001 ([-@Robins:Rotnitzky:comment:on:bickel:2001]) for other precursors. In the context of our running example, from Section \[sec:tree-graph\], suppose (\[eq:statrand\]) holds. An estimator $\widehat{\mu}_{\mathrm{dr}}$ of $\mu=E[ Y( a_{1},a_{2})]=f_{a_{1},a_{2}}^{\ast}(1)$ for, say $a_{1}=a_{2}=1$, is said to be *doubly robust* (DR) if it is consistent when either (i) a model for $\pi(L) \equiv\Pr( A_{2}=1 \mid A_{1}=1,L) $ or (ii) a model for $b( L) \equiv E[ Y \mid A_{1}=1,L,A_{2}=1]$ is correct. 
When $L$ is high dimensional and, as in an observational study, $\pi(\cdot)$ is unknown, double robustness is a desirable property because model misspecification is generally unavoidable, even when we use flexible, high dimensional, semiparametric models in (i) and (ii). In fact, DR estimators have advantages even when, as is usually the case, the models in (i) and (ii) are both incorrect. This happens because the bias of the DR estimator $\widehat{\mu}_{\mathrm{dr}}$ is of second order, and thus generally less than the bias of a non-DR estimator (such as a standard IPTW estimator). By second order, we mean that the bias of $\widehat{\mu}_{\mathrm{dr}}$ depends on the product of the error made in the estimation of $\Pr(A_{2}=1 \mid A_{1}=1,L) $ times the error made in the estimation of $E[ Y \mid A_{1}=1,L,A_{2}=1] $. @Scha:Rotn:Robi:adju:1999 ([-@Scha:Rotn:Robi:adju:1999]) noted that the locally efficient estimator of @robins:rotnitzky:zhao:1994 $$\begin{aligned} \widetilde{\mu}_{\mathrm{dr}} &=& \bigl\{ \mathbb{P}_{n}[A_{1}] \bigr\}^{-1} \\ &&{}\cdot \mathbb{P}_{n} \biggl[A_{1} \biggl\{ \frac{A_{2}}{\widehat{\pi}(L) }Y - \biggl\{ \frac{A_{2}}{\widehat{\pi}( L) }-1 \biggr\} \widehat{b}( L) \biggr\} \biggr]\end{aligned}$$ is doubly robust where $\widehat{\pi}( L) $ and $\widehat{b}( L) $ are estimators of $\pi( L) $ and $b(L)$. Unfortunately, in finite samples this estimator may fail to lie in the parameter space for $\mu$, that is, the interval $[0,1]$ if $Y$ is binary. 
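A sketch of $\widetilde{\mu}_{\mathrm{dr}}$; here $\widehat{\pi}(L)$ and $\widehat{b}(L)$ are assumed to come from separately fitted models and are passed in as arrays:

```python
import numpy as np

def mu_dr(y, a1, a2, pi_hat, b_hat):
    """Augmented IPW estimator
       {P_n[A1]}^{-1} P_n[ A1 { (A2/pi)Y - ((A2/pi) - 1) b } ],
    consistent if either pi_hat or b_hat is correctly specified."""
    aug = (a2 / pi_hat) * y - (a2 / pi_hat - 1.0) * b_hat
    return np.mean(a1 * aug) / np.mean(a1)

# Sanity check: if b_hat happens to equal Y exactly, the augmented term
# collapses to Y and the estimator is the A1 = 1 arm mean of Y,
# whatever pi_hat is -- a glimpse of the double robustness.
y  = np.array([1.0, 0.0, 1.0, 0.0])
a1 = np.ones(4)
a2 = np.array([1, 0, 1, 1])
est = mu_dr(y, a1, a2, np.full(4, 0.3), y.copy())
```

Nothing in the formula constrains the output to $[0,1]$ for binary $Y$, which is the defect that motivated the plug-in DR estimators discussed next.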
In response, @Scha:Rotn:Robi:adju:1999 ([-@Scha:Rotn:Robi:adju:1999]) proposed a plug-in DR estimator, the doubly robust regression estimator $$\widehat{\mu}_{\mathrm{dr},\mathrm{reg}} = \bigl\{ \mathbb{P}_{n} [ A_{1} ] \bigr\}^{-1}\mathbb{P}_{n} \bigl\{ A_{1} \widehat{b}( L) \bigr\},$$ where now $\widehat{b}( L) =\operatorname{expit}\{ m( L;\widehat{\eta}) + \widehat{\theta}/\widehat{\pi}( L)\}$ and $( \widehat{\eta},\widehat{\theta}) $ are obtained by fitting by maximum likelihood the logistic regression model $\Pr(Y=1 \mid A_{1}=1,\allowbreak L,A_{2}=1) =\operatorname{expit}\{ m( L;\eta) +\theta /\widehat{\pi}( L) \} $ to subjects with $A_{1}=1$, $A_{2}=1$. Here, $m( L;\eta)$ is a user-specified function of $L$ and of the Euclidean parameter $\eta$. @Robi:robust:1999 ([-@Robi:robust:1999]) and @Bang:Robi:doub:2005 ([-@Bang:Robi:doub:2005]) obtained plug-in DR regression estimators in longitudinal missing data and causal inference models by reexpressing the g-formula as a sequence of iterated conditional expectations. @van:Rubi:targ:2006 ([-@van:Rubi:targ:2006]) proposed a clever general method for obtaining plug-in DR estimators called targeted maximum likelihood. In our setting, the method yields an estimator $\widehat{\mu}_{\mathrm{dr},\mathrm{TMLE}}$ that differs from $\widehat{\mu}_{\mathrm{dr},\mathrm{reg}}$ only in that $\widehat{b}( L) $ is now given by $\operatorname{expit}\{ \widehat{m}( L) +\widehat{\theta}_{\mathrm{greedy}}/\widehat{\pi}( L) \} $ where $\widehat{\theta}_{\mathrm{greedy}}$ is again obtained by maximum likelihood but with a fixed offset $\widehat{m}( L) $. This offset is an estimator of $\Pr(Y=1 \mid A_{1}=1,L,A_{2}=1) $ that might be obtained using flexible machine learning methods. Similar comments apply to models considered by Bang and Robins ([-@Bang:Robi:doub:2005]). 
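A minimal sketch of the plug-in estimator $\widehat{\mu}_{\mathrm{dr},\mathrm{reg}}$ follows, again conditioning on $A_{1}=1$ and with an illustrative, hypothetical data-generating process. The covariate $1/\widehat{\pi}(L)$ enters the logistic regression as an extra regressor with coefficient $\theta$, and the resulting plug-in estimate, being an average of fitted probabilities, necessarily lies in $[0,1]$:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson (maximum likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += np.linalg.solve((X * (p * (1 - p))[:, None]).T @ X, X.T @ (y - p))
    return beta

# Hypothetical data-generating process, conditional on A1 = 1.
rng = np.random.default_rng(1)
n = 50_000
L = rng.uniform(size=n)
pi_hat = 0.2 + 0.6 * L                     # treated here as the fitted propensity
A2 = rng.binomial(1, pi_hat)
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(1 - 2 * L)))   # expit(-1 + 2L)

# m(L; eta) = eta0 + eta1 * L, plus the covariate 1/pi_hat(L) with coefficient theta.
X = np.column_stack([np.ones(n), L, 1.0 / pi_hat])
treated = A2 == 1
coef = fit_logistic(X[treated], Y[treated])    # (eta0, eta1, theta) by ML on A2 = 1

b_hat = 1.0 / (1.0 + np.exp(-(X @ coef)))  # expit{ m(L; eta_hat) + theta_hat/pi_hat(L) }
mu_dr_reg = b_hat.mean()                   # plug-in estimate: always in [0, 1]
print(mu_dr_reg)
```

In contrast to $\widetilde{\mu}_{\mathrm{dr}}$, no inverse-probability weight multiplies $Y$ directly, so the estimate cannot escape the parameter space.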
Since 2006 there has been an explosion of research that has produced doubly robust estimators with much improved large sample efficiency and finite sample performance; @rotnitkzy:vansteelandt:2014 ([-@rotnitkzy:vansteelandt:2014]) give a review. We note that CAR models are not the only models that admit doubly robust estimators. For example, @Scha:Rotn:Robi:adju:1999 ([-@Scha:Rotn:Robi:adju:1999]) exhibited doubly robust estimators in models with nonignorable missingness. @Robins:Rotnitzky:comment:on:bickel:2001 ([-@Robins:Rotnitzky:comment:on:bickel:2001]) derived sufficient conditions, satisfied by many non-CAR models, that imply the existence of doubly robust estimators. Recently, doubly robust estimators have been obtained in a wide variety of models. See @dudik:2014 ([-@dudik:2014]) in this volume for an interesting example. Higher Order Influence Functions ================================ It may happen that the second-order bias of a doubly robust estimator $\widehat{\mu}_{\mathrm{dr}}$ decreases to 0 with $n$ more slowly than $n^{-1/2}$, and thus the bias exceeds the standard error of the estimator. In that case, confidence intervals for $\mu$ based on $\widehat{\mu}_{\mathrm{dr}}$ fail to cover at their nominal rate even in large samples. Furthermore, in such a case, in terms of mean squared error, $\widehat{\mu}_{\mathrm{dr}}$ does not optimally trade off bias and variance. In an attempt to address these problems, @robins:higher:2008 ([-@robins:higher:2008]) developed a theory of point and interval estimation based on higher order influence functions and used this theory to construct estimators of $\mu$ that improve on $\widehat{\mu}_{\mathrm{dr}}$. Higher order influence functions are higher order U-statistics. The theory of @robins:higher:2008 ([-@robins:higher:2008]) extends to higher order the first order semiparametric inference theory of @bickel:klaasen:ritov:wellner:1993 ([-@bickel:klaasen:ritov:wellner:1993]) and @van:on:1991 ([-@van:on:1991]).
In this issue, @vandervaart:2014 ([-@vandervaart:2014]) gives a masterful review of this theory. Here, we present an interesting result found in @robins:higher:2008 ([-@robins:higher:2008]) that can be understood in isolation from the general theory and conclude with an open estimation problem. @robins:higher:2008 ([-@robins:higher:2008]) consider the question of whether, for estimation of a conditional variance, random regressors provide for faster rates of convergence than do fixed regressors, and, if so, how? They consider a setting in which $n$ i.i.d. copies of $ ( Y,X ) $ are observed with $X$ a $d$-dimensional random vector, with bounded density $f( \cdot) $ absolutely continuous w.r.t. the uniform measure on the unit cube $ (0,1 ) ^{d}$. The regression function $b( \cdot) =E [Y \mid X=\cdot ]$ is assumed to lie in a given Hölder ball with Hölder exponent $\beta<1$.[^43] The goal is to estimate $E[\hbox{Var} \{ Y \mid X \} ]$ under the homoscedastic semiparametric model $\operatorname{Var}[ Y \mid X] =\sigma^{2}$. Under this model, the authors construct a simple estimator $\widehat{\sigma}^{2}$ that converges at rate $n^{-\fraca{4\beta/d}{1+4\beta/d}}$, when $\beta/d<1/4$. @Wang:Brow:Cai:Levi:effe:2008 ([-@Wang:Brow:Cai:Levi:effe:2008]) and @Cai2009126 ([-@Cai2009126]) earlier proved that if $X_{i},i=1,\ldots,n$, are nonrandom but equally spaced in $ ( 0,1 )^{d}$, the minimax rate of convergence for the estimation of $\sigma^{2}$ is $n^{-2\beta/d}$ (when $\beta/d<1/4$) which is slower than $n^{-\fraca{4\beta/d}{1+4\beta/d}}$. Thus, randomness in $X$ allows for improved convergence rates even though no smoothness assumptions are made regarding $f( \cdot)$. To explain how this happens, we describe the estimator of @robins:higher:2008 ([-@robins:higher:2008]). The unit cube in $\mathbb{R}^{d}$ is divided into $k=k( n ) =n^{\gamma}$, $\gamma>1$ identical subcubes each with edge length $k^{-1/d}$. 
A simple probability calculation shows that the number of subcubes containing at least two observations is $O_{p}( n^{2}/k)$. One may estimate $\sigma^{2}$ in each such subcube by $( Y_{i}-Y_{j})^{2}/2$.[^44] An estimator $\widehat{\sigma}^{2}$ of $\sigma^{2}$ may then be constructed by simply averaging the subcube-specific estimates $ ( Y_{i}-Y_{j} ) ^{2}/2$ over all the sub-cubes with at least two observations. The rate of convergence of the estimator is maximized at $n^{-\fraca{4\beta/d}{1+4\beta/d}}$ by taking $k=n^{\fracz{2}{1+4\beta/d}}$.[^45] @robins:higher:2008 ([-@robins:higher:2008]) conclude that the random design estimator has better bias control, and hence converges faster than the optimal equal-spaced fixed $X$ estimator, because the random design estimator exploits the $O_{p} (n^{2}/n^{\fracz{2}{1+4\beta/d}} ) $ random fluctuations for which the $X$’s corresponding to two different observations are only a distance of $O ( \{ n^{\fracz{2}{1+4\beta/d}} \}^{-1/d} )$ apart. An Open Problem[^46] {#an-open-problem .unnumbered} -------------------- Consider again the above setting with random $X$. Suppose that $\beta/d$ remains less than $1/4$ but now $\beta>1$. Does there still exist an estimator of $\sigma^{2}$ that converges at $n^{-\fraca{4\beta/d}{1+4\beta/d}}$? Analogy with other nonparametric estimation problems would suggest the answer is “yes,” but the question remains unsolved.[^47] Other Work {#sec:otherwork} ========== The available space precludes a complete treatment of all of the topics that Robins has worked on. We provide a brief description of selected additional topics and a guide to the literature. 
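To make the construction of the previous section concrete, the subcube estimator $\widehat{\sigma}^{2}$ can be sketched in a short simulation; the regression function and the values of $n$, $d$, $\beta$ and $\sigma^{2}$ are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of the random-design conditional-variance estimator; all numerical
# choices below are illustrative assumptions.
rng = np.random.default_rng(2)
n, d, beta, sigma2 = 100_000, 2, 0.5, 0.25

X = rng.uniform(size=(n, d))
b = np.abs(X[:, 0] - 0.5) ** beta + X[:, 1] ** beta   # a Holder(beta) regression function
Y = b + rng.normal(scale=np.sqrt(sigma2), size=n)     # homoscedastic errors

# k = m**d subcubes, m per axis, so that k is roughly n^{2/(1 + 4*beta/d)}.
m = int(round(n ** (2 / (d * (1 + 4 * beta / d)))))
cell = (np.minimum((X * m).astype(int), m - 1) * m ** np.arange(d)).sum(axis=1)

# In each subcube holding at least two observations, estimate sigma^2 by
# (Y_i - Y_j)**2 / 2 for one pair (i, j), then average over those subcubes.
order = np.argsort(cell)
c, y = cell[order], Y[order]
eq = c[1:] == c[:-1]
first_pair = eq & np.r_[True, ~eq[:-1]]    # first adjacent pair within each subcube
est = ((y[1:][first_pair] - y[:-1][first_pair]) ** 2 / 2).mean()
print(est)
```

With these choices the average recovers $\sigma^{2}$ up to sampling error plus a bias of order $k^{-2\beta/d}$, reflecting the small within-subcube variation of $b(\cdot)$.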
Analyzing Observational Studies as Nested Randomized Trials {#analyzing-observational-studies-as-nested-randomized-trials .unnumbered} ----------------------------------------------------------- @hernan2008observational ([-@hernan2008observational]) and @hernan2005discussion conceptualize and analyze observational studies of a time-varying treatment as a nested sequence of individual RCTs run by nature. Their analysis is closely related to g-estimation of SNMs (discussed in Section \[sec:snm\]). The critical difference is that in these papers Robins and Hernán do not specify a SNM to coherently link the trial-specific effect estimates. This has benefits in that it makes the analysis easier and also more familiar to users without training in SNMs. The downside is that, in principle, this lack of coherence can result in different analysts recommending, as optimal, contradictory interventions (@robins2007invited [-@robins2007invited]). Adjustment for “Reverse Causation” {#adjustment-for-reverse-causation .unnumbered} ---------------------------------- Consider an epidemiological study of the effect of a time-dependent treatment (say cigarette smoking) on time to a disease of interest, say clinical lung cancer. In this setting, uncontrolled confounding by undetected preclinical lung cancer (often referred to as “reverse causation”) is a serious problem. @robins2008causal ([-@robins2008causal]) develops analytic methods that may still provide an unconfounded effect estimate, provided that (i) all subjects with preclinical disease severe enough to affect treatment (i.e., smoking behavior) at a given time $t$ will have their disease clinically diagnosed within the next $x$ (say, $2$) years and (ii) based on subject matter knowledge an upper bound, for example, $3$ years, on $x$ is known.
Causal Discovery {#causal-discovery .unnumbered} ---------------- @cps93 ([-@cps93]) and @pearlverm:tark proposed statistical methods that allowed one to draw causal conclusions from associational data. These methods assume an underlying causal DAG (or equivalently an FFRCISTG). If the DAG is incomplete, then such a model imposes conditional independence relations on the associated joint distribution (via d-separation). @cps93 and @pearlverm:tark ([-@pearlverm:tark]) made the additional assumption that [*all*]{} conditional independence relations that hold in the distribution of the observables are implied by the underlying causal graph, an assumption termed “stability” by @pearlverm:tark ([-@pearlverm:tark]), and “faithfulness” by @cps93. Under this assumption, the underlying DAG may be identified up to a (“Markov”) equivalence class. @cps93 proposed two algorithms that recover such a class, entitled “PC” and “FCI.” While the former presupposes that there are no unobserved common causes, the latter explicitly allows for this possibility. @robins:impossible:1999 ([-@robins:impossible:1999]) and @robins:uniform:2003 ([-@robins:uniform:2003]) pointed out that although these procedures were consistent, they were not uniformly consistent. More recent papers ([-@kalisch:2007]; [-@Colo:Maat:Kali:Rich:lear:2012]) recover uniform consistency for these algorithms by imposing additional assumptions. @spirtes:2014 ([-@spirtes:2014]) in this volume extend this work by developing a variant of the PC Algorithm which is uniformly consistent under weaker assumptions. @shpitser12parameter ([-@shpitser12parameter; -@shpitser2014introduction]), building on @tian02on and @robins:1999 ([-@robins:1999]), develop a theory of *nested Markov models* that relate the structure of a causal DAG to conditional independence relations that arise after re-weighting; see Section \[sec:direct-null\].
This theory, in combination with the theory of graphical Markov models based on Acyclic Directed Mixed Graphs ([-@richardson:2002]; [-@richardson:2003]; [-@wermuth:11]; [-@evans2014]; [-@sadeghi2014]), will facilitate the construction of more powerful[^48] causal discovery algorithms that could (potentially) reveal much more information regarding the structure of a DAG containing hidden variables than algorithms (such as FCI) that solely use conditional independence. Extrapolation and Transportability of Treatment Effects {#extrapolation-and-transportability-of-treatment-effects .unnumbered} ------------------------------------------------------- Quality longitudinal data are often only available in high resource settings. An important question is when and how such data can be used to inform the choice of treatment strategy in low resource settings. To help answer this question, @robins2008estimation ([-@robins2008estimation]) studied the extrapolation of optimal dynamic treatment strategies between two HIV-infected patient populations. The authors considered the treatment strategies $g_x$, of the same form as those defined in Section \[sec:msm\], namely, “start anti-retroviral therapy the first time at which the measured CD4 count falls below $x$.” Given a utility measure $Y$, their goal was to find the regime $g_{x_{\mathrm{opt}}}$ that maximizes $E[ Y(g_x)]$ in the second low-resource population when good longitudinal data are available only in the first high-resource population. Due to differences in resources, the frequency of CD4 testing in the first population is much greater than in the second and, furthermore, for logistical and/or financial reasons, the testing frequencies cannot be altered. In this setting, the authors derived conditions under which data from the first population are sufficient to identify $g_{x_{\mathrm{opt}}}$ and constructed IPTW estimators of $g_{x_{\mathrm{opt}}}$ under those conditions.
A key finding is that owing to the differential rates of testing, a necessary condition for identification is that CD4 testing has no direct causal effect on $Y$ not through anti-retroviral therapy. In this issue, @pearl:2014 ([-@pearl:2014]) study the related question of transportability between populations using graphical tools. Interference, Interactions and Quantum Mechanics {#interference-interactions-and-quantum-mechanics .unnumbered} ------------------------------------------------ Within a counterfactual causal model, @cox1958 ([-@cox1958]) defined there to be *interference between treatments* if the response of some subject depends not only on their treatment but on that of others as well. On the other hand, @Vand:Robi:mini:2009 ([-@Vand:Robi:mini:2009]) defined two binary treatments $ ( a_{1},a_{2} ) $ to be *causally interacting* to cause a binary response $Y$ if for some unit $Y( 1,1) \neq Y( 1,0) =Y(0,1) $; @Vand:epis:2010 ([-@Vand:epis:2010]) defined the interaction to be *epistatic* if $Y( 1,1) \neq Y( 1,0) =Y(0,1) =Y( 0,0)$. VanderWeele with his collaborators has developed a very general theory of empirical tests for causal interaction of different types ([-@Vand:Robi:mini:2009]; [-@Vand:epis:2010], [-@Vand:suff:2010]; [-@vanderweele2012]). @robins2012proof ([-@robins2012proof]) showed, perhaps surprisingly, that this theory could be used to give a simple but novel proof of an important result in quantum mechanics known as Bell’s theorem. The proof was based on two insights: The first was that the consequent of Bell’s theorem could, by using the Neyman causal model, be recast as the statement that there is interference between a certain pair of treatments. The second was to recognize that empirical tests for causal interaction can be reinterpreted as tests for certain forms of interference between treatments, including the form needed to prove Bell’s theorem. 
@vanderweele2012mapping ([-@vanderweele2012mapping]) used this latter insight to show that existing empirical tests for causal interactions could be used to test for interference and spillover effects in vaccine trials and in many other settings in which interference and spillover effects may be present. The papers @ogburn:2014 ([-@ogburn:2014]) and @vanderweele:2014 in this issue contain further results on interference and spillover effects. Multiple Imputation {#multiple-imputation .unnumbered} ------------------- @wang1998large ([-@wang1998large]) and @robins2000inference ([-@robins2000inference]) studied the statistical properties of the multiple imputation approach to missing data ([-@rubin2004multiple]). They derived a variance estimator that is consistent for the asymptotic variance of a multiple imputation estimator even under misspecification and incompatibility of the imputation and the (complete data) analysis model. They also characterized the large sample bias of the variance estimator proposed by @Rubi:mult:1978 ([-@Rubi:mult:1978]). Posterior Predictive Checks {#posterior-predictive-checks .unnumbered} --------------------------- @robins2000asymptotic ([-@robins2000asymptotic]) studied the asymptotic null distributions of the posterior predictive p-value of @rubin1984bayesianly ([-@rubin1984bayesianly]) and @guttman1967use ([-@guttman1967use]) and of the conditional predictive and partial posterior predictive p-values of @bayarri2000p ([-@bayarri2000p]). They found the latter two p-values to have an asymptotically uniform null distribution; in contrast, they found that the posterior predictive p-value could be very conservative, thereby diminishing its power to detect a misspecified model. In response, Robins et al. derived an adjusted version of the posterior predictive p-value that was asymptotically uniform.
Sensitivity Analysis {#sensitivity-analysis .unnumbered} -------------------- Understanding that epidemiologists will almost never succeed in collecting data on all covariates needed to fully prevent confounding by unmeasured factors and/or nonignorable missing data, Robins with collaborators Daniel Scharfstein and Andrea Rotnitzky developed methods for conducting sensitivity analyses. See, for example, @Scha:Rotn:Robi:adju:1999, @robins2000sensitivity and @robins2002covariance ([-@robins2002covariance], pages 319–321). In this issue, @richardson:hudgens:2014 ([-@richardson:hudgens:2014]) describe methods for sensitivity analysis and present several applied examples. Public Health Impact {#public-health-impact .unnumbered} -------------------- Finally, we have not discussed the large impact of the methods that Robins introduced on the substantive analysis of longitudinal data in epidemiology and other fields. Many researchers have been involved in transforming Robins’ work on time-varying treatments into increasingly reliable, robust analytic tools and in applying these tools to help answer questions of public health importance. List of Acronyms Used {#acronyms .unnumbered} ===================== ----------- --------------------------------- --------------------------------------------------------------------- CAR: Section \[sec:semipar-eff\] coarsened at random. CD4: Section \[sec:tree-graph\] (medical) cell line depleted by HIV. CDE: Section \[sec:cde\] controlled direct effect. CMA: Section \[sec:dags\] causal Markov assumption. DAG: Section \[sec:dags\] directed acyclic graph. DR: Section \[sec:semipar-eff\] doubly robust. dSWIG: Section \[sec:dynamic-regimes\] dynamic single-world intervention graph. FFRCISTG: Section \[sec:tree-graph\] finest fully randomized causally interpreted structured tree graph. HIV: Section \[sec:tree-graph\] (medical) human immunodeficiency virus. IPCW: Section \[sec:censoring\] inverse probability of censoring weighted. 
IPTW: Section \[sec:msm\] inverse probability of treatment weighted. ITT: Section \[sec:censoring\] intention to treat. MI: Section \[sec:pde\] (medical) myocardial infarction. MSM: Section \[sec:msm\] marginal structural model. ----------- --------------------------------- --------------------------------------------------------------------- ----------- ---------------------------- ------------------------------------------------------------------ NPSEM: Section \[sec:tree-graph\] nonparametric structural equation model. NPSEM-IE: Section \[sec:tree-graph\] nonparametric structural equation model with independent errors. PDE: Section \[sec:pde\] pure direct effects. PSDE: Section \[sec:psde\] principal stratum direct effects. RCT: Section \[sec:tree-graph\] randomized clinical trial. SNM: Section \[sec:snm\] structural nested model. SNDM: Section \[sec:snm\] structural nested distribution model. SNFTM: Section \[sec:snm\] structural nested failure time model. SNMM: Section \[sec:snm\] structural nested mean model. SWIG: Section \[sec:dags\] single-world intervention graph. TIE: Section \[sec:pde\] total indirect effect. ----------- ---------------------------- ------------------------------------------------------------------ Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported by the US National Institutes of Health Grant R01 AI032475. Aalen, O. (1978). Nonparametric inference for a family of counting processes. *The Annals of Statistics* 6, 701–726. Andersen, P., O. Borgan, R. Gill, and N. Keiding (1992). *Statistical Models Based on Counting Processes*. Springer. Aronow, P. M., D. P. Green, and D. K. K. Lee (2014). Sharp bounds on the variance in randomized experiments. *The Annals of Statistics* 42(3), 850–871. Balke, A. and J. Pearl (1994). Probabilistic evaluation of counterfactual queries.
In *Proceedings of the 12th Conference on Artificial Intelligence*, Volume 1, Menlo Park, CA, pp. 230–237. MIT Press. Bang, H. and J. M. Robins (2005). Doubly robust estimation in missing data and causal inference models. *Biometrics* 61(4), 962–973. Bayarri, M. J. and J. O. Berger (2000). P values for composite null models (with discussion). *Journal of the American Statistical Association* 95(452), 1127–1170. Bellman, R. (1957). *Dynamic Programming* (1st ed.). Princeton, NJ: Princeton University Press. Bickel, P. J., C. A. J. Klaassen, Y. Ritov, and J. A. Wellner (1993). *Efficient and Adaptive Estimation for Semiparametric Models*. Baltimore: Johns Hopkins University Press. Blalock, H. M. (Ed.) (1971). *Causal Models in the Social Sciences*. Chicago. Cai, T. T., M. Levine, and L. Wang (2009). Variance function estimation in multivariate nonparametric regression with fixed design. *Journal of Multivariate Analysis* 100(1), 126–136. Cassel, C. M., C. E. Särndal, and J. H. Wretman (1976). Some results on generalized difference estimation and generalized regression estimation for finite populations. *Biometrika* 63, 615–620. Cator, E. A. (2004). On the testability of the CAR assumption. *The Annals of Statistics* 32(5), 1957–1980. Colombo, D., M. H. Maathuis, M. Kalisch, and T. S. Richardson (2012). Learning high-dimensional directed acyclic graphs with latent and selection variables. *The Annals of Statistics* 40(1), 294–321. Cox, D. R. (1958). *Planning of Experiments*. Wiley. Cox, D. R. (1972). Regression models and life-tables (with discussion). *Journal of the Royal Statistical Society, Series B: Methodological* 34, 187–220. Cox, D. R. and N. Wermuth (1999). Likelihood factorizations for mixed discrete and continuous variables. *Scandinavian Journal of Statistics* 26(2), 209–220. Dudik, M., D. Erhan, J. Langford, and L. Li (2014). Doubly robust policy evaluation and learning. *Statistical Science* 29(4), ??–?? Efron, B. and D. V. Hinkley (1978). Assessing the accuracy of the maximum likelihood estimator: observed versus expected Fisher information. *Biometrika* 65(3), 457–487. With comments by Ole Barndorff-Nielsen, A. T. James, G. K. Robinson and D. A. Sprott and a reply by the authors. Evans, R. J. and T. S. Richardson (2014). Markovian acyclic directed mixed graphs for discrete data. *The Annals of Statistics* 42(4), 1452–1482. Firth, D. and K. E. Bennett (1998). Robust models in probability sampling (with discussion). *Journal of the Royal Statistical Society, Series B: Statistical Methodology* 60, 3–21. Fleming, T. R. and D. P. Harrington (1991). *Counting Processes and Survival Analysis*. John Wiley & Sons. Frangakis, C. E. and D. B. Rubin (1999). Addressing complications of intention-to-treat analysis in the combined presence of all-or-none treatment-noncompliance and subsequent missing outcomes. *Biometrika* 86(2), 365–379. Frangakis, C. E. and D. B. Rubin (2002). Principal stratification in causal inference. *Biometrics* 58(1), 21–29. Freedman, D. A. (2006). Statistical models for causation: What inferential leverage do they provide? *Evaluation Review* 30(6), 691–713. Gilbert, E. S. (1982). Some confounding factors in the study of mortality and occupational exposures. *American Journal of Epidemiology* 116(1), 177–188. Gilbert, P. B., R. J. Bosch, and M. G. Hudgens (2003). Sensitivity analysis for the assessment of causal vaccine effects on viral load in HIV vaccine trials. *Biometrics* 59(3), 531–541. Gill, R. D. (2014). Statistics, causality and Bell’s theorem. *Statistical Science* 29(4), ??–?? Gill, R. D. and J. M. Robins (2001). Causal inference for complex longitudinal data: The continuous case. *The Annals of Statistics* 29(6), 1785–1811. Gill, R. D., M. J. van der Laan, and J. M. Robins (1997). Coarsening at random: Characterizations, conjectures, counter-examples. In *Survival Analysis. Proceedings of the First Seattle Symposium in Biostatistics* (Lecture Notes in Statistics Vol. 123), pp. 255–294. Springer. Guttman, I. (1967). The use of the concept of a future observation in goodness-of-fit problems. *Journal of the Royal Statistical Society, Series B: Methodological* 29, 83–100. Heitjan, D. F. and D. B. Rubin (1991). Ignorability and coarse data. *The Annals of Statistics* 19, 2244–2253. Hernán, M. A., J. M. Robins, and L. A. García Rodríguez (2005). Discussion on “Statistical issues arising in the Women’s Health Initiative”. *Biometrics* 61(4), 922–930. Hernán, M. A., E. Lanoy, D. Costagliola, and J. M. Robins (2006). Comparison of dynamic treatment regimes via inverse probability weighting. *Basic & Clinical Pharmacology & Toxicology* 98(3), 237–242. Hernán, M. A., A. Alonso, R. Logan, F. Grodstein, K. B. Michels, M. J. Stampfer, W. C. Willett, J. E. Manson, and J. M. Robins (2008). Observational studies analyzed like randomized experiments: an application to postmenopausal hormone therapy and coronary heart disease. *Epidemiology* 19(6), 766. Huang, Y. and M. Valtorta (2006). Pearl’s calculus of interventions is complete. In *Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence*. Kalbfleisch, J. D. and R. L. Prentice (1980). *The Statistical Analysis of Failure Time Data*. John Wiley & Sons. Kalisch, M. and P. Bühlmann (2007). Estimating high-dimensional directed acyclic graphs with the PC-algorithm. *Journal of Machine Learning Research* 8, 613–636. Keiding, N. and D. Clayton (2014). Standardization and control for confounding in observational studies: a historical perspective. *Statistical Science* 29, ??–?? Manski, C. (1990). Non-parametric bounds on treatment effects. *American Economic Review* 80, 351–374. Miettinen, O. S. and E. F. Cook (1981). Confounding: Essence and detection. *American Journal of Epidemiology* 114(4), 593–603. Mohan, K., J. Pearl, and J. Tian (2013). Graphical models for inference with missing data. In *Advances in Neural Information Processing Systems 26*, pp. 1277–1285. Moore, K. L. and M. J. van der Laan (2009). Covariate adjustment in randomized trials with binary outcomes: Targeted maximum likelihood estimation. *Statistics in Medicine* 28(1), 39–64. Murphy, S. A. (2003). Optimal dynamic treatment regimes. *Journal of the Royal Statistical Society, Series B: Statistical Methodology* 65(2), 331–366. Neyman, J. (1923). Sur les applications de la théorie des probabilités aux expériences agricoles: Essai des principes. *Roczniki Nauk Rolniczych* X, 1–51. In Polish; English translation by D. Dabrowska and T. Speed in *Statistical Science* 5, 463–472, 1990. Ogburn, E. L. and T. J. VanderWeele (2014). Causal diagrams for interference and contagion. *Statistical Science* 29(4), ??–?? Orellana, L., A. Rotnitzky, and J. M. Robins (2010a). Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, part I: main content. *The International Journal of Biostatistics* 6(2). Orellana, L., A. Rotnitzky, and J. M. Robins (2010b). Dynamic regime marginal structural mean models for estimation of optimal dynamic treatment regimes, part II: proofs of results. *The International Journal of Biostatistics* 6(2). Pearl, J. (1988).
*Probabilistic [R]{}easoning in [I]{}ntelligent [S]{}ystems*. San Mateo, CA: Morgan Kaufmann. (a). . . . Pearl, J. (1995a). Causal diagrams for empirical research (with discussion). *Biometrika* [82]{}, 669–690. (b). . In . , . Pearl, J. (1995b). On the testability of causal models with latent and instrumental variables. In [*Proceedings of the Eleventh Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-95)*]{}, San Francisco, CA, pp.435–443. Morgan Kaufmann. (). . , . Pearl, J. (2000). *Causality*. Cambridge, UK: Cambridge University Press. (). . In . , . Pearl, J. (2001). Direct and indirect effects. In J. S. Breese and D. Koller (Eds.), [*[P]{}roceedings of the 17th Annual Conference on Uncertainty in Artificial Intelligence*]{}, San Francisco, pp. 411–42. Morgan Kaufmann. (). . , [Computer Science Dept., UCLA]{}. Pearl, J. (2012). Eight myths about causality and structural equation models. Technical Report R-393, Computer Science Department, UCLA. (). . . Pearl, J. and E. Bareinboim (2014). External validity: From do-calculus to transportability across populations. *Statistical Science* [29]{}(4), ??–?? (). . In . . , . Pearl, J. and T. Verma (1991). A theory of inferred causation. In J. Allen, R. Fikes, and E. Sandewall (Eds.), [*Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference*]{}, San Mateo, CA, pp. 441–452. Morgan Kaufmann. , , , (). . . Picciotto, S., M. A. Hern[á]{}n, J. H. Page, J. G. Young, and J. M. Robins (2012). Structural nested cumulative failure time models to estimate the effects of interventions. *Journal of the American Statistical Association* [ 107]{}(499), 886–900. , (). . . Quale, C. M., M. J. van Der Laan, and J. M. Robins (2006). Locally efficient estimation with bivariate right-censored data. *Journal of the American Statistical Association* [ 101]{}(475), 1076–1084. (). . . Richardson, T. S. (2003). Markov properties for acyclic directed mixed graphs. 
*Scand. J. Statist.* [30]{}, 145–157. (). . , [Center for Statistics and the Social Sciences, Univ. Washington, Seattle, WA]{}. Richardson, T. S. and J. M. Robins (2013). ingle [W]{}orld [I]{}ntervention [G]{}raphs [(SWIGs)]{}: A unification of the counterfactual and graphical approaches to causality. Technical Report 128, Center for Statistics and the Social Sciences, University of Washington. (). . . Richardson, T. S. and P. Spirtes (2002). Ancestral graph [M]{}arkov models. *Ann. Statist.* [30]{}, 962–1030. , , (). . . Richardson, A., M. G. Hudgens, P. B. Gilbert, and J. P. Fine (2014). Nonparametric bounds and sensitivity analysis of treatment effects. *Statistical Science* [29]{}(4), ??–?? , , (). . Ritov, Y., P. J. Bickel, A. C. Gamst, and B. J. K. Kleijn (2014). The [B]{}ayesian analysis of complex, high-dimensional models: [C]{}an it be [CODA]{}? *Statistical Science* [29]{}(4), ??–?? (). . . Robins, J. M. (1986). A new approach to causal inference in mortality studies with sustained exposure periods [–]{} applications to control of the healthy worker survivor effect. *Mathematical [M]{}odeling* [7]{}, 1393–1512. (a). . . Robins, J. M. (1987a). . *J Chronic Dis* [40 Suppl 2]{}, 139S–161S. (b). . Robins, J. M. (1987b). Addendum to: [*A new approach to causal inference in mortality studies with sustained exposure periods – [A]{}pplication to control of the healthy worker survivor effect*]{}. *Computers and Mathematics with Applications* [14]{}, 923–945. (). . In (, , eds.). , Robins, J. M. (1989). The analysis of randomized and non-randomized [AIDS]{} treatment trials using a new approach to causal inference in longitudinal studies. In L. Sechrest, H. Freeman, and A. Mulley (Eds.), [*Health Service Research Methodology: A focus on [AIDS]{}*]{}. Washington, D.C.: U.S. Public Health Service. (). . . Robins, J. M. (1992). Estimation of the time-dependent accelerated failure time model in the presence of confounding factors. *Biometrika* [79]{}(2), 321–334. (). . 
Robins, J. M. (1993). Analytic methods for estimating HIV-treatment and cofactor effects. In *Methodological Issues in AIDS Behavioral Research*, pp. 213–288. Springer.

Robins, J. M. (1994). Correcting for non-compliance in randomized trials using structural nested mean models. *Communications in Statistics: Theory and Methods* 23, 2379–2412.

Robins, J. M. (1997a). Causal inference from complex longitudinal data. In M. Berkane (Ed.), *Latent variable modelling and applications to causality*, Number 120 in Lecture Notes in Statistics, pp. 69–117. New York: Springer-Verlag.

Robins, J. M. (1997c). Structural nested failure time models. In P. K. Andersen and N. Keiding (Section Eds.), *Survival Analysis*; P. Armitage and T. Colton (Eds.), *Encyclopedia of Biostatistics*, pp. 4372–4389. John Wiley & Sons, Ltd.

Robins, J. M. (1997b). Marginal structural models. In *ASA Proceedings of the Section on Bayesian Statistical Science*, pp. 1–10. American Statistical Association.

Robins, J. M. (1999a). Robust estimation in sequentially ignorable missing data and causal inference models. In *ASA Proceedings of the Section on Bayesian Statistical Science*, pp. 6–10. American Statistical Association.

Robins, J. M. (1999b). Testing and estimation of direct effects by reparameterizing directed acyclic graphs with structural nested models. In C. Glymour and G. Cooper (Eds.), *Computation, Causation, and Discovery*, pp. 349–405. Cambridge, MA: MIT Press.

Robins, J. M. (2000). Marginal structural models versus structural nested models as tools for causal inference. In M. E. Halloran and D. Berry (Eds.), *Statistical Models in Epidemiology, the Environment, and Clinical Trials*, Volume 116 of *The IMA Volumes in Mathematics and its Applications*, pp. 95–133. Springer New York.
Robins, J. M. (2002). Comment on “Covariance adjustment in randomized experiments and observational studies”, by P. R. Rosenbaum. *Statistical Science* 17(3), 309–321.

Robins, J. M. (2004). Optimal structural nested models for optimal sequential decisions. In D. Y. Lin and P. J. Heagerty (Eds.), *Proceedings of the Second Seattle Symposium on Biostatistics*, Number 179 in Lecture Notes in Statistics, pp. 189–326. New York: Springer.

Robins, J. M. (2008). Causal models for estimating the effects of weight gain on mortality. *International Journal of Obesity* 32, S15–S41.

Robins, J. M. and S. Greenland (1989a). Estimability and estimation of excess and etiologic fractions. *Statistics in Medicine* 8, 845–859.

Robins, J. M. and S. Greenland (1989b). The probability of causation under a stochastic model for individual risk. *Biometrics* 45, 1125–1138.

Robins, J. M. and S. Greenland (1992). Identifiability and exchangeability for direct and indirect effects. *Epidemiology* 3, 143–155.

Robins, J. M., M. A. Hernán, and A. Rotnitzky (2007). Invited commentary: effect modification by time-varying covariates. *American Journal of Epidemiology* 166(9), 994–1002.

Robins, J. M. and H. Morgenstern (1987). The foundations of confounding in epidemiology. *Comput. Math. Appl.* 14(9–12), 869–916.

Robins, J. M., L. Orellana, and A. Rotnitzky (2008). Estimation and extrapolation of optimal treatment and testing strategies. *Statistics in Medicine* 27(23), 4678–4721.

Robins, J. M. and T. S. Richardson (2011). Alternative graphical causal models and the identification of direct effects. In P. Shrout, K. Keyes, and K. Ornstein (Eds.), *Causality and Psychopathology: Finding the Determinants of Disorders and their Cures*, Chapter 6, pp. 1–52. Oxford University Press.

Robins, J. M. and Y. Ritov (1997).
Toward a curse of dimensionality appropriate (CODA) asymptotic theory for semi-parametric models. *Statistics in Medicine* 16, 285–319.

Robins, J. M. and A. Rotnitzky (1992). Recovery of information and adjustment for dependent censoring using surrogate markers. In N. P. Jewell, K. Dietz, and V. T. Farewell (Eds.), *AIDS Epidemiology*, pp. 297–331. Birkhäuser Boston.

Robins, J. M. and A. Rotnitzky (2001). Comment on “Inference for semiparametric models: Some questions and an answer”, by P. Bickel. *Statistica Sinica* 11(4), 920–936.

Robins, J. M., A. Rotnitzky, and D. O. Scharfstein (2000). Sensitivity analysis for selection bias and unmeasured confounding in missing data and causal inference models. In *Statistical Models in Epidemiology, the Environment, and Clinical Trials*, pp. 1–94. Springer.

Robins, J., A. Rotnitzky, and S. Vansteelandt (2007). Discussion of *Principal stratification designs to estimate input data missing due to death* by C. E. Frangakis, D. B. Rubin, M.-W. An & E. MacKenzie. *Biometrics* 63(3), 650–653.

Robins, J. M., A. Rotnitzky, and L. P. Zhao (1994). Estimation of regression coefficients when some regressors are not always observed. *Journal of the American Statistical Association* 89, 846–866.

Robins, J. M., T. J. VanderWeele, and R. D. Gill (2012). A proof of Bell’s inequality in quantum mechanics using causal interactions. Available at arXiv:1207.4913.

Robins, J. M., A. van der Vaart, and V. Ventura (2000). Asymptotic distribution of p values in composite null models. *Journal of the American Statistical Association* 95(452), 1143–1156.

Robins, J. M. and N. Wang (2000). Inference for imputation estimators. *Biometrika* 87(1), 113–124.

Robins, J. M. and L. Wasserman (1997).
Estimation of effects of sequential treatments by reparameterizing directed acyclic graphs. In *Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence*, pp. 309–420. Morgan Kaufmann.

Robins, J. M. and L. Wasserman (1999). On the impossibility of inferring causation from association without background knowledge. In C. Glymour and G. Cooper (Eds.), *Computation, Causation, and Discovery*, pp. 305–321. Cambridge, MA: MIT Press.

Robins, J. M. and L. Wasserman (2000). Conditioning, likelihood, and coherence: A review of some foundational concepts. *Journal of the American Statistical Association* 95(452), 1340–1346.

Robins, J. M., D. Blevins, G. Ritter, and M. Wulfsohn (1992). $G$-estimation of the effect of prophylaxis therapy for Pneumocystis carinii pneumonia on the survival of AIDS patients. *Epidemiology* 3, 319–336.

Robins, J. M., R. Scheines, P. Spirtes, and L. Wasserman (2003). Uniform consistency in causal inference. *Biometrika* 90(3), 491–515.

Robins, J. M., L. Li, E. Tchetgen, and A. van der Vaart (2008). Higher order influence functions and minimax estimation of nonlinear functionals. In D. Nolan and T. Speed (Eds.), *Probability and Statistics: Essays in Honor of David A. Freedman*, Volume 2 of *Collections*, pp. 335–421. Beachwood, Ohio, USA: Institute of Mathematical Statistics.

Rotnitzky, A. and J. M. Robins (1995). Semiparametric regression estimation in the presence of dependent censoring. *Biometrika* 82, 805–820.

Rotnitzky, A. and S. Vansteelandt (2014). Double-robust methods. In G. Fitzmaurice, M. Kenward, G. Molenberghs, A. Tsiatis, and G. Verbeke (Eds.), *Handbook of Missing Data Methodology*. Chapman & Hall/CRC Press.

Rubin, D. (1974). Estimating causal effects of treatments in randomized and non-randomized studies.
*Journal of Educational Psychology* 66, 688–701.

Rubin, D. B. (1978a). Bayesian inference for causal effects: The role of randomization. *The Annals of Statistics* 6, 34–58.

Rubin, D. B. (1978b). Multiple imputations in sample surveys: A phenomenological Bayesian approach to nonresponse (C/R: P29–34). In *ASA Proceedings of the Section on Survey Research Methods*, pp. 20–28. American Statistical Association.

Rubin, D. B. (1984). Bayesianly justifiable and relevant frequency calculations for the applied statistician. *The Annals of Statistics* 12, 1151–1172.

Rubin, D. B. (1998). More powerful randomization-based $p$-values in double-blind trials with non-compliance. *Statistics in Medicine* 17, 371–385.

Rubin, D. B. (2004a). Direct and indirect causal effects via potential outcomes. *Scandinavian Journal of Statistics* 31(2), 161–170.

Rubin, D. B. (2004b). *Multiple Imputation for Nonresponse in Surveys*, Volume 81. John Wiley & Sons.

Sadeghi, K. and S. Lauritzen (2014). Markov properties for mixed graphs. *Bernoulli* 20(2), 676–696.

Scharfstein, D. O., A. Rotnitzky, and J. M. Robins (1999). Adjusting for nonignorable drop-out using semiparametric nonresponse models (with discussion). *Journal of the American Statistical Association* 94, 1096–1120.

Schulte, P. J., A. A. Tsiatis, E. B. Laber, and M. Davidian (2014). Q- and A-learning methods for estimating optimal dynamic treatment regimes. *Statistical Science* 29(4), ??–??

Sekhon, J. S. (2008). The Neyman–Rubin model of causal inference and estimation via matching methods. In J. M. Box-Steffensmeier, H. E. Brady, and D. Collier (Eds.), *The Oxford Handbook of Political Methodology*, Chapter 11, pp. 271–299. Oxford Handbooks Online.

Shpitser, I. and J. Pearl (2006).
Identification of joint interventional distributions in recursive semi-Markovian causal models. In *Proceedings of the 21st National Conference on Artificial Intelligence*.

Shpitser, I., T. S. Richardson, J. M. Robins, and R. J. Evans (2012). Parameter and structure learning in nested Markov models. In *Causal Structure Learning Workshop of the 28th Conference on Uncertainty in Artificial Intelligence (UAI-12)*.

Shpitser, I., R. J. Evans, T. S. Richardson, and J. M. Robins (2014). Introduction to nested Markov models. *Behaviormetrika* 41(1), 3–39.

Spirtes, P., C. Glymour, and R. Scheines (1993). *Causation, Prediction and Search*. Number 81 in Lecture Notes in Statistics. Springer-Verlag.

Spirtes, P. and J. Zhang (2014). A uniformly consistent estimator of causal effects under the $k$-triangle-faithfulness assumption. *Statistical Science* 29(4), ??–??

Tian, J. (2008). Identifying dynamic sequential plans. In *24th Conference on Uncertainty in Artificial Intelligence (UAI-08)*. AUAI Press.

Tian, J. and J. Pearl (2002a). A general identification condition for causal effects. In *Eighteenth National Conference on Artificial Intelligence*, pp. 567–573.

Tian, J. and J. Pearl (2002b). On the testable implications of causal models with hidden variables. In *Proceedings of UAI-02*, pp. 519–527.

Tsiatis, A. A. (2006). *Semiparametric Theory and Missing Data*. New York, NY: Springer.

Tsiatis, A. A., M. Davidian, M. Zhang, and X. Lu (2008). Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: A principled yet flexible approach. *Statistics in Medicine* 27(23), 4658–4677.

VanderWeele, T. J. (2010a). Epistatic interactions. *Statistical Applications in Genetics and Molecular Biology* 9(1), NA–NA.

VanderWeele, T. J. (2010b).
Sufficient cause interactions for categorical and ordinal exposures with three levels. *Biometrika* 97(3), 647–659.

VanderWeele, T. J. and T. S. Richardson (2012). General theory for interactions in sufficient cause models with dichotomous exposures. *The Annals of Statistics* 40(4), 2128–2161.

VanderWeele, T. J. and J. M. Robins (2009). Minimal sufficient causation and directed acyclic graphs. *The Annals of Statistics* 37(3), 1437–1465.

VanderWeele, T. J. and I. Shpitser (2013). On the definition of a confounder. *The Annals of Statistics* 41(1), 196–220.

VanderWeele, T. J., E. J. Tchetgen Tchetgen, and M. E. Halloran (2014). Interference and sensitivity analysis. *Statistical Science* 29(4), ??–??

VanderWeele, T. J., J. P. Vandenbroucke, E. J. T. Tchetgen, and J. M. Robins (2012). A mapping between interactions and interference: implications for vaccine trials. *Epidemiology* 23(2), 285–292.

Vansteelandt, S. and M. Joffe (2014). Structural nested models and G-estimation: the partially realized promise. *Statistical Science* 29(4), ??–??

van der Laan, M. J., A. E. Hubbard, and J. M. Robins (2002). Locally efficient estimation of a multivariate survival function in longitudinal studies. *Journal of the American Statistical Association* 97(458), 494–507.

van der Laan, M. J. and M. L. Petersen (2007). Causal effect models for realistic individualized treatment and intention to treat rules. *The International Journal of Biostatistics* 3(1-A3), 1–53.

van der Laan, M. J. and J. M. Robins (2003). *Unified Methods for Censored Longitudinal Data and Causality*. New York, NY: Springer.

van der Laan, M. J. and S. Rose (2011). *Targeted Learning: Causal Inference for Observational and Experimental Data*. Springer.

van der Laan, M. J. and D. Rubin (2006). Targeted maximum likelihood learning.
*The International Journal of Biostatistics* 2(1-A11), 1–39.

van der Vaart, A. (1991). On differentiable functionals. *The Annals of Statistics* 19, 178–204.

van der Vaart, A. (2014). Higher order tangent spaces and influence functions. *Statistical Science* 29(4), ??–??

Verma, T. and J. Pearl (1990). Equivalence and synthesis of causal models. In M. Henrion, R. Shachter, L. Kanal, and J. Lemmer (Eds.), *Uncertainty in Artificial Intelligence: Proceedings of the 6th Conference*, Mountain View, CA, pp. 220–227. Association for Uncertainty in AI.

Wang, N. and J. M. Robins (1998). Large-sample theory for parametric multiple imputation procedures. *Biometrika* 85(4), 935–948.

Wang, L., L. D. Brown, T. T. Cai, and M. Levine (2008). Effect of mean on variance function estimation in nonparametric regression. *The Annals of Statistics* 36(2), 646–664.

Wermuth, N. (2011). Probability distributions with summary graph structure. *Bernoulli* 17(3), 845–879.

[^1]: @Robi:Gree:esti:1989 ([-@Robi:Gree:esti:1989; -@Robi:Gree:prob:1989]) provided a formal definition of the probability of causation and a definitive answer to the question in the following sense. They proved that the probability of causation was not identified from epidemiologic data even in the absence of confounding, but that sharp upper and lower bounds could be obtained. Specifically, under the assumption that a workplace exposure was never beneficial, the probability $P(t)$ that a worker's death occurring $t$ years after exposure was due to that exposure was sharply upper bounded by $1$ and lower bounded by $\max [ 0,\{f_{1}(t)-f_{0}(t)\}/f_{1}(t) ]$, where $f_{1}(t)$ and $f_{0}(t)$ are, respectively, the marginal densities in the exposed and unexposed cohorts of the random variable $T$ encoding time to death.
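The Robins–Greenland lower bound $\max [ 0,\{f_{1}(t)-f_{0}(t)\}/f_{1}(t) ]$ is straightforward to evaluate once the two densities are in hand; a minimal numerical sketch (the exponential densities below are illustrative assumptions, not values from any study):

```python
import math

def poc_lower_bound(f1_t, f0_t):
    """Sharp lower bound on the probability of causation at time t:
    max(0, {f1(t) - f0(t)} / f1(t)), valid under the assumption that
    exposure is never beneficial and f1(t) > 0."""
    return max(0.0, (f1_t - f0_t) / f1_t)

# Hypothetical exponential time-to-death densities: the exposed cohort
# (rate 0.2) dies faster than the unexposed cohort (rate 0.1).
f1 = lambda t: 0.2 * math.exp(-0.2 * t)
f0 = lambda t: 0.1 * math.exp(-0.1 * t)

lb = poc_lower_bound(f1(1.0), f0(1.0))  # lower bound for a death at t = 1
```

When exposure increases mortality ($f_1 > f_0$ at $t$) the bound is the excess fraction of the exposed-cohort density; when $f_1 \le f_0$ it collapses to the trivial bound $0$.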
[^2]: The author, Ethel Gilbert, is the mother of Peter Gilbert, who is a contributor to this special issue; see ([-@richardson:hudgens:2014]).

[^3]: In the epidemiologic literature, this bestiary is sometimes referred to as the collection of “g-methods.”

[^4]: A complete list of acronyms used is given before the References.

[^5]: See @Freedman01122006 ([-@Freedman01122006]) and @sekhon2008neyman ([-@sekhon2008neyman]) for historical reviews of the counterfactual point treatment model.

[^6]: Robins published an informal, accessible summary of his main results in the epidemiologic literature ([-@robins:simpleversion:1987]). However, it was not until [-@robins:1992] (and many rejections) that his work on causal inference with time-varying treatments appeared in a major statistical journal.

[^7]: The perhaps more familiar *Non-Parametric Structural Equation Model with Independent Errors* (NPSEM-IE) considered by Pearl may be viewed as a submodel of Robins’ FFRCISTG. A *Non-Parametric Structural Equation Model* (NPSEM) assumes that all variables ($V$) can be intervened on. In contrast, the FFRCISTG model does not require one to assume this. However, if all variables in $V$ can be intervened on, then the FFRCISTG specifies a set of one-step ahead counterfactuals, $V_{m}(\overline{v}_{m-1})$, which may equivalently be written as structural equations $V_{m}(\overline{v}_{m-1})=f_{m}(\overline{v}_{m-1},\varepsilon_{m})$ for functions $f_{m}$ and (vector-valued) random errors $\varepsilon_{m}$. Thus, leaving aside notational differences, structural equations and one-step ahead counterfactuals are equivalent. All other counterfactuals, as well as factual variables, are then obtained by recursive substitution. However, the NPSEM-IE model of Pearl ([-@pearl:2000]) further assumes the errors $\varepsilon_{m}$ are jointly independent.
In contrast, though an FFRCISTG model is also an NPSEM, the errors (associated with incompatible counterfactual worlds) may be dependent—though any such dependence could not be detected in an RCT. Hence, Pearl’s model is a strict submodel of an FFRCISTG model.

[^8]: In practice, there will almost always exist baseline covariates measured prior to $A_{1}$. In that case, the analysis in the text is to be understood as being within a given joint stratum of a set of baseline covariates sufficient to adjust for confounding due to baseline factors.

[^9]: Of course, one can never be certain that the epidemiologists were successful, which is why RCTs are generally considered the gold standard for establishing causal effects.

[^10]: That is, the trials starting at $t=2$ are on study populations defined by specific $(A_1,L)$-histories.

[^11]: The g-formula density for $Y$ is a generalization of standardization of effect measures to time-varying treatments. See @keiding:2014 ([-@keiding:2014]) for a historical review of standardization.

[^12]: Note that the distribution of $L(a_{1})$ is no longer identified under this weaker assumption.

[^13]: More precisely, we obtain the SWIG independence $Y(a_{1},a_{2}){\protect\mathpalette{\protect\independenT}{\perp}}A_{2}(a_{1}) \mid A_{1},L(a_1)$, which implies (\[eq:ind2\]) by the consistency assumption after instantiating $A_{1}$ at $a_{1}$. Note that when checking d-separation on a SWIG, all paths containing red “fixed” nonrandom vertices, such as $a_1$, are treated as always being blocked (regardless of the conditioning set).

[^14]: Above we have assumed the variables $A_{1}$, $L$, $A_{2}$, $Y$ occurring in the g-formula are temporally ordered. Interestingly, @robins:1986 ([-@robins:1986], Section 11) showed that identification by the g-formula can require a nontemporal ordering.
In his analysis of the Healthy Worker Survivor Effect, data were available on temporally ordered variables $(A_{1},L_{1},A_{2},L_{2},Y)$, where the $L_{t}$ are indicators of survival through year $t$, $A_{t}$ is the indicator of exposure to a lung carcinogen, and there exists substantive background knowledge that carcinogen exposure at $t$ cannot cause death within a year. Under these assumptions, Robins proved that equation (\[eq:statrand\]) was false if one respected temporal order and chose $L$ to be $L_{1}$, but was true if one chose $L=L_{2}$. Thus, $E[Y(a_{1},a_{2})]$ was identified by the g-formula $f_{a_{1},a_{2}}^{\ast}(y)$ only for $L=L_{2}$. See (@richardson:robins:2013, [-@richardson:robins:2013], page 54) for further details.

[^15]: @pearl:biom ([-@pearl:biom]) introduced an identical notation except that he substituted the word “do” for “$g=$,” thus writing $f(y \mid \operatorname{do}(a_{1},a_{2}))$.

[^16]: If the $L\rightarrow Y$ edge is present, then $A_{1}$ still has an effect on $Y$.

[^17]: The dependence of $f(y \mid a_{1},l,a_{2})$ on $a_{1}$ does not represent causation but rather selection bias due to conditioning on the common effect $L$ of $H_{1}$ and $A_{1}$.

[^18]: But see @cox:wermuth:1999 ([-@cox:wermuth:1999]) for another approach.

[^19]: In the literature, semiparametric estimation of the parameters of an SNM based on such estimating functions is referred to as g-estimation.

[^20]: Interestingly, @robins:iv:1989 ([-@robins:iv:1989], page 127 and App. 1), unaware of Bellman’s work, reinvented the method of dynamic programming but remarked that, due to the difficulty of the estimation problem, it would only be of theoretical interest for finding the optimal dynamic regimes from longitudinal epidemiological data.

[^21]: See also @Robi:Gree:prob:1989 ([-@Robi:Gree:esti:1989; -@Robi:Gree:prob:1989]).
[^22]: @balke:pearl:1994 ([-@balke:pearl:1994]) showed that Robins’ bounds were not sharp in the presence of “defiers” (i.e., subjects who would never take the treatment assigned) and derived sharp bounds in that case. [^23]: A viewpoint recently explored by @mohan:pearl:tian:2013 ([-@mohan:pearl:tian:2013]). [^24]: IPTW estimators and IPCW estimators are essentially equivalent. For instance, in the censoring example of Section \[sec:censoring\], on the event $A_{2}=0$ of being uncensored, the IPCW denominator $\widehat{pr}(A_{2}=0 \mid L,A_{1}) $ equals $f(A_{2} \mid A_{1},L)$, the IPTW denominator. [^25]: More formally, recall that under (\[eq:statrand\]), $E[ Y(a_{1},a_{2}) ] =\Phi\{ \beta_{0}^{\ast}+\gamma(a_{1},a_{2};\beta_{1}^{\ast}) \} $ is equal to the g-formula $\int yf_{ a_{1},a_{2} }^{\ast}( y) \,dy$. Now, given the joint density of the data $f( A_{1},L,A_{2},Y) $, define $$\widetilde{f}( A_{1},L,A_{2},Y) =f( Y\mid A_{1},L,A_{2}) \widetilde{f}_{2}(A_{2}) f( L\mid A_{1}) \widetilde{f}_{1}( A_{1}),$$ where $\widetilde{f}_{1}( A_{1}) \widetilde{f}_{2}( A_{2}) $ are user-supplied densities chosen so that $\widetilde{f}$ is absolutely continuous with respect to $f$. Since the g-formula depends on the joint density of the data only through $f( Y \mid A_{1},L,A_{2}) $ and $f(L \mid A_{1})$, then it is identical under $\widetilde{f}$ and under $f$. Furthermore, for each $a_{1}$, $a_{2}$ the g-formula under $\widetilde{f}$ is just equal to $\widetilde{E}[ Y \mid A_{1}=a_{1},A_{2}=a_{2}] $ since, under $\widetilde{f}$, $A_{2}$ is independent of $ \{L,A_{1} \}$. 
Consequently, for any $q( A_{1},A_{2})$ $$\begin{aligned} \everymath{\displaystyle} \begin{array}{rcl} 0&= & \widetilde{E} \bigl[ q( A_{1},A_{2}) \bigl( Y-\Phi \bigl\{ \beta _{0}^{\ast}+\gamma \bigl( A_{1},A_{2}; \beta_{1}^{\ast} \bigr) \bigr\} \bigr) \bigr] \\[6pt] & =& E \bigl[ q( A_{1},A_{2}) \bigl\{ \widetilde{f}_{1}( A_{1}) \widetilde{f}_{2}( A_{2}) / \bigl\{ f( A_{1}) f(A_{2} \mid A_{1},L) \bigr\} \bigr\} \\[6pt] && \hspace*{79pt} {} \cdot \bigl( Y-\Phi\bigl\{ \beta_{0}^{\ast} + \gamma\bigl( A_{1},A_{2};\beta_{1}^{\ast}\bigr)\bigr\} \bigr) \bigr], \end{array}\end{aligned}$$ where the second equality follows from the Radon–Nikodym theorem. The result then follows by taking $q( A_{1},A_{2}) =v(A_{1},A_{2})/ \{\widetilde{f}_{1}( A_{1}) \widetilde{f}_{2}( A_{2})\}$.

[^26]: Note that, as observed earlier, in this case identification is achieved through parametric assumptions made by the SNM.

[^27]: See (\[eq:g-formula-for-y\]).

[^28]: The analysis of @Rubi:dire:2004 ([-@Rubi:dire:2004]) was also based on this contrast, with $A_2$ no longer a failure time indicator so that the contrast (\[eq:psde-contrast\]) could be considered well-defined for any value of $a_{2}$ for which the conditioning event had positive probability.

[^29]: For subjects for whom $A_{2}(a_{1} = 1)\neq A_{2}(a_{1} = 0)$, no principal stratum direct effect (PSDE) is defined.

[^30]: This follows from consistency.

[^31]: This follows by consistency.

[^32]: @Robi:Gree:iden:1992 ([-@Robi:Gree:iden:1992]) also defined the total indirect effect (TIE) of $A_{1}$ on $Y$ through $A_{2}$ to be $$E\bigl[ Y\bigl\{a_{1} = 1,A_{2}(a_{1} = 1)\bigr \}\bigr] - E\bigl[Y\bigl\{a_{1} = 1,A_{2}(a_{1} = 0)\bigr\}\bigr] .$$ It follows that the total effect $E[ Y\{a_{1} = 1\}] -E[Y\{a_{1} = 0\}]$ can then be decomposed as the sum of the PDE and the TIE.
[^33]: In more detail, the FFRCISTG associated with Figures \[fig:no-confound\](a) and (b) assumes for all $a_{1}$, $a_{2}$, $$\label{eq:ffrcistgforpde} \quad Y(a_{1},a_{2}),A_{2}(a_{1}) {\protect\mathpalette{\protect\independenT}{\perp}}A_{1},\quad Y(a_{1},a_{2}) {\protect\mathpalette{\protect\independenT}{\perp}}A_{2}(a_{1})\mid A_{1},$$ which may be read directly from the SWIG shown in Figure \[fig:no-confound\](b); recall that red nodes are always blocked when applying d-separation. In contrast, Pearl’s NPSEM-IE also implies the independence $$\label{eq:npsem-ie} Y(a_{1},a_{2}) {\protect\mathpalette{\protect\independenT}{\perp}}A_{2} \bigl(a_{1}^{*}\bigr)\mid A_{1},$$ when $a_1\neq a_1^*$. Independence (\[eq:npsem-ie\]), which is needed in order for the PDE to be identified, is a “cross-world” independence since $Y(a_{1},a_{2})$ and $A_{2}(a_{1}^{*})$ could never (even in principle) both be observed in any randomized experiment. [^34]: A point freely acknowledged by @pearl:myths:2012 ([-@pearl:myths:2012]) who argues that causation should be viewed as more primitive than intervention. [^35]: This point identification is not a “free lunch”: @robins:mcm:2011 ([-@robins:mcm:2011]) show that it is these additional assumptions that have reduced the FFRCISTG bounds for the PDE to a point. This is a consequence of the fact that these assumptions induce a model *for the original variables $\{A_1, A_2(a_1), Y(a_1,a_2)\}$* that is a strict submodel of the original FFRCISTG model. Hence to justify applying the mediation formula by this route one must first be able to specify in detail the additional treatment variables and the associated intervention so as to make the relevant potential outcomes well-defined. In addition, one must be able to argue on substantive grounds for the plausibility of the required no direct effect assumptions and deterministic relations. 
It should also be noted that even under Pearl’s NPSEM-IE model the PDE is not identified in causal graphs, such as those in Figures \[fig:seq-rand\] and \[fig:seq-rand-variant\], that contain a variable (whether observed or unobserved) that is present both on a directed pathway from $A_1$ to $A_2$ and on a pathway from $A_1$ to $Y$.

[^36]: Note that in a linear structural equation model the PSDE is not defined unless $A_1$ has no effect on $A_2$.

[^37]: Results in @pearl95on ([-@pearl95on]) imply that under the sharp direct effect null the FFRCISTGs associated with the DAGs shown in Figures \[fig:seq-rand\] and \[fig:seq-rand-variant\] also satisfy inequality restrictions similar to Bell’s inequality in quantum mechanics. See @gill:2014 ([-@gill:2014]) for discussion of statistical issues arising from experimental tests of Bell’s inequality.

[^38]: To our knowledge, it is the first such causal null hypothesis considered in Epidemiology for which this is the case.

[^39]: This observation motivated the development of graphical “nested” Markov models that encode constraints such as (\[eq:verma-constraint\]) in addition to ordinary conditional independence relations; see the discussion of “Causal Discovery” in Section \[sec:otherwork\] below.

[^40]: In response, @robins:optimal:2004 ([-@robins:optimal:2004], Section 5.2) offered a Bayes–frequentist compromise that combines honest subjective Bayesian decision making under uncertainty with good frequentist behavior even when, as above, the model is so large and the likelihood function so complex that standard (uncompromised) Bayes procedures have poor frequentist performance. The key to the compromise is that the Bayesian decision maker is only allowed to observe a specified vector function of $X$ \[depending on the known $\pi^{\ast}( X) $\] but not $X$ itself.
[^41]: Given complete data $X$, an always observed coarsening variable $R$, and a known coarsening function $x_{(r)}=c(r,x)$, *coarsening at random* (CAR) is said to hold if $\Pr(R=r \mid X)$ depends only on $X_{(r)}$, the observed data part of $X$. @robins:rotnitzky:recovery:1992 ([-@robins:rotnitzky:recovery:1992]), @Gill:van:Robi:coar:1997 ([-@Gill:van:Robi:coar:1997]) and @cator2004 ([-@cator2004]) showed that in certain models assuming CAR places no restrictions on the distribution of the observed data. For such models, we can pretend CAR holds when our goal is estimation of functionals of the observed data distribution. This trick often helps to derive efficient estimators of the functional. In this section, we assume that the distribution of the observables is compatible with CAR, and further, that in the estimation problems that we consider, CAR may be assumed to hold without loss of generality. In fact, this is the case in the context of our running causal inference example from Section \[sec:tree-graph\]. Specifically, let $X= \{ Y(a_{1},a_{2}),L(a_{1});a_{j}\in \{ 0,1 \} ,j=1,2 \} $, $R= ( A_{1},A_{2} ) $, and $X_{ (a_{1},a_{2} ) }= \{ Y(a_{1},a_{2}),L(a_{1}) \} $. Consider a model $M_{X}$ for $X$ that specifies (i) $ \{ Y(1,a_{2}),L(1);a_{2}\in \{ 0,1 \} \} {\protect\mathpalette{\protect\independenT}{\perp}}\{ Y(0,a_{2}),L(0);a_{2}\in \{ 0,1 \} \}$ and (ii) $Y(a_{1},1) {\protect\mathpalette{\protect\independenT}{\perp}}Y(a_{1},0) \mid L(a_{1})$ for $a_{1}\in \{ 0,1 \}$. 
Results in @gill2001 ([-@gill2001], Section 6) and @robins00marginal ([-@robins00marginal], Sections 2.1 and 4.2) show that (a) model $M_{X}$ places no further restrictions on the distribution of the observed data $ ( A_{1},A_{2},L,Y ) = ( A_{1},A_{2},L( A_{1}),Y(A_{1},A_{2}) )$, (b) given model $M_{X}$, the additional independences $X {\protect\mathpalette{\protect\independenT}{\perp}}A_{1}$ and $X {\protect\mathpalette{\protect\independenT}{\perp}}A_{2} \mid A_{1},L$ together also place no further restrictions on the distribution of the observed data $ ( A_{1},A_{2},L,Y ) $ and are equivalent to assuming CAR. Further, the independences in (b) imply (\[eq:indg\]) so that $f_{Y(g)}(y)$ is identified by the g-formula $f_{g}^{\ast}(y)$. [^42]: More recently, in the context of a RCT, @tsiatis2008covariate and @moore2009covariate, following the strategy of @robins:rotnitzky:recovery:1992, studied variants of the locally efficient tests and estimators of @Scha:Rotn:Robi:adju:1999 to increase efficiency and power by utilizing data on covariates. [^43]: A function $b( \cdot) $ lies in the Hölder ball $H(\beta,C)$ with Hölder exponent $\beta>0$ and radius $C>0$, if and only if $b( \cdot) $ is bounded in supremum norm by $C$ and all partial derivatives of $b(x)$ up to order $ \lfloor\beta \rfloor$ exist, and all partial derivatives of order $ \lfloor\beta \rfloor$ are Lipschitz with exponent $ ( \beta- \lfloor\beta \rfloor )$ and constant $C$. [^44]: If a subcube contains more than two observations, two are selected randomly, without replacement. [^45]: Observe that $E [ ( Y_{i}-Y_{j} ) ^{2}/2 \mid X_{i},X_{j} ] =\sigma^{2}+ \{ b( X_{i}) -b( X_{j}) \} ^{2}/2$, ${\vert}b( X_{i}) -b( X_{j}){\vert}=O ({\Vert}X_{i}-X_{j} {\Vert}^{\beta} )$ as $\beta<1$, and ${\Vert}X_{i}-X_{j}{\Vert}=d^{1/2}O( k^{-1/d})$ when $X_{i}$ and $X_{j}$ are in the same subcube. It follows that the estimator has variance of order $k/n^{2}$ and bias of order $O(k^{-2\beta/d})$. 
Variance and the squared bias are equated by solving $k/n^{2}=k^{-4\beta/d}$, which gives $k=n^{2/(1+4\beta/d)}$.

[^46]: Robins has been trying to find an answer to this question without success for a number of years. He suggested that it is now time for some crowd-sourcing.

[^47]: The estimator given above does not attain this rate when $\beta>1$ because it fails to exploit the fact that $b(\cdot)$ is differentiable. In the interest of simplicity, we have posed this as a problem in variance estimation. However, @robins:higher:2008 ([-@robins:higher:2008]) show that the estimation of the variance is mathematically isomorphic to the estimation of $\theta$ in the semi-parametric regression model $E[Y \mid A,X]=\theta A +h(X)$, where $A$ is a binary treatment. In the absence of confounding, $\theta$ encodes the causal effect of the treatment.

[^48]: But still not uniformly consistent!
--- abstract: 'We study thin interpolating sequences $\{\lambda_n\}$ and their relationship to interpolation in the Hardy space $H^2$ and the model spaces $K_\Theta = H^2 \ominus \Theta H^2$, where $\Theta$ is an inner function. Our results, phrased in terms of the functions that do the interpolation as well as Carleson measures, show that under the assumption that $\Theta(\lambda_n) \to 0$ the interpolation properties in $H^2$ are essentially the same as those in $K_\Theta$.' address: - | Pamela Gorkin, Department of Mathematics\ Bucknell University\ Lewisburg, PA USA 17837 - | Brett D. Wick, School of Mathematics\ Georgia Institute of Technology\ 686 Cherry Street\ Atlanta, GA USA 30332-0160 author: - 'Pamela Gorkin$^\dagger$' - 'Brett D. Wick$^\ddagger$' title: Thin Sequences and Their Role in Model Spaces and Douglas Algebras --- [^1] [^2] Introduction and Motivation =========================== A sequence $\{\lambda_j\}_{j=1}^\infty$ is an [interpolating sequence]{.nodecor} for $H^\infty$, the space of bounded analytic functions, if for every $w\in\ell^\infty$ there is a function $f\in H^\infty$ such that $$f(\lambda_j) = w_j, ~\mbox{for all}~ j\in{\mathbb{N}}.$$ Carleson’s interpolation theorem says that $\{\lambda_j\}_{j=1}^\infty$ is an interpolating sequence for $H^\infty$ if and only if $$\label{Interp_Cond} \delta = \inf_{j}\delta_j:=\inf_j \left\vert B_j(\lambda_j)\right\vert=\inf_{j}\prod_{k \ne j} \left|\frac{\lambda_j - \lambda_k}{1 - \overline{\lambda}_j \lambda_k}\right| > 0,$$ where $$B_j(z):=\prod_{k\neq j}\frac{-\overline{\lambda_k}}{{\ensuremath{\left\vert\lambda_k\right\vert}}}\frac{z-\lambda_k}{1-\overline{\lambda}_kz}$$ denotes the Blaschke product vanishing on the set of points $\{\lambda_k:k\neq j\}$. In this paper, we consider sequences that (eventually) satisfy a stronger condition than (\[Interp\_Cond\]). 
A sequence $\{\lambda_j\}\subset{\mathbb{D}}$ is *thin* if $$\lim_{j\to\infty}\delta_j:=\lim_{j\to\infty}\prod_{k\neq j}\left\vert\frac{\lambda_j-\lambda_k}{1-\overline{\lambda}_k\lambda_j}\right\vert=1.$$ Thin sequences are of interest not only because functions solving interpolation for thin interpolating sequences have good bounds on the norm, but also because they are interpolating sequences for a very small algebra: the algebra $QA = VMO \cap H^\infty$, where $VMO$ is the space of functions on the unit circle with vanishing mean oscillation [@W]. Continuing work in [@CFT] and [@GPW], we are interested in understanding these sequences in different settings. This will require two definitions that are motivated by the work of Shapiro and Shields, [@SS], in which they gave the appropriate conditions for a sequence to be interpolating for the Hardy space $H^2$. Considering more general Hilbert spaces will require the introduction of reproducing kernels: In a reproducing kernel Hilbert space $\mathcal{H}$ (see [@AM p. 17]) we let $K_{\lambda_n}$ denote the kernel corresponding to the point $\lambda_n$; that is, for each function in the Hilbert space we have that $f(\lambda_n)=\left\langle f, K_{\lambda_n}\right\rangle_{\mathcal{H}}$. If we have an $\ell^2$ sequence $a = \{a_n\}$, we define $$\|a\|_{N, \ell^2} = \left(\sum_{j \ge N} |a_j|^2\right)^{1/2}.$$ The concepts of interest are the following. 
A sequence $\{\lambda_n\}\subset\Omega \subseteq \mathbb{C}^n$ is said to be [*an eventual $1$-interpolating sequence for a reproducing kernel Hilbert space $\mathcal{H}$*]{}, denoted $EIS_{\mathcal{H}}$, if for every $\varepsilon > 0$ there exists $N$ such that for each $\{a_n\} \in \ell^2$ there exists $f_{N, a} \in \mathcal{H}$ with $$f_{N, a}(\lambda_n) {\ensuremath{\left\|K_{\lambda_n}\right\|}}_{\mathcal{H}}^{-1}=f_{N, a}(\lambda_n) K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} = a_n ~\mbox{for}~ n \ge N ~\mbox{and}~ \|f_{N, a}\|_{\mathcal{H}} \le (1 + \varepsilon) \|a\|_{N, \ell^2}.$$ A sequence $\{\lambda_n\}$ is said to be a [*strong asymptotic interpolating sequence for $\mathcal{H}$*]{}, denoted $AIS_{\mathcal{H}}$, if for all $\varepsilon > 0$ there exists $N$ such that for all sequences $\{a_n\} \in \ell^2$ there exists a function $G_{N, a} \in \mathcal{H}$ such that $\|G_{N, a}\|_\mathcal{H} \le \|a\|_{N,\ell^2}$ and $$\|\{G_{N, a}(\lambda_n) K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} - a_n\}\|_{N, \ell^2} < \varepsilon \|a\|_{N, \ell^2}.$$ Given a (nonconstant) inner function $\Theta$, we are interested in these sequences in model spaces; the model space associated with $\Theta$ is $K_\Theta = H^2 \ominus \Theta H^2$. The reproducing kernel in $K_\Theta$ for $\lambda_0 \in \mathbb{D}$ is $$K_{\lambda_0}^\Theta(z) = \frac{1 - \overline{\Theta(\lambda_0)}{\Theta(z)}}{1 - \overline{\lambda_0}z}$$ and the normalized reproducing kernel is $$k_{\lambda_0}^\Theta(z) = \sqrt{\frac{1 - |\lambda_0|^2}{1 - |\Theta(\lambda_0)|^2}} K_{\lambda_0}^\Theta(z).$$ Finally, note that $$K_{\lambda_0} = K_{\lambda_0}^\Theta + \Theta \overline{\Theta(\lambda_0)}K_{\lambda_0}.$$ We let $P_\Theta$ denote the orthogonal projection of $H^2$ onto $K_\Theta$. 
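The kernel identity displayed above is a purely algebraic consequence of the definitions, and it is easy to sanity-check numerically. In the Python sketch below, $\Theta$ is taken to be a single Blaschke factor and the points are arbitrary; all concrete choices are illustrative.

```python
import numpy as np

a = 0.3 + 0.2j                     # zero of a single Blaschke factor
l0, z = 0.5 - 0.1j, 0.4 + 0.3j     # arbitrary points of the unit disc

def theta(w):                      # inner function Theta: one Blaschke factor
    return (abs(a) / a) * (a - w) / (1 - np.conj(a) * w)

def K(l, w):                       # Szego (H^2) reproducing kernel
    return 1.0 / (1.0 - np.conj(l) * w)

def K_theta(l, w):                 # reproducing kernel of K_Theta
    return (1.0 - np.conj(theta(l)) * theta(w)) / (1.0 - np.conj(l) * w)

lhs = K(l0, z)
rhs = K_theta(l0, z) + theta(z) * np.conj(theta(l0)) * K(l0, z)
assert abs(lhs - rhs) < 1e-12      # K = K^Theta + Theta conj(Theta(l0)) K
```

The identity holds for any inner $\Theta$; the Blaschke factor is merely a convenient concrete instance.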
We consider thin sequences in these settings as well as in Douglas algebras: Letting $L^\infty$ denote the algebra of essentially bounded measurable functions on the unit circle, a Douglas algebra is a closed subalgebra of $L^\infty$ containing $H^\infty$. It is a consequence of work of Chang and Marshall that a Douglas algebra $\mathcal{B}$ is equal to the closed algebra generated by $H^\infty$ and the conjugates of the interpolating Blaschke products invertible in $\mathcal{B}$, [@C; @M]. In this paper, we continue work started in [@GM] and [@GPW] investigating the relationship between thin sequences, $EIS_{\mathcal{H}}$ and $AIS_{\mathcal{H}}$ where $\mathcal{H}$ is a model space or the Hardy space $H^2$. In Section \[HSV\], we consider the notion of eventually interpolating and asymptotic interpolating sequences in the model space setting. We show that in reproducing kernel Hilbert spaces of analytic functions on domains in $\mathbb{C}^n$, these two are the same. Given results in [@GPW], this is not surprising and the proofs are similar to those in the $H^\infty$ setting. We then turn to our main result of that section. If we have a Blaschke sequence $\{\lambda_n\}$ in $\mathbb{D}$ and assume that our inner function $\Theta$ satisfies $|\Theta(\lambda_n)| \to 0$, then a sequence $\{\lambda_n\}$ is an $EIS_{K_\Theta}$ sequence if and only if it is an $EIS_{H^2}$ sequence (and therefore $AIS_{K_\Theta}$ sequence if and only if it is an $AIS_{H^2}$). In Section \[CMMS\] we rephrase these properties in terms of the Carleson embedding constants on the model spaces. Finally, in Section \[asip\_algebra\], we recall the definition of Douglas algebras and show that appropriate definitions and conditions are quite different in that setting. 
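The defining products $\delta_j$ are straightforward to compute, and a quick numerical sketch separates thin from merely interpolating sequences. In the Python snippet below, the two sequences are our own illustrative choices, and the index range is kept short so that $1-2^{-n^2}$ remains representable in double precision: the geometric sequence $\lambda_n = 1-2^{-n}$ is interpolating but not thin, while $\lambda_n = 1-2^{-n^2}$ has $\delta_j \to 1$.

```python
import numpy as np

def delta_j(j, lam):
    """delta_j = prod_{k != j} |(lam_j - lam_k) / (1 - conj(lam_k) lam_j)|."""
    rho = np.abs((lam[j] - lam) / (1 - np.conj(lam) * lam[j]))
    return float(np.prod(np.delete(rho, j)))

n = np.arange(1, 8)               # short range: 1 - 2**(-49) is still a double
geometric = 1 - 0.5 ** n          # interpolating, but delta_j stays well below 1
superfast = 1 - 0.5 ** (n ** 2)   # thin: successive distances to 1 shrink superfast

d_geo = delta_j(3, geometric)     # bounded away from 1
d_thin = delta_j(3, superfast)    # close to 1
```

The contrast reflects the standard sufficient condition for thinness, $(1-|\lambda_{n+1}|)/(1-|\lambda_n|) \to 0$, which the second sequence satisfies and the first does not.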
Preliminaries ============= Recall that a sequence $\{x_n\}$ in ${\mathcal{H}}$ is [*complete*]{} if $\overline{\mbox{Span}}\{x_n: n \ge 1\} = \mathcal{H}$, and [*asymptotically orthonormal*]{} ($AOS$) if there exists $N_0$ such that for all $N \ge N_0$ there are positive constants $c_N$ and $C_N$ such that $$\begin{aligned} \label{thininequality} c_N \sum_{n \ge N} |a_n|^2 \le \left\|\sum_{n \ge N} a_n x_n\right\|^2_{{\mathcal{H}}} \le C_N \sum_{n \ge N} |a_n|^2,\end{aligned}$$ where $c_N \to 1$ and $ C_N \to 1$ as $N \to \infty$. If we can take $N_0 = 1$, the sequence is said to be an $AOB$; this is equivalent to being $AOS$ and a Riesz sequence. Finally, the Gram matrix corresponding to $\{x_j\}$ is the matrix $G = \left(\langle x_n, x_m \rangle\right)_{n, m \ge 1}$. It is well known that if $\{\lambda_n\}$ is a Blaschke sequence with simple zeros and corresponding Blaschke product $B$, then $\{k_{\lambda_n}\}$, where $$k_{\lambda_n}(z)=\frac{(1-\left\vert \lambda_n\right\vert^2)^{\frac{1}{2}}}{(1-\overline{\lambda_n}z)},$$ is a complete minimal system in $K_B$ and we also know that $\{\lambda_n\}$ is interpolating if and only if $\{k_{\lambda_n}\}$ is a Riesz basis. The following beautiful theorem provides the connection to thin sequences. \[Volberg\] The following are equivalent: 1. $\{\lambda_n\}$ is a thin interpolating sequence; 2. The sequence $\{k_{\lambda_n}\}$ is a complete $AOB$ in $K_B$; 3. There exist a separable Hilbert space $\mathcal{K}$, an orthonormal basis $\{e_n\}$ for $\mathcal{K}$ and $U, K: \mathcal{K} \to K_B$, $U$ unitary, $K$ compact, $U + K$ invertible, such that $$(U + K)(e_n) = k_{\lambda_n} \text{ for all } n \in {\mathbb{N}}.$$ In [@F]\*[Section 3]{} and [@CFT]\*[Proposition 3.2]{}, the authors note that [@V]\*[Theorem 3]{} implies the following. \[propCFT\] Let $\{x_n\}$ be a sequence in ${\mathcal{H}}$. The following are equivalent: 1. $\{x_n\}$ is an AOB; 2. 
There exist a separable Hilbert space $\mathcal{K}$, an orthonormal basis $\{e_n\}$ for $\mathcal{K}$ and $U, K: \mathcal{K} \to \mathcal{H}$, $U$ unitary, $K$ compact, $U + K$ left invertible, such that $$(U + K)(e_n) = x_n;$$ 3. The Gram matrix $G$ associated to $\{x_n\}$ defines a bounded invertible operator of the form $I + K$ with $K$ compact. We also have the following, which we will use later in this paper. \[prop5.1CFT\] If $\{\lambda_n\}$ is a sequence of distinct points in $\mathbb{D}$ and $\{k_{\lambda_n}^\Theta\}$ is an $AOS$, then $\{\lambda_n\}$ is a thin interpolating sequence. \[theorem5.2CFT\] Suppose $\sup_{n \ge 1} |\Theta(\lambda_n)| < 1$. If $\{\lambda_n\}$ is a thin interpolating sequence, then either \(i) $\{k_{\lambda_n}^\Theta\}_{n\ge1}$ is an $AOB$ or \(ii) there exists $p \ge 2$ such that $\{k_{\lambda_n}^\Theta\}_{n \ge p}$ is a complete $AOB$ in $K_\Theta$. Hilbert Space Versions {#HSV} ====================== Asymptotic and Eventual Interpolating Sequences {#asip} ----------------------------------------------- Let $\mathcal{H}$ be a reproducing kernel Hilbert space of analytic functions over a domain $\Omega\subset{\mathbb{C}}^n$ with reproducing kernel $K_\lambda$ at the point $\lambda \in \Omega$. We define two properties that a sequence $\{\lambda_n\}\subset \Omega$ can have. 
A sequence $\{\lambda_n\}\subset\Omega$ is an [eventual $1$-interpolating sequence for $\mathcal{H}$]{.nodecor}, denoted $EIS_{\mathcal{H}}$, if for every $\varepsilon > 0$ there exists $N$ such that for each $\{a_n\} \in \ell^2$ there exists $f_{N, a} \in \mathcal{H}$ with $$f_{N, a}(\lambda_n) {\ensuremath{\left\|K_{\lambda_n}\right\|}}_{\mathcal{H}}^{-1}=f_{N, a}(\lambda_n) K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} = a_n ~\mbox{for}~ n \ge N ~\mbox{and}~ \|f_{N, a}\|_{\mathcal{H}} \le (1 + \varepsilon) \|a\|_{N, \ell^2}.$$ A sequence $\{\lambda_n\}\subset\Omega$ is a [strong asymptotic interpolating sequence for $\mathcal{H}$]{.nodecor}, denoted $AIS_{\mathcal{H}}$, if for all $\varepsilon > 0$ there exists $N$ such that for all sequences $\{a_n\} \in \ell^2$ there exists a function $G_{N, a} \in \mathcal{H}$ such that $\|G_{N, a}\|_\mathcal{H} \le \|a\|_{N,\ell^2}$ and $$\|\{G_{N, a}(\lambda_n) K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} - a_n\}\|_{N, \ell^2} < \varepsilon \|a\|_{N, \ell^2}.$$ We now wish to prove Theorem \[EISiffASI\] below. The proof, which is a modification of the proof of the open-mapping theorem, also yields a proof of the following proposition. \[Banachspace\] Let $X$ and $Y$ be Banach spaces and let $T: X \to Y$ be a bounded operator and $\varepsilon > 0$. If $$\sup_{\|y\| = 1} \inf_{\|x\| \le 1} \|Tx - y\| < \varepsilon < 1,$$ then for all $y \in Y$, there exists $x \in X$ such that $\|x\| \le \frac{1}{1 - \varepsilon} \|y\|$ and $Tx = y$. Theorem \[EISiffASI\] follows from Proposition \[Banachspace\], but doing so requires dealing with several technicalities that obfuscate the underlying ideas, and so we present a direct proof of our desired implication. When we turn to Banach algebras, the corresponding implication (in Theorem \[main\_algebra\]) will be a direct consequence of Proposition \[Banachspace\]. We thank the referee for pointing out Proposition \[Banachspace\] to us. 
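In finite dimensions, the content of Proposition \[Banachspace\] is transparent: an approximate right-inverse can be upgraded to an exact solver by iterating on the residual, with the corrections summing like a geometric series. The Python sketch below uses a toy matrix and a deliberately perturbed inverse of our own choosing; it is an illustration of the mechanism, not of any construction in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
T = np.eye(n) + 0.1 * rng.normal(size=(n, n))          # a toy bounded operator
S = np.linalg.inv(T) + 0.02 * rng.normal(size=(n, n))  # only an approximate inverse

eps = np.linalg.norm(np.eye(n) - T @ S, 2)             # ||I - TS||: quality of S
assert eps < 1                                         # needed for convergence

def solve(y, n_iter=100):
    """Apply S to the residual repeatedly: x = sum_k S r_k, where
    ||r_k|| <= eps**k ||y||, so ||x|| <= ||S|| ||y|| / (1 - eps)."""
    x = np.zeros_like(y)
    r = y.copy()
    for _ in range(n_iter):
        x = x + S @ r
        r = y - T @ x
    return x

y = rng.normal(size=n)
x = solve(y)                       # T x = y exactly, up to rounding
```

The same bookkeeping, with the interpolation operator in place of $T$, is exactly what drives the proof of Theorem \[EISiffASI\].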
\[EISiffASI\] Let $\mathcal{H}$ be a reproducing kernel space of analytic functions over the domain $\Omega\subset\mathbb{C}^n$ with reproducing kernel at the point $\lambda$ given by $K_\lambda$. Then $\{\lambda_n\}$ is an $EIS_{\mathcal{H}}$ sequence if and only if $\{\lambda_n\}$ is an $AIS_{\mathcal{H}}$. If a sequence is an $EIS_{\mathcal{H}}$, then it is trivially $AIS_{\mathcal{H}}$, for given $\varepsilon > 0$ we may take $G_{N, a} = \frac{f_{N, a}}{(1 + \varepsilon)}$. For the other direction, suppose $\{\lambda_n\}$ is an $AIS_{\mathcal{H}}$ sequence. Let $\varepsilon > 0$, $N := N(\varepsilon)$, and $\{a_j\}:=\{a_{j}^{(0)}\}$ be any sequence. First choose $f_0 \in \mathcal{H}$ so that for $n \ge N$ we have $$\|\{K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} f_0(\lambda_n) - a_{n}^{(0)}\}\|_{N, \ell^2} < \frac{\varepsilon}{1+\varepsilon} \|a\|_{N, \ell^2}$$ and $$\|f_0\|_{\mathcal{H}} \le \|a\|_{N,\ell^2}.$$ Now let $a_{n}^{(1)} = a_{n}^{(0)} - K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} f_0(\lambda_n)$. Note that $\|a^{(1)}\|_{N, \ell^2} < \frac{\varepsilon}{1+\varepsilon} \|a\|_{N, \ell^2}$. 
Since we have an $AIS_{\mathcal{H}}$ sequence, we may choose $f_1$ such that for $n \ge N$ we have $$\|\{f_1(\lambda_n)K_{\lambda_n}(\lambda_n)^{-\frac{1}{2}} - a_{n}^{(1)}\}\|_{N, \ell^2} < \frac{\varepsilon}{1+\varepsilon} \|a^{(1)}\|_{N, \ell^2} < \left(\frac{\varepsilon}{1+\varepsilon}\right)^2\|a\|_{N, \ell^2},$$ and $$\|f_1\|_{\mathcal{H}} \le \|a^{(1)}\|_{N, \ell^2}<\left(\frac{\varepsilon}{1+\varepsilon}\right)\|a\|_{N,\ell^2}.$$ In general, we let $$a_{j}^{(k)} = -f_{k - 1}(\lambda_j)K_{\lambda_j}(\lambda_j)^{-\frac{1}{2}} + a_{j}^{(k-1)}$$ so that $$\|a^{(k)}\|_{N, \ell^2} \le \frac{\varepsilon}{1+\varepsilon} \|a^{(k - 1)}\|_{N, \ell^2} \le \left(\frac{\varepsilon}{1+\varepsilon}\right)^2 \|a^{(k-2)}\|_{N, \ell^2} \le \cdots \le \left(\frac{\varepsilon}{1+\varepsilon}\right)^k \|a\|_{N, \ell^2}$$ and $$\|f_k\|_{\mathcal{H}} \le \|a^{(k)}\|_{N, \ell^2}<\left(\frac{\varepsilon}{1+\varepsilon}\right)^k\|a\|_{N,\ell^2}.$$ Then consider $f(z) = \sum_{k = 0}^\infty f_k(z)$. Since $f_k(\lambda_j) = \left(a_{j}^{(k)} - a_{j}^{(k+1)}\right)K_{\lambda_j}(\lambda_j)^{\frac{1}{2}}$ and $a_{j}^{(k)} \to 0$ as $k \to \infty$, we have for each $j \ge N$, $$f(\lambda_j) = a_{j}^{(0)} K_{\lambda_j}(\lambda_j)^{\frac{1}{2}} = a_jK_{\lambda_j}(\lambda_j)^{\frac{1}{2}}.$$ Further $\|f\|_{\mathcal{H}}\le \sum_{k = 0}^\infty \left(\frac{\varepsilon}{1+\varepsilon}\right)^{k} \|a\|_{N, \ell^2} = \frac{1}{1 - \frac{\varepsilon}{1+\varepsilon}} \|a\|_{N, \ell^2}=(1+\varepsilon)\|a\|_{N, \ell^2}$. This proves that $\{\lambda_n\}$ is an $EIS_{\mathcal{H}}$ sequence. The Hardy and Model Spaces -------------------------- We let $\Theta$ denote a nonconstant inner function and apply Theorem \[EISiffASI\] to the reproducing kernel Hilbert space $K_{\Theta}$. We also include statements and results about Carleson measures. 
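As a concrete aside before turning to the embedding constants: the projection onto $K_\Theta$ of an $H^2$ function $f$ can be computed as $f - \Theta P_+(\overline{\Theta}f)$, where $P_+$ is the Riesz projection. The Python sketch below samples on the unit circle, implements $P_+$ with an FFT, and checks that the result is orthogonal to $\Theta H^2$; the grid size, Blaschke zero, and test polynomial are all illustrative choices of ours.

```python
import numpy as np

M = 512
z = np.exp(2j * np.pi * np.arange(M) / M)     # grid on the unit circle

a = 0.4 + 0.2j                                # Theta = one Blaschke factor
Theta = (abs(a) / a) * (a - z) / (1 - np.conj(a) * z)

rng = np.random.default_rng(0)
c = rng.normal(size=8) + 1j * rng.normal(size=8)
f = np.polyval(c[::-1], z)                    # a polynomial, viewed in H^2

def P_plus(samples):
    """Riesz projection onto H^2: kill the negative Fourier frequencies."""
    fc = np.fft.fft(samples)
    fc[M // 2:] = 0                           # indices M//2..M-1 hold negative freqs
    return np.fft.ifft(fc)

f_model = f - Theta * P_plus(np.conj(Theta) * f)   # projection of f onto K_Theta

# f_model lands in K_Theta: orthogonal to Theta * z**k for every k >= 0.
for k in range(5):
    inner = np.mean(f_model * np.conj(Theta * z ** k))
    assert abs(inner) < 1e-10
```

Since $|\Theta|=1$ on the circle, $\langle f - \Theta P_+(\overline{\Theta}f), \Theta h\rangle = \langle P_-(\overline{\Theta}f), h\rangle = 0$ for $h \in H^2$, which is what the asserts verify to rounding error.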
Given a non-negative measure $\mu$ on ${\mathbb{D}}$, let us denote the (possibly infinite) constant $${\mathcal{C}}(\mu) = \sup_{f \in H^2, f \neq 0} \frac{\|f\|^2_{L^2({\mathbb{D}}, \mu)}}{\|f\|^2_2}$$ as the Carleson embedding constant of $\mu$ on $H^2$ and $${\mathcal{R}}(\mu) = \sup_{z\in\mathbb{D}} \frac{\|k_z\|_{L^2({\mathbb{D}}, \mu)}}{\|k_z\|_2}=\sup_{z} \|k_z\|_{L^2({\mathbb{D}}, \mu)}$$ as the embedding constant of $\mu$ on $k_z$, the normalized reproducing kernel of $H^2$. It is well-known that ${\mathcal{C}}(\mu)\approx {\mathcal{R}}(\mu)$, [@MR2417425; @nikolski]. \[main\] Let $\{\lambda_n\}$ be an interpolating sequence in $\mathbb{D}$ and let $\Theta$ be an inner function. Suppose that $\kappa:=\sup_{n} \left\vert \Theta(\lambda_n)\right\vert < 1$. The following are equivalent: 1. $\{\lambda_n\}$ is an $EIS_{H^2}$ sequence\[eish2\]; 2. $\{\lambda_n\}$ is a thin interpolating sequence\[thin1\]; 3. \[aob\] Either 1. $\{k_{\lambda_n}^\Theta\}_{n\ge1}$ is an $AOB$, or 2. there exists $p \ge 2$ such that $\{k_{\lambda_n}^\Theta\}_{n \ge p}$ is a complete $AOB$ in $K_\Theta$; 4. $\{\lambda_n\}$ is an $AIS_{H^2}$ sequence\[Aish2\]; 5. The measure $$\mu_N = \sum_{k \ge N} (1 - |\lambda_k|^2)\delta_{\lambda_k}$$ is a Carleson measure for $H^2$ with Carleson embedding constant ${\mathcal{C}}(\mu_N)$ satisfying ${\mathcal{C}}(\mu_N) \to 1$ as $N \to \infty$\[C1\]; 6. The measure $$\nu_N = \sum_{k \ge N}\frac{(1 - |\lambda_k|^2)}{\delta_k} \delta_{\lambda_k}$$ is a Carleson measure for $H^2$ with embedding constant ${\mathcal{R}}_{\nu_N}$ on reproducing kernels satisfying ${\mathcal{R}}_{\nu_N} \to 1$\[C2\]; 7. $\{\lambda_n\}$ is an $EIS_{K_\Theta}$ sequence\[eis\]; 8. $\{\lambda_n\}$ is an $AIS_{K_\Theta}$ sequence\[ais\]. Further, (7) and (8) are equivalent to each other and imply each of the statements above. If, in addition, $\Theta(\lambda_n) \to 0$, then (1)-(8) are equivalent. The equivalence between (1) and (4) is contained in Theorem \[EISiffASI\]. 
Similarly, this applies to (7) and (8). In [@GPW]\*[Theorem 4.5]{}, the authors prove that (1), (2) and (4) are equivalent. The equivalence between (2), (5), and (6) is contained in [@GPW]. That (2) implies (3) is Theorem \[theorem5.2CFT\]. That (3) implies (2) also follows from results in [@CFT], for if $\{k_{\lambda_n}^\Theta\}_{n \ge p}$ is an $AOB$ for some $p \ge 2$, then it is an $AOS$, and hence the sequence is thin by Proposition \[prop5.1CFT\]. This is, of course, the same as being thin interpolating. Thus, we have the equivalence of statements (1), (2), (3), (4), (5), and (6), as well as the equivalence of (7) and (8).\ Now we show that (1) and (7) are equivalent under the hypothesis that $\Theta(\lambda_n)\to 0$.\ (7) $\Rightarrow$ (1). Suppose that $\{\lambda_n\}$ is an $EIS_{K_\Theta}$ sequence. We will prove that this implies it is an $EIS_{H^2}$ sequence, establishing (1).\ Let $\varepsilon>0$ be given. Choose $\varepsilon^\prime < \varepsilon$ and let $N_1 = N(\varepsilon^\prime)$ be chosen according to the definition of $\{\lambda_n\}$ being an $EIS_{K_\Theta}$ sequence. Recall that $$\kappa_m = \sup_{n \ge m} |\Theta(\lambda_n)| \to 0,$$ so we may assume that we have chosen $N_1$ so large that $$\frac{1 + \varepsilon^\prime}{(1 - \kappa_{N_1}^2)^{1/2}} < 1 + \varepsilon.$$ Define $\{\tilde{a}_n\}$ to be $0$ if $n < N_1$ and $\tilde{a}_n=a_n \left(1-{\ensuremath{\left\vert\Theta(\lambda_n)\right\vert}}^2\right)^{-\frac{1}{2}}$ for $n \ge N_1$. Then $\{\tilde{a}_n\} \in \ell^2$. 
Select $f_a\in K_\Theta\subset H^2$ so that $$f_a(\lambda_n) \left(\frac{1-{\ensuremath{\left\vert\Theta(\lambda_n)\right\vert}}^2}{1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2}\right)^{-\frac{1}{2}} = \tilde{a}_n = a_n \left(1-{\ensuremath{\left\vert\Theta(\lambda_n)\right\vert}}^2\right)^{-\frac{1}{2}} \, \textrm{ if } n \ge N_1$$ and $$\|f_a\| \le (1 + \varepsilon^\prime) \|\tilde{a}\|_{N_1, \ell^2} \le \frac{(1 + \varepsilon^\prime)}{(1 - \kappa_{N_1}^2)^{1/2}}\|a\|_{N_1, \ell^2} < (1 + \varepsilon) \|a\|_{N_1, \ell^2}.$$ Since $f_a\in K_\Theta$, we have that $f_a\in H^2$, and canceling out the common factor yields that $f_a(\lambda_n)(1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2)^{\frac{1}{2}}=a_n$ for all $n\geq N_1$. Thus $\{\lambda_n\}$ is an $EIS_{H^2}$ sequence as claimed.\ (1) $\Rightarrow$ (7). Suppose that $\Theta(\lambda_n) \to 0$ and $\{\lambda_n\}$ is an $EIS_{H^2}$ sequence; equivalently, that $\{\lambda_n\}$ is thin. We want to show that the sequence $\{\lambda_n\}$ is an $EIS_{K_\Theta}$ sequence. We begin with some observations.\ First, looking at the definition, we see that we may assume that $\varepsilon > 0$ is small, for any choice of $N$ that works for small $\varepsilon$ also works for larger values.\ Second, if $f\in H^2$ and we let $\tilde{f}=P_{K_\Theta}f$, then we have that ${\ensuremath{\left\|\tilde{f}\right\|}}_2\leq {\ensuremath{\left\|f\right\|}}_2$ since $P_{K_\Theta}$ is an orthogonal projection. 
Next, we have $P_{K_\Theta} = P_+ - \Theta P_+ \overline{\Theta}$, where $P_+$ is the orthogonal projection of $L^2$ onto $H^2$, so letting $T_{\overline{\Theta}}$ denote the Toeplitz operator with symbol $\overline{\Theta}$ we have $$\label{Toeplitz} \tilde{f}(z)=f(z)-\Theta(z)T_{\overline{\Theta}}(f)(z).$$ In what follows, $\kappa_m := \sup_{n \ge m}|\Theta(\lambda_n)|$ and recall that we assume that $\kappa_m \to 0$.\ Since $\{\lambda_n\}$ is an $EIS_{H^2}$ sequence, there exists $N_1$ such that for any $a\in\ell^2$ there exists a function $f_0\in H^2$ such that $$f_0(\lambda_n)=a_n\left(\frac{1 - |\Theta(\lambda_n)|^2}{1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2}\right)^\frac{1}{2}~\mbox{for all}~n\geq N_1$$ and $${\ensuremath{\left\|f_0\right\|}}_{2}\leq (1+\varepsilon){\ensuremath{\left\|\{a_k (1 - |\Theta(\lambda_k)|^2)^{\frac{1}{2}}\}\right\|}}_{N_1,\ell^2} \le (1 + \varepsilon){\ensuremath{\left\|a \right\|}}_{N_1,\ell^2}.$$ Here we have applied the $EIS_{H^2}$ property to the sequence $\{a_k(1-\left\vert \Theta(\lambda_k)\right\vert^2)^{\frac{1}{2}}\}\in\ell^2$. By (\[Toeplitz\]) we have that $$\begin{aligned} \tilde{f}_0(\lambda_k) & = & f_0(\lambda_k)-\Theta(\lambda_k) T_{\overline{\Theta}}(f_0)(\lambda_k)\\ & = & a_k(1 - |\Theta(\lambda_k)|^2)^\frac{1}{2}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{-\frac{1}{2}}-\Theta(\lambda_k) T_{\overline{\Theta}}(f_0)(\lambda_k)\quad\forall k\geq N_1 \end{aligned}$$ and ${\ensuremath{\left\|\tilde{f}_0\right\|}}_2\leq{\ensuremath{\left\|f_0\right\|}}_2\leq (1+\varepsilon){\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}$. 
Rearranging the above, for $k \ge N_1$ we have $$\begin{aligned} {\ensuremath{\left\vert\tilde{f}_0(\lambda_k)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}-a_k\right\vert}} & = & {\ensuremath{\left\vert\Theta(\lambda_k) T_{\overline{\Theta}}(f_0)(\lambda_k)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}\right\vert}}\\ & \leq & \kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|f_0\right\|}}_2\\ &\leq& (1+\varepsilon) \kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.\end{aligned}$$ We claim that $\{a^{(1)}_n\}=\{\tilde{f}_0(\lambda_n)(1 - |\Theta(\lambda_n)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2)^{\frac{1}{2}} - a_n\}\in\ell^2$ and that there is a constant $N_2$ depending only on $\varepsilon$ and the Carleson measure given by the thin sequence $\{\lambda_n\}$ such that $$\label{a1} {\ensuremath{\left\|a^{(1)}\right\|}}_{N_2,\ell^2}\leq (1+\varepsilon)^2\kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.$$ Since the sequence $\{\lambda_n\}$ is thin and distinct, it generates an $H^2$ Carleson measure with norm at most $(1+\varepsilon)$; that is, we have the existence of $N_2 \ge N_1$ such that $\kappa_{N_2}(1 - \kappa_{N_2}^2)^{-\frac{1}{2}} \le \kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}}$ and $$\begin{aligned} {\ensuremath{\left\|a^{(1)}\right\|}}_{N_2,\ell^2} & = & \left(\sum_{k\geq N_2} {\ensuremath{\left\vert\Theta(\lambda_k)\right\vert}}^2 {\ensuremath{\left\vert T_{\overline{\Theta}}(f_0)(\lambda_k)\right\vert}}^2 (1 - |\Theta(\lambda_k)|^2)^{-1}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)\right)^{\frac{1}{2}}\\ & \leq & (1+\varepsilon) \kappa_{N_2}(1 - \kappa_{N_2}^2)^{-\frac{1}{2}} {\ensuremath{\left\|T_{\overline{\Theta}}f_0\right\|}}_2 \nonumber\\ & \leq &(1+\varepsilon) \kappa_{N_2}(1 - 
\kappa_{N_2}^2)^{-\frac{1}{2}} {\ensuremath{\left\|f_0\right\|}}_2\nonumber\\ & \leq & (1+\varepsilon)^2 \kappa_{N_1} (1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}\nonumber<\infty,\end{aligned}$$ completing the proof of the claim. We will now iterate these estimates and ideas. Let $\widetilde{a^{(1)}_n}=-\frac{a^{(1)}_n}{(1 + \varepsilon)^2\kappa_{N_1} (1 - \kappa_{N_1}^2)^{-\frac{1}{2}} }$ for $n \ge N_2$ and $\widetilde{a^{(1)}_n} = 0$ otherwise. Then from (\[a1\]) we have that ${\ensuremath{\left\|\widetilde{a^{(1)}}\right\|}}_{N_1,\ell^2} = {\ensuremath{\left\|\widetilde{a^{(1)}}\right\|}}_{N_2,\ell^2} \le {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}$. Since $\{\lambda_n\}$ is an $EIS_{H^2}$ we may choose $f_1\in H^2$ with $$f_1(\lambda_n)=\widetilde{a_n^{(1)}}(1 - |\Theta(\lambda_n)|^2)^{\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_n\right\vert}}^2)^{-\frac{1}{2}}~\mbox{for all}~n\geq N_1$$ and, letting $\widetilde{f}_1 = P_{K_\Theta}(f_1)$, we have $${\ensuremath{\left\|\tilde{f}_1\right\|}}_{2}\leq{\ensuremath{\left\|f_1\right\|}}_{2}\leq (1+\varepsilon){\ensuremath{\left\|\widetilde{a^{(1)}}\right\|}}_{N_1,\ell^2}\leq (1+\varepsilon){\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.$$ As above, $$\begin{aligned} \widetilde{f}_1(\lambda_k) & = & f_1(\lambda_k)-\Theta(\lambda_k) T_{\overline{\Theta}}(f_1)(\lambda_k)\\ & = & \widetilde{a_k^{(1)}}(1 - |\Theta(\lambda_k)|^2)^{\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{-\frac{1}{2}}-\Theta(\lambda_k) T_{\overline{\Theta}}(f_1)(\lambda_k)\quad\forall k\geq N_1. 
\end{aligned}$$ And, for $k \ge N_1$ we have $$\begin{aligned} {\ensuremath{\left\vert\tilde{f}_1(\lambda_k)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}-\widetilde{a_k^{(1)}}\right\vert}} & = & {\ensuremath{\left\vert\Theta(\lambda_k) T_{\overline{\Theta}}(f_1)(\lambda_k)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}\right\vert}}\\ & \leq & \kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|f_1\right\|}}_2\\ & \leq & (1+\varepsilon)\kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.\end{aligned}$$ Using the definition of $\widetilde{a^{(1)}}$, for $k \ge N_2$ one arrives at $$\begin{aligned} {\ensuremath{\left\vert\left((1+\varepsilon)^2\kappa_{N_1}(1 - \kappa_{N_1}^2)^{-\frac{1}{2}} \tilde{f}_1(\lambda_k)+\tilde{f}_0(\lambda_k)\right)(1 - |\Theta(\lambda_k)|^2)^{-\frac{1}{2}}(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}-a_k\right\vert}}\\ \leq (1+\varepsilon)^3\kappa_{N_1}^2 (1 - \kappa_{N_1}^2)^{-1}{\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.\end{aligned}$$ We continue this procedure, constructing sequences $a^{(j)}\in \ell^2$ and functions $\tilde{f}_j\in K_{\Theta}$ such that $${\ensuremath{\left\|a^{(j)}\right\|}}_{N_1,\ell^2}\leq (1+\varepsilon)^{2j}\left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right)^j{\ensuremath{\left\|a\right\|}}_{N_1,\ell^2},$$ $$\left\vert \frac{(1-{\ensuremath{\left\vert\lambda_k\right\vert}}^2)^{\frac{1}{2}}}{(1 - |\Theta(\lambda_k)|^2)^{\frac{1}{2}}} \left(\sum_{l=0}^{j} (1+\varepsilon)^{2l}\left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right)^{l}\tilde{f}_l(\lambda_k)\right) -a_k \right\vert\leq \left(1+\varepsilon\right)^{2j+1}\left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right)^{j+1}\left\Vert a\right\Vert_{N_1,\ell^2},$$ and $${\ensuremath{\left\|\tilde{f}_j\right\|}}_2\leq 
(1+\varepsilon){\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}~\mbox{for all}~j\in {\mathbb{N}}.$$ Define $$F=\sum_{j = 0}^{\infty} (1+\varepsilon)^{2j} \left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right)^j \tilde{f}_j.$$ Then $F\in K_{\Theta}$ since each $\tilde{f}_j\in K_{\Theta}$ and, since $\kappa_m \to 0$, we may assume that $$(1 + \varepsilon)^2\left(\frac{\kappa_{N_1}}{(1 - \kappa_{N_1}^2)^{\frac{1}{2}}}\right) < 1.$$ So, $${\ensuremath{\left\|F\right\|}}_2\leq \frac{(1+\varepsilon)}{1 - (1+\varepsilon)^2\left(\frac{\kappa_{N_1}}{\left(1 - \kappa_{N_1}^2\right)^{\frac{1}{2}}}\right)} {\ensuremath{\left\|a\right\|}}_{N_1,\ell^2}.$$ For this $\varepsilon$, consider $\varepsilon_M < \varepsilon$ with $\frac{(1+\varepsilon_M)}{1 - (1+\varepsilon_M)^2\left(\frac{\kappa_{N_M}}{\left(1 - \kappa_{N_M}^2\right)^{\frac{1}{2}}}\right)}<1+\varepsilon$. Then, using the process above, we obtain $F_M$ satisfying $F_M \in K_\Theta, \|F_M\|_2 \le (1 + \varepsilon) \|a\|_{M, \ell^2}$ and $F_M(\lambda_n)\|K_{\lambda_n}\|^{-1}_\mathcal{H} = a_n$ for $n \ge M$. Taking $N(\varepsilon) = M$, we see that $F_M$ satisfies the exact interpolation conditions, completing the proof of the theorem. We present an alternate method to prove the equivalence between $(1)$ and $(7)$. As noted above, by Theorem \[EISiffASI\] it is true that $(7)\Leftrightarrow (8)$ and thus it suffices to prove that $(1)\Rightarrow (8)\Leftrightarrow (7)$. Let $\varepsilon>0$ be given. Select a sequence $\{\delta_N\}$ with $\delta_N\to 0$ as $N\to\infty$. Since $(1)$ holds, for large $N$ and $a\in \ell^2$ it is possible to find $f_N\in H^2$ so that $$f_N(\lambda_n)(1-\left\vert\lambda_n\right\vert^2)^{\frac{1}{2}}=\left\Vert a\right\Vert_{N,\ell^2}^{-1} a_n\quad n\geq N$$ with $\left\Vert f_N\right\Vert_{2}\leq 1+\delta_N$. Now observe that we can write $f_N=h_N+\Theta g_N$ with $h_N\in K_\Theta$. 
Since $h_N$ and $\Theta g_N$ are orthogonal projections of $f_N$ onto subspaces of $H^2$, and multiplication by the inner function $\Theta$ is an isometry on $H^2$, we also have that $\left\Vert h_N\right\Vert_{2}\leq 1+\delta_N$ and similarly for $g_N$. By the properties of the functions above we have that: $$h_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}=f_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-\Theta(\lambda_n)g_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}.$$ Hence, one deduces that $$\begin{aligned} \left\Vert \left\{h_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-\frac{a_n}{\left\Vert a\right\Vert_{N,\ell^2}}\right\}\right\Vert_{N,\ell^2} & \leq & \left\Vert \left\{f_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-\frac{a_n}{\left\Vert a\right\Vert_{N,\ell^2}}\right\}\right\Vert_{N,\ell^2}\\ & & + \left\Vert \left\{\Theta(\lambda_n)g_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\\ & \leq & \left\Vert \left\{\frac{a_n}{\left\Vert a\right\Vert_{N,\ell^2}}\left(\left(\frac{1}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-1\right)\right\}\right\Vert_{N,\ell^2}\\ & & +\frac{\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert}{(1-\kappa_N^2)^{\frac{1}{2}}} \left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}.\end{aligned}$$ Now for $x$ sufficiently small and positive we have that $\frac{1}{\sqrt{1-x}}-1=\frac{1-\sqrt{1-x}}{\sqrt{1-x}}\lesssim \frac{x}{\sqrt{1-x}}$. 
Applying this with $x=\sup_{m\geq N} \left\vert\Theta(\lambda_m)\right\vert^2$ gives that: $$\left\Vert \left\{h_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-\frac{a_n}{\left\Vert a\right\Vert_{N,\ell^2}}\right\}\right\Vert_{N,\ell^2} \leq \frac{\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert}{(1-\kappa_N^2)^{\frac{1}{2}}} \left(1+\left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\right).$$ Define $H_N=(1+\delta_N)^{-1} \left\Vert a\right\Vert_{N,\ell^2} h_N$, and then we have $H_N\in K_{\Theta}$ and $\left\Vert H_N\right\Vert_{2}\leq \left\Vert a\right\Vert_{N,\ell^2}$. Using the last estimate and adding and subtracting the quantity $\frac{a_n}{(1+\delta_N)}$ yields that: $$\begin{aligned} \left\Vert \left\{H_N(\lambda_n)\left(\frac{1-\left\vert \lambda_n\right\vert^2}{1-\left\vert \Theta(\lambda_n)\right\vert^2}\right)^{\frac{1}{2}}-a_n\right\}\right\Vert_{N,\ell^2} \leq & & \\ \left(\frac{\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert}{(1+\delta_N)(1-\kappa_N^2)^{\frac{1}{2}}} \left(1+\left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\right)+\delta_N\right)\left\Vert a\right\Vert_{N,\ell^2}.\end{aligned}$$ Note that the quantity: $$\left(\frac{\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert}{(1+\delta_N)(1-\kappa_N^2)^{\frac{1}{2}}} \left(1+\left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\right)+\delta_N\right)\lesssim \delta_N+\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert.$$ Here we have used that the sequence $\{\lambda_n\}$ is by hypothesis an interpolating sequence and hence: $\left\Vert \left\{g_N(\lambda_n)\left(1-\left\vert \lambda_n\right\vert^2\right)^{\frac{1}{2}}\right\}\right\Vert_{N,\ell^2}\lesssim \left\Vert 
g_N\right\Vert_{2}\leq 1+\delta_N$. Since by hypothesis we have that $\delta_N+\sup_{m\geq N}\left\vert\Theta(\lambda_m)\right\vert\to 0$ as $N\to\infty$, it is possible to make this less than the given $\varepsilon>0$, and hence we get a function $H_N$ satisfying the properties for $\{\lambda_n\}$ to be $AIS_{K_\Theta}$. The proof above also gives an estimate on the norm of the interpolating function in the event that $\sup_n |\Theta(\lambda_n)| \le \kappa < 1$, but $(1 + \varepsilon)$ is no longer the best estimate. Carleson Measures in Model Spaces {#CMMS} --------------------------------- From Theorem \[main\], and , we have a Carleson measure statement for thin sequences in the Hardy space $H^2$. In this section, we obtain an analogous equivalence in model spaces and consider the corresponding embedding constants. As before, given a positive measure $\mu$ on ${\mathbb{D}}$, we denote the (possibly infinite) constant $${\mathcal{C}}_{\Theta}(\mu) = \sup_{f \in K_{\Theta}, f \neq 0} \frac{\|f\|^2_{L^2({\mathbb{D}}, \mu)}}{\|f\|^2_2}$$ as the Carleson embedding constant of $\mu$ on $K_{\Theta}$ and $${\mathcal{R}}_{\Theta}(\mu) = \sup_{z} \|k^{\Theta}_z\|_{L^2({\mathbb{D}}, \mu)}^2$$ as the embedding constant of $\mu$ on the reproducing kernel of $K_\Theta$ (recall that the kernels $k^{\Theta}_z$ are normalized). It is known that for a general measure $\mu$ the constants ${\mathcal{R}}_{\Theta}(\mu)$ and ${\mathcal{C}}_{\Theta}(\mu)$ are not equivalent [@NV]. The complete geometric characterization of the measures for which ${\mathcal{C}}_{\Theta}(\mu)$ is finite is contained in [@LSUSW].
However, we always have that $${\mathcal{R}}_\Theta(\mu) \le {\mathcal{C}}_\Theta(\mu).$$ For $N > 1$, let $$\sigma_N = \sum_{k \ge N} \left\Vert K_{\lambda_k}^{\Theta}\right\Vert^{-2}\delta_{\lambda_k}=\sum_{k \ge N} \frac{1-\left\vert \lambda_k\right\vert^2}{1-\left\vert \Theta(\lambda_k)\right\vert^2}\delta_{\lambda_k}.$$ Note that for each $f \in K_{\Theta}$ $$\label{munorm} \| f\|^2_{L^2({\mathbb{D}}, \sigma_N)} = \sum_{k=N}^\infty \frac{(1 - |\lambda_k|^2)}{(1-\left\vert \Theta(\lambda_k)\right\vert^2)} |f(\lambda_k)|^2 = \sum_{k=N}^\infty |\langle f, k^{\Theta}_{\lambda_k}\rangle|^2,$$ and therefore we see that $$\label{e:CETests} 1 \le {\mathcal{R}}_\Theta(\sigma_N) \le {\mathcal{C}}_\Theta(\sigma_N).$$ By working in a restricted setting and imposing a condition on $\{\Theta(\lambda_n)\}$ we have the following. \[thm:Carleson\] Suppose $\Lambda = \{\lambda_n\}$ is a sequence in $\mathbb{D}$ and $\Theta$ is a nonconstant inner function such that $\kappa_m := \sup_{n \ge m}|\Theta(\lambda_n)|\to 0$. For $N > 1$, let $$\sigma_N = \sum_{k \ge N} \left\Vert K_{\lambda_k}^{\Theta}\right\Vert^{-2}\delta_{\lambda_k}=\sum_{k \ge N} \frac{1-\left\vert \lambda_k\right\vert^2}{1-\left\vert \Theta(\lambda_k)\right\vert^2}\delta_{\lambda_k}.$$ Then the following are equivalent: 1. $\Lambda$ is a thin sequence; 2. $ {\mathcal{C}}_\Theta(\sigma_N) \to 1$ as $N \to \infty$; 3. $ {\mathcal{R}}_\Theta(\sigma_N) \to 1$ as $N \to \infty$. We have $(2)\Rightarrow (3)$ by testing on the function $f=k_z^{\Theta}$ for all $z\in\mathbb{D}$, which is nothing more than . We next focus on $(1)\Rightarrow(2)$. Let $f \in K_{\Theta}$ and let the sequence $a$ be defined by $a_j = \left\|K_{\lambda_j}^{\Theta}\right\|^{-1}f(\lambda_j)$.
By , $\left\|a\right\|_{N, \ell^2}^2 = \left\| f\right\|^2_{L^2({\mathbb{D}}, \sigma_N)}$, and since $\{k_{\lambda_j}^{\Theta}\}$ is an $AOB$, there exists $C_N$ such that $$\begin{aligned} \left\|a\right\|_{N,\ell^2}^2 & = \sum_{j \ge N} \left\|K_{\lambda_j}^\Theta\right\|^{-2}_{K_{\Theta}} |f(\lambda_j)|^2 = \left\langle f, \sum_{j \ge N} a_j k_{\lambda_j}^\Theta \right\rangle_{K_{\Theta}} \le \left\| f\right\|_{2} \left\|\sum_{j \ge N} a_j k_{\lambda_j}^\Theta\right\|_{K_{\Theta}} \le C_N \left\| f\right\|_{2} \left\|a\right\|_{N,\ell^2}.\end{aligned}$$ By (1) and [@CFT]\*[Theorem 5.2]{}, we know that $C_N \to 1$ and since we have established that $\|f\|_{L^2({\mathbb{D}}, \sigma_N)} \le C_N \|f\|_2$, (1) $\Rightarrow$ (2) follows. An alternate way to prove this is to use Theorem \[main\], $(2)\Rightarrow(5)$, and the hypothesis on $\Theta$, since it is then possible to show that $\frac{\mathcal{C}_{\Theta}(\sigma_N)}{\mathcal{C}(\mu_N)}\to 1$. Indeed, given $\varepsilon>0$, we have that $1\leq\mathcal{C}(\mu_M)$ for all $M$, and since $\{\lambda_n\}$ is thin there exists an $N$ such that $\mathcal{C}(\mu_M)<1+\varepsilon$ for all $M\geq N$. Hence, $1\leq \mathcal{C}(\mu_M)<1+\varepsilon$ for all $M\geq N$. These facts easily lead to: $$\frac{1}{1+\varepsilon}\leq\frac{\mathcal{C}_{\Theta}(\sigma_M)}{\mathcal{C}(\mu_M)}.$$ Further, since $\Theta$ tends to zero on the sequence $\{\lambda_n\}$ there is an integer, which without loss of generality we may take to be $N$, so that $\frac{1}{1-\left\vert \Theta(\lambda_n)\right\vert^2}<1+\varepsilon$ for all $n\geq N$. From this we deduce that: $$\frac{\mathcal{C}_{\Theta}(\sigma_M)}{\mathcal{C}(\mu_M)}< (1+\varepsilon)\frac{\sup\limits_{f\in K_{\Theta},\, \left\Vert f\right\Vert_{2}\leq 1} \sum_{m\geq M} (1-\left\vert \lambda_m\right\vert^2)\left\vert f(\lambda_m)\right\vert^2}{\mathcal{C}(\mu_M)}\leq (1+\varepsilon).$$ In the last estimate we used that $K_\Theta\subset H^2$, and so the supremum appearing in the numerator is always at most the expression in the denominator.
Combining the estimates, we have for $M\geq N$ that: $$\frac{1}{1+\varepsilon}\leq \frac{\mathcal{C}_{\Theta}(\sigma_M)}{\mathcal{C}(\mu_M)}<1+\varepsilon$$ which yields the conclusion that the ratio tends to $1$ as $M\to \infty$. Now consider $(3)\Rightarrow (1)$ and compute the quantity $\mathcal{R}_\Theta(\sigma_N)$. In what follows, we let $\Lambda_N$ denote the tail of the sequence, $\Lambda_N=\{\lambda_k: k\geq N\}$. Note that for $a, b\in\mathbb{D}$ we have $\left\vert 1-\overline{a}b\right\vert\geq 1-\left\vert a\right\vert$. Using this estimate we see that: $$\begin{aligned} \sup_{z\in\mathbb{D}} \|k^{\Theta}_z\|_{L^2({\mathbb{D}}, \sigma_N)}^2 & = & \sup_{z\in\mathbb{D}} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)}{(1-\left\vert \Theta(\lambda_k)\right\vert^2)} \frac{(1-\left\vert z\right\vert^2)}{(1-\left\vert \Theta(z)\right\vert^2)}\frac{\left\vert 1-\Theta(z)\overline{\Theta(\lambda_k)}\right\vert^2}{\left\vert 1-z\overline{\lambda_k}\right\vert^2}\\ & \geq & \sup_{z\in\mathbb{D}} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2} \frac{(1-\left\vert \Theta(z)\right\vert)(1-\left\vert \Theta(\lambda_k)\right\vert)}{(1-\left\vert \Theta(z)\right\vert^2)(1-\left\vert \Theta(\lambda_k)\right\vert^2)}\\ & = & \sup_{z\in\mathbb{D}} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2} \frac{1}{(1+\left\vert \Theta(z)\right\vert)(1+\left\vert \Theta(\lambda_k)\right\vert)}\\ & \geq & \sup_{z\in\Lambda_N} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2} \frac{1}{(1+\left\vert \Theta(z)\right\vert)(1+\left\vert \Theta(\lambda_k)\right\vert)}\\ & \geq & \frac{1}{(1+\kappa_N)^2}\sup_{z\in\Lambda_N} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert
1-z\overline{\lambda_k}\right\vert^2}.\end{aligned}$$ By the Weierstrass Inequality, we obtain for $M \ge N$ that $$\begin{aligned} \label{wi} \prod_{k \geq N, k \neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right|^2 & = & \prod_{k \geq N, k \neq M} \left( 1- \frac{(1 - |\lambda_k|^2)(1 - |\lambda_M|^2)}{|1 - \bar \lambda_k \lambda_M|^2} \right)\nonumber\\ & \ge & 1 - \sum_{k \geq N, k \neq M} \frac{(1- |\lambda_M|^2)(1- |\lambda_k|^2)}{ | 1 - \bar \lambda_k \lambda_M|^2}.\end{aligned}$$ Thus, by we have for $M \ge N$, $$\begin{aligned} \frac{1}{(1+\kappa_N)^2}\sup_{z\in\Lambda_N} \sum_{k\geq N} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert z\right\vert^2)}{\left\vert 1-z\overline{\lambda_k}\right\vert^2} & \ge & \frac{1}{(1+\kappa_N)^2}\left(\sum_{k \geq N, k\neq M} \frac{(1-\left\vert \lambda_k\right\vert^2)(1-\left\vert \lambda_M\right\vert^2)}{\left\vert 1-\lambda_M\overline{\lambda_k}\right\vert^2} + 1\right)\\ & \ge & \frac{1}{(1+\kappa_N)^2}\left(1 - \prod_{k \geq N, k\neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right|^2 + 1\right).\end{aligned}$$ Now by assumption, recalling that $\kappa_N := \sup_{n \ge N}|\Theta(\lambda_n)|$, we have $$\lim_{N \to \infty}\sup_{z\in\mathbb{D}} \|k^{\Theta}_z\|_{L^2({\mathbb{D}}, \sigma_N)}^2 = 1~\mbox{ and }~\lim_{N \to \infty} \kappa_N = 0,$$ so $$1 = \lim_{N \to \infty}\sup_{z\in\mathbb{D}} \|k^{\Theta}_z\|_{L^2({\mathbb{D}}, \sigma_N)}^2 \ge \lim_{N \to \infty} \frac{1}{(1+\kappa_N)^2}\left(1 - \prod_{k \geq N, k\neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right|^2 + 1\right) \ge 1.$$ Therefore, given $\varepsilon>0$, for all sufficiently large $N$ and any $M \ge N$ we have $$\label{e:large} \prod_{k \geq N, k\neq M} \left| \frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right| > 1 - \varepsilon.$$ Also, for any $\varepsilon>0$ there is an integer $N_0$ such that for all $M> N_0$ we have: $$\label{e:bigk} \prod_{k \geq N_0, k\neq M} \left|
\frac{\lambda_k - \lambda_M}{1 - \bar \lambda_k \lambda_M}\right| >1-\varepsilon.$$ Fix this value of $N_0$, and consider $k<N_0$. Further, for $k \ne M$ and $k<N_0$, $$\begin{aligned} 1- \rho(\lambda_M, \lambda_k)^2 & = 1- \left|\frac{\lambda_k - \lambda_M}{1 - \bar\lambda_k \lambda_M}\right|^2 = \frac{(1- |\lambda_M|^2)(1- |\lambda_k|^2)}{ | 1 - \bar\lambda_k \lambda_M|^2}\\ &= (1 - |\lambda_k|^2)\frac{(1 - |\lambda_M|^2)}{(1 - |\Theta(\lambda_M)|^2)} \frac{1 - |\Theta(\lambda_M)|^2}{\left\vert 1 - \bar \Theta(\lambda_M) \Theta(\lambda_k)\right\vert^2} \left|\frac{1 - \bar\Theta(\lambda_M) \Theta(\lambda_k)}{1 - \lambda_k \bar\lambda_M}\right|^2\\ & = \frac{1 - |\Theta(\lambda_M)|^2}{\left\vert 1 - \bar \Theta(\lambda_M) \Theta(\lambda_k)\right\vert^2}(1 - |\lambda_k|^2) |k_{\lambda_M}^\Theta(\lambda_k)|^2\\ & = \frac{1 - |\Theta(\lambda_M)|^2}{\left\vert 1 - \bar \Theta(\lambda_M) \Theta(\lambda_k)\right\vert^2} (1 - |\Theta(\lambda_k)|^2) \frac{(1 - |\lambda_k|^2)}{1 - |\Theta(\lambda_k)|^2} |k_{\lambda_M}^\Theta(\lambda_k)|^2\\ & \le \frac{1 - |\Theta(\lambda_M)|^2}{\left\vert 1 - \bar \Theta(\lambda_M) \Theta(\lambda_k)\right\vert^2} \left( \|k_{\lambda_M}^\Theta\|_{L^2(\mathbb{D}, \sigma_M)}^2 - 1\right) \\ & \leq \frac{1}{(1-\kappa_M)^2}\left( \|k_{\lambda_M}^\Theta\|_{L^2(\mathbb{D}, \sigma_M)}^2 - 1\right)\to 0 ~\mbox{as}~ M \to \infty,\end{aligned}$$ since $1\leq \|k_{\lambda_N}^\Theta\|_{L^2(\mathbb{D}, \sigma_N)}^2\leq\sup_{z} \|k_{z}^\Theta\|_{L^2(\mathbb{D}, \sigma_N)}^2$ and, by hypothesis, we have that $\kappa_N \to 0$ and $\mathcal{R}_{\Theta}(\sigma_N)\to 1$. Hence, it is possible to choose an integer $M_0$ sufficiently large compared to $N_0$ so that for all $M>M_0$ $$\rho(\lambda_k,\lambda_{M})>\left(1-\varepsilon\right)^{\frac{1}{N_0}}\quad k<N_0$$ which implies that $$\label{e:smallk} \prod_{k<N_0} \rho(\lambda_k,\lambda_{M})>1-\varepsilon.$$ Now given $\varepsilon>0$, first select $N_0$ as above in . Then select $M_0$ so that holds. 
Then, for any $M>M_0$, we can write the product as $$\prod_{k\neq M} \rho(\lambda_k,\lambda_M)=\prod_{k<N_0} \rho(\lambda_k,\lambda_{M})\prod_{k\geq N_0, k\neq M} \rho(\lambda_k,\lambda_M)>(1-\varepsilon)^2.$$ For the first factor we have used to conclude that it is greater than $1-\varepsilon$, and for $M$ sufficiently large, by , the second factor is greater than $1-\varepsilon$ as well. Hence, $\Lambda$ is thin as claimed. Algebra Version {#asip_algebra} =============== We now compare the model-space version of our results with an algebra version. Theorem \[main\] requires that our inner function satisfy $\Theta(\lambda_n) \to 0$ for a thin interpolating sequence $\{\lambda_n\}$ to be an $AIS_{K_\Theta}$ sequence. Letting $B$ denote the Blaschke product corresponding to the sequence $\{\lambda_n\}$, denoting the algebra of continuous functions on the unit circle by $C$, and letting $H^\infty + C = \{f + g: f \in H^\infty, g \in C\}$ (see [@Sarason1] for more on this algebra), we can express this condition in the following way: $\Theta(\lambda_n) \to 0$ if and only if $\overline{B} \Theta \in H^\infty + C$. In other words, if and only if $B$ divides $\Theta$ in $H^\infty + C$, [@AG; @GIS]. We let $\mathcal{B}$ be a Douglas algebra; that is, a uniformly closed subalgebra of $L^\infty$ containing $H^\infty$. It will be helpful to use the maximal ideal space of our algebra. Throughout, $M(\mathcal{B})$ denotes the maximal ideal space of the algebra $\mathcal{B}$; that is, the set of nonzero continuous multiplicative linear functionals on $\mathcal{B}$. We now consider thin sequences in uniform algebras. This work is closely connected to the study of such sequences in general uniform algebras (see [@GM]) and the special case $\mathcal{B} = H^\infty$ is considered in [@HIZ]. With the weak-$\star$ topology, $M(\mathcal{B})$ is a compact Hausdorff space.
In interpreting our results below, it is important to recall that each $x \in M(H^\infty)$ has a unique extension to a linear functional of norm one on $L^\infty$ and, therefore, we may identify $M(\mathcal{B})$ with a subset of $M(H^\infty)$. In this context, the condition we will require (see Theorem \[main\_algebra\]) for an $EIS_\mathcal{B}$ sequence to be the same as an $AIS_\mathcal{B}$ sequence is that the sequence be thin near $M(\mathcal{B})$. We take the following as the definition (see [@SW]): An interpolating sequence $\{\lambda_n\}$ with corresponding Blaschke product $b$ is said to be [thin near $M(\mathcal{B})$]{.nodecor} if for any $0<\eta < 1$ there is a factorization $b = b_1 b_2$ with $b_1$ invertible in $\mathcal{B}$ and $$|b_2^\prime(\lambda_n)|(1 - |\lambda_n|^2) > \eta$$ for all $n$ such that $b_2(\lambda_n) = 0$. We will be interested in two related concepts that a sequence can have. We first introduce a norm on a sequence $\{a_n\}\in \ell^\infty$ that is induced by a second sequence $\{\lambda_n\}$ and a set $\mathcal{O}\supset M(\mathcal{B})$ that is open in $M(H^\infty)$. Set $I_\mathcal{O}=\{n\in{\mathbb{Z}}: \lambda_n\in\mathcal{O}\}$.
Then we define $${\ensuremath{\left\|a\right\|}}_{\mathcal{O},\ell^\infty}=\sup\{ {\ensuremath{\left\vert a_n\right\vert}}: n\in I_\mathcal{O}\}.$$ A Blaschke sequence $\{\lambda_n\}$ is an [eventual $1$-interpolating sequence in a Douglas algebra $\mathcal{B}$]{.nodecor}, denoted $EIS_{\mathcal{B}}$, if for every $\varepsilon > 0$ there exists an open set $\mathcal{O}\supset M(\mathcal{B})$ such that for each $\{a_n\} \in \ell^\infty$ there exists $f_{\mathcal{O}, a} \in H^\infty$ with $$f_{\mathcal{O}, a}(\lambda_n) = a_n ~\mbox{for}~ \lambda_n\in\mathcal{O} ~\mbox{and}~ \|f_{\mathcal{O}, a}\|_{\infty} \le (1 + \varepsilon) \|a\|_{\mathcal{O}, \ell^\infty}.$$ A Blaschke sequence $\{\lambda_n\}$ is a [strong asymptotic interpolating sequence in a Douglas algebra $\mathcal{B}$]{.nodecor}, denoted $AIS_{\mathcal{B}}$, if for all $\varepsilon > 0$ there exists an open set $\mathcal{O}\supset M(\mathcal{B})$ such that for all sequences $\{a_n\} \in \ell^\infty$ there exists a function $G_{\mathcal{O}, a} \in H^\infty$ such that $\|G_{\mathcal{O}, a}\|_{\infty} \le \|a\|_{\mathcal{O},\ell^\infty}$ and $$\|\{G_{\mathcal{O}, a}(\lambda_n) - a_n\}\|_{\mathcal{O}, \ell^\infty} < \varepsilon \|a\|_{\mathcal{O}, \ell^\infty}.$$ \[EISiffASI\_algebra\] Let $\mathcal{B}$ be a Douglas algebra. Let $\{\lambda_n\}$ be a Blaschke sequence of points in ${\mathbb{D}}$. Then $\{\lambda_n\}$ is an $EIS_{\mathcal{B}}$ sequence if and only if $\{\lambda_n\}$ is an $AIS_{\mathcal{B}}$. If a sequence is an $EIS_{\mathcal{B}}$, then it is trivially $AIS_{\mathcal{B}}$: given $\varepsilon > 0$ we may take $G_{\mathcal{O}, a} = \frac{f_{\mathcal{O}, a}}{1 + \varepsilon}$. For the other direction, suppose $\{\lambda_n\}$ is an $AIS_{\mathcal{B}}$ sequence. Let $\varepsilon > 0$ be given and let $\varepsilon^\prime < \frac{\varepsilon}{1 + \varepsilon}$. Let $\mathcal{O} \supset M(\mathcal{B})$ denote the open set we obtain from the definition of $AIS_{\mathcal{B}}$ corresponding to $\varepsilon^\prime$.
Reordering the points of the sequence in $\mathcal{O}$ so that they begin at $n = 1$ and occur in the same order, we let $T: H^\infty \to \ell^\infty$ be defined by $T(g) = \{g(\lambda_{n})\}$. We let $y_\mathcal{O}$ denote the corresponding reordered sequence. Then $T$ is a bounded linear operator between Banach spaces, so we may use Proposition \[Banachspace\] to choose $f \in H^\infty$ so that $Tf = y_\mathcal{O}$ and $\|f\|_{\infty} < \frac{1}{1 - \varepsilon^\prime} \|y_\mathcal{O}\|_{\ell^\infty} < (1 + \varepsilon) \|y_\mathcal{O}\|_{\ell^\infty}$ to complete the proof. Letting $\overline{\mathcal{B}}$ denote the set of functions with conjugate in $\mathcal{B}$, we mention one more set of equivalences. In [@SW Theorem 1] Sundberg and Wolff showed that an interpolating sequence $\{\lambda_n\}$ is thin near $M(\mathcal{B})$ if and only if for any bounded sequence of complex numbers $\{w_n\}$ there exists a function $f \in H^\infty \cap \overline{\mathcal{B}}$ such that $f(\lambda_n) = w_n$ for all $n$. Finally, we note that Earl ([@E Theorem 2] or [@E2]) proved that if an interpolating sequence $\{\lambda_n\}$ for the algebra $H^\infty$ satisfies $$\inf_n \prod_{j \ne n} \left|\frac{\lambda_j - \lambda_n}{1 - \overline{\lambda_j} \lambda_n}\right| \ge \delta > 0,$$ then for any bounded sequence $\{\omega_n\}$ and any $$\label{Earl} M > \frac{2 - \delta^2 + 2(1 - \delta^2)^{1/2}}{\delta^2} \sup_n |\omega_n|$$ there exists a Blaschke product $B$ and a real number $\alpha$ so that $$M e^{i \alpha} B(\lambda_j) = \omega_j~\mbox{for all}~j.$$ Using the results of Sundberg-Wolff and Earl, we obtain the following theorem. \[main\_algebra\] Let $\{\lambda_n\}$ in $\mathbb{D}$ be an interpolating Blaschke sequence and let $\mathcal{B}$ be a Douglas algebra. The following are equivalent: 1. $\{\lambda_n\}$ is an $EIS_{\mathcal{B}}$ sequence; \[EIS\_Douglas\] 2. $\{\lambda_n\}$ is an $AIS_{\mathcal{B}}$ sequence; \[AIS\_Douglas\] 3. $\{\lambda_n\}$ is thin near $M(\mathcal{B})$;\[nearthin\] 4.
for any bounded sequence of complex numbers $\{w_n\}$ there exists a function $f \in H^\infty \cap \overline{\mathcal{B}}$ such that $f(\lambda_n) = w_n$ for all $n$.\[SW\] The equivalence between and is contained in Theorem \[EISiffASI\_algebra\]. The equivalence of and is the Sundberg-Wolff theorem. We next prove that if a sequence is thin near $M(\mathcal{B})$, then it is an $EIS_{\mathcal{B}}$ sequence. We let $b$ denote the Blaschke product associated to the sequence $\{\lambda_n\}$. Given $\varepsilon>0$, choose $\gamma$ so that $$\left(\frac{1 + \sqrt{1 - \gamma^2}}{\gamma}\right)^2 < 1 + \varepsilon.$$ Choose a factorization $b = b_1^\gamma b_2^\gamma$ so that $\overline{b_1^\gamma} \in \mathcal{B}$ and $\delta(b_2^\gamma) = \inf\{(1 - |\lambda|^2)|(b_2^\gamma)^\prime(\lambda)| : b_2^\gamma(\lambda) = 0\} > \gamma$. Since $|b_1^\gamma| = 1$ on $M(\mathcal{B})$ and $\gamma < 1$, there exists an open set $\mathcal{O} \supset M(\mathcal{B})$ such that $|b_1^\gamma| > \gamma$ on $\mathcal{O}$. Note that if $b(\lambda) = 0$ and $\lambda \in \mathcal{O}$, then $b_2^\gamma(\lambda) = 0$. The condition on $b_2^\gamma$ coupled with Earl’s Theorem (see ), gives rise to functions $\{f_k^\gamma\}$ in $H^\infty$ (!), and hence in $\mathcal{B}$, so that $$\label{estimate} f_j^\gamma(\lambda_k) = \delta_{jk} \, ~\mbox{whenever}~ b_2^\gamma(\lambda_k) = 0 ~\mbox{and}~\sup_{z \in \mathbb{D}}\sum_{j}{\ensuremath{\left\vert f_j^\gamma(z)\right\vert}}\leq \left(\frac{1 + \sqrt{1 - \gamma^2}}{\gamma}\right)^2.$$ Now given $a\in\ell^\infty$, choose the corresponding P. Beurling functions (as in ) and let $$f^\gamma_{\mathcal{O}, a}=\sum_{j} a_j f_j^\gamma.$$ By construction we have that $f^\gamma_{\mathcal{O},a}(\lambda_n)=a_n$ for all $\lambda_n\in\mathcal{O}$. Also, by Earl’s estimate , we have that $${\ensuremath{\left\|f_{\mathcal{O},a}^{\gamma}\right\|}}_{\infty} \leq (1+\varepsilon)\|a\|_\infty.$$ Thus, implies . Finally, we claim implies . Suppose $\{\lambda_n\}$ is an $EIS_{\mathcal{B}}$ sequence.
Let $0 < \eta < 1$ be given and choose $\eta_1$ with $1/(1 + \eta_1) > \eta$. From the $EIS_{\mathcal{B}}$ property there is an open set $\mathcal{O} \supset M(\mathcal{B})$ in $M(H^\infty)$ and, applying the property to the sequences $\{\delta_{nm}\}_m$, functions $f_{\mathcal{O}, n} \in H^\infty$ with $$f_{\mathcal{O}, n}(\lambda_m) = \delta_{nm}~\mbox{for}~\lambda_m \in \mathcal{O}~\mbox{and}~\|f_{\mathcal{O}, n}\|_{\infty} \le 1 + \eta_1.$$ Let $b_2$ denote the Blaschke product whose zeros are the $\lambda_j$ lying in $\mathcal{O}$, $b_1$ the Blaschke product with the remaining zeros, and write $$f_{\mathcal{O}, n}(z) = \left(\prod_{j \ne n: b_2(\lambda_j) = 0} \frac{z - \lambda_j}{1 - \overline{\lambda_j}z}\right) h(z),$$ for some $h \in H^\infty.$ Then $\|h\|_{\infty} \le 1 + \eta_1$ and $$1 = |f_{\mathcal{O}, n}(\lambda_n)| = \left|\left(\prod_{j \ne n: b_2(\lambda_j) = 0} \frac{\lambda_n - \lambda_j}{1 - \overline{\lambda_j}\lambda_n}\right) h(\lambda_n)\right| \le (1 + \eta_1) \prod_{j \ne n: b_2(\lambda_j) = 0} \left|\frac{\lambda_n - \lambda_j}{1 - \overline{\lambda_j}\lambda_n}\right|.$$ Therefore $$(1 - |\lambda_n|^2)|b_2^\prime(\lambda_n)| = \prod_{j \ne n: b_2(\lambda_j) = 0} \left|\frac{\lambda_n - \lambda_j}{1 - \overline{\lambda_j}\lambda_n}\right| \ge 1/(1 + \eta_1) > \eta.$$ Now because we assume that $\{\lambda_n\}$ is interpolating, the Blaschke product $b = b_1 b_2$ with zeros at $\{\lambda_n\}$ will vanish at $x \in M(H^\infty)$ if and only if $x$ lies in the closure of the zeros $\{\lambda_n\}$, [@Hoffman]\*[p. 206]{} or [@Garnett]\*[p. 379]{}. Now, if we choose $\mathcal{V}$ open in $M(H^\infty)$ with $M(\mathcal{B}) \subset \mathcal{V} \subset \overline{\mathcal{V}} \subset \mathcal{O}$, then $b_1$ has no zeros in ${\mathcal{V}} \cap \mathbb{D}$ and, therefore, no point of $M(\mathcal{B})$ can lie in the closure of the zeros of $b_1$. So $b_1$ has no zeros on $M(\mathcal{B})$. Thus we see that $b_1$ is bounded away from zero on $M(\mathcal{B})$ and, consequently, $b_1$ is invertible in $\mathcal{B}$.
We note that we do not need the full assumption that $b$ is interpolating; it is enough to assume that $b$ does not vanish identically on a Gleason part contained in $M(\mathcal{B})$. Our goal, however, is to illustrate the difference in the Hilbert space and uniform algebra setting and so we have stated the most important setting for our problem. [^1]: $\dagger$ Research supported in part by Simons Foundation Grant 243653 [^2]: $\ddagger$ Research supported in part by a National Science Foundation DMS grant \# 0955432.
--- abstract: 'Transmission spectroscopy of exoplanets is a tool to characterize rocky planets and explore their habitability. Using the Earth itself as a proxy, we model the atmospheric cross section as a function of wavelength, and show the effect of each atmospheric species, Rayleigh scattering and refraction from 115 to 1000 nm. Clouds do not significantly affect this picture because refraction prevents the lowest 12.75 km of the atmosphere, in a transiting geometry for an Earth-Sun analog, from being sampled by a distant observer. We calculate the effective planetary radius for the primary eclipse spectrum of an Earth-like exoplanet around a Sun-like star. Below 200 nm, ultraviolet (UV) O$_2$ absorption increases the effective planetary radius by about 180 km, versus 27 km at 760.3 nm, and 14 km in the near-infrared (NIR) due predominantly to refraction. This translates into a 2.6% change in effective planetary radius over the UV-NIR wavelength range, showing that the ultraviolet is an interesting wavelength range for future space missions.' author: - 'Y. Bétrémieux and L. Kaltenegger' title: | Transmission spectrum of Earth as a transiting exoplanet\ from the ultraviolet to the near-infrared --- Introduction ============ Many planets smaller than Earth have now been detected with the Kepler mission, and with the realization that small planets are much more numerous than giant ones (Batalha et al. 2013), future space missions, such as the James Webb Space Telescope (JWST), are being planned to characterize the atmosphere of potential Earth analogs by transit spectroscopy, explore their habitability, and search for signs of life. The simultaneous detection of large abundances of either O$_{2}$ or O$_{3}$ in conjunction with a reducing species such as CH$_{4}$ or N$_2$O is a biosignature on Earth (see e.g. Des Marais et al. 2002; Kaltenegger et al. 2010a and references therein).
Although not a clear indicator of the presence of life, H$_2$O is essential for it. Simulations of the Earth’s spectrum as a transiting exoplanet (Ehrenreich et al. 2006; Kaltenegger & Traub 2009; Pallé et al. 2009; Vidal-Madjar et al. 2010; Rauer et al. 2011; García Muñoz et al. 2012; Hedelt et al. 2013) have focused primarily on the visible (VIS) to the infrared (IR), the wavelength range of JWST (600-5000 nm). No models of spectroscopic signatures of a transiting Earth have yet been computed from the mid- (MUV) to the far-ultraviolet (FUV). Which molecular signatures dominate this spectral range? In this paper, we present a model of a transiting Earth’s transmission spectrum from 115 to 1000 nm (UV-NIR) during primary eclipse. While no UV missions are currently in preparation, this model can serve as a basis for future UV mission concept studies. Model description {#model} ================= To simulate the spectroscopic signatures of an Earth-analog transiting its star, we modified the Smithsonian Astrophysical Observatory 1998 (SAO98) radiative transfer code (see Traub & Stier 1976; Johnson et al. 1995; Traub & Jucks 2002; Kaltenegger & Traub 2009 and references therein for details), which computes the atmospheric transmission of stellar radiation at high spectral resolution from a molecular line list database. Updates include a new database of continuous absorbers’ cross sections, as well as N$_2$, O$_2$, Ar, and CO$_2$ Rayleigh scattering cross sections from the ultraviolet (UV) to the near-infrared (NIR). A new module interpolates these cross sections and derives resulting optical depths according to the mole fraction of the continuous absorbers and the Rayleigh scatterers in each atmospheric layer. We also compute the deflection of rays by atmospheric refraction to exclude atmospheric regions for which no rays from the star can reach the observer due to the observing geometry.
Our database of continuous absorbers is based on the MPI-Mainz-UV-VIS Spectral Atlas of Gaseous Molecules[^1]. For each molecular species of interest (O$_2$, O$_3$, CO$_2$, CO, CH$_4$, H$_2$O, NO$_2$, N$_2$O, and SO$_2$), we created model cross sections composed of several measured cross sections from different spectral regions, at different temperatures when measurements are available, with priority given to higher spectral resolution measurements (see Table \[tbl\_crsc\]). We compute absorption optical depths for different altitudes in the atmosphere using the cross section model with the closest temperature to that of the atmospheric layer considered. Note that we do not consider line absorption from atomic or ionic species which could produce very narrow but possibly detectable features at high spectral resolution (see also Snellen et al. 2013). The Rayleigh cross sections, $\sigma_R$, of N$_2$, O$_2$, Ar, and CO$_2$, which make-up 99.999% of the Earth’s atmosphere, are computed with $$\label{rayl} {\sigma_{R}} = \frac{32\pi^{3}}{3} \left( \frac{{\nu_{0}}}{n_{0}} \right)^{2} w^{4} F_K ,$$ where $\nu_0$ is the refractivity at standard pressure and temperature (or standard refractivity) of the molecular species, $w$ is the wavenumber, $F_K$ is the King correction factor, and $n_{0}$ is Loschmidt’s constant. Various parametrized functions are used to describe the spectral dependence of $\nu_0$ and $F_K$. Table \[tbl\_rayl\] gives references for the functional form of both parameters, as well as their spectral region. The transmission of each atmospheric layer is computed with Beer’s law from all optical depths. We use disc-averaged quantities for our model atmosphere. We use a present-day Earth vertical composition (Kaltenegger et al. (2010b) for SO$_2$; Lodders & Fegley, Jr. (1998) for Ar; and Cox (2000) for all other molecules) up to 130 km altitude, unless specified otherwise. 
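As a quick numerical sketch of the Rayleigh cross-section formula above: the snippet below evaluates $\sigma_R = \frac{32\pi^3}{3}(\nu_0/n_0)^2 w^4 F_K$ in CGS units. The Loschmidt constant, the N$_2$ standard refractivity, and the King factor used here are illustrative fixed values, not the wavelength-dependent parametrizations referenced in Table \[tbl\_rayl\].

```python
import math

# Loschmidt's constant n0 (molecules per cm^3 at standard pressure and temperature)
N0 = 2.6868e19

def rayleigh_cross_section(nu0, w, F_K=1.0):
    """Rayleigh cross section in cm^2 from sigma_R = (32 pi^3 / 3) (nu0/n0)^2 w^4 F_K,
    with nu0 the dimensionless standard refractivity, w the wavenumber in cm^-1,
    and F_K the King correction factor."""
    return (32.0 * math.pi**3 / 3.0) * (nu0 / N0)**2 * w**4 * F_K

# Illustrative stand-in constants for N2 near 532 nm (assumed values):
# standard refractivity ~2.98e-4, King factor ~1.034.
sigma_N2 = rayleigh_cross_section(2.98e-4, 1.0e7 / 532.0, F_K=1.034)
```

The $w^4$ dependence is the familiar Rayleigh scaling: halving the wavelength increases the cross section sixteenfold.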
Above 130 km, we assume constant mole fraction with height for all the molecules except for SO$_2$, which we fix at zero, and for N$_2$, O$_2$, and Ar, which are described below. Below 100 km, we use the US 1976 atmosphere (COESA 1976) as the temperature-pressure profile. Above 100 km, the atmospheric density is sensitive to and increases with solar activity (Hedin 1987). We use the tabulated results of the MSIS-86 model, for solar maximum (Table A1.2) and solar minimum conditions (Table A1.1) published in Rees (1989), to derive the atmospheric density, pressure, and mole fractions for N$_2$, O$_2$, and Ar above 100 km. We run our simulations in two different spectral regimes. In the VIS-NIR, from 10000 to 25000 cm$^{-1}$ (400-1000 nm), we use a 0.05 cm$^{-1}$ grid, while in the UV from 25000 to 90000 cm$^{-1}$ (111-400 nm), we use a 0.5 cm$^{-1}$ grid. For displaying the results, the VIS-NIR and the UV simulations are binned on a 4 cm$^{-1}$ and a 20 cm$^{-1}$ grid, respectively. The choice in spectral resolution impacts predominantly the detectability of spectral features. The column abundance of each species along a given ray is computed taking into account refraction, tracing specified rays from the observer back to their source. Each ray intersects the top of the model atmosphere with an impact parameter $b$, the projected radial distance of the ray to the center of the planetary disc as viewed by the observer. As rays travel through the planetary atmosphere, they are bent by refraction along paths defined by an invariant $L = (1 + \nu(r)) r \sin\theta(r)$, where both the zenith angle, $\theta(r)$, of the ray, and the refractivity, $\nu(r)$, are functions of the radial position of the ray with respect to the center of the planet.
The refractivity is given by $$\label{refrac} \nu(r) = \left( \frac{n(r)}{n_{0}} \right) \sum_{j} f_{j}(r) {\nu_{0}}_{j} = \left( \frac{n(r)}{n_{0}} \right) \nu_{0}(r) ,$$ where ${\nu_{0}}_{j}$ is the standard refractivity of the j$^{th}$ molecular species while $\nu_{0}(r)$ is that of the atmosphere, $n(r)$ is the local number density, and $f_{j}(r)$ is the mole fraction of the j$^{th}$ species. Here, we only consider the main contributors to the refractivity (N$_2$, O$_2$, Ar, and CO$_2$), which are well mixed in the Earth’s atmosphere, and fix the standard refractivity at all altitudes at its surface value. If we assume a zero refractivity at the top of the atmosphere, the minimum radial position from the planet’s center, $r_{min}$, that can be reached by a ray is related to its impact parameter by $$\label{refpath} L = (1 + \nu(r_{min})) r_{min} = R_{top} \sin\theta_{0} = b ,$$ where $R_{top}$ is the radial position of the top of the atmosphere and $\theta_{0}$ is the zenith angle of the ray at the top of the atmosphere. Note that $b$ is always larger than $r_{min}$, and therefore the planet appears slightly larger to a distant observer. For each ray, we specify $r_{min}$, compute $\nu(r_{min})$, and obtain the corresponding impact parameter. Then, each ray is traced through the atmosphere at every 0.1 km altitude increment, and column abundances, average mole fractions, as well as cumulative deflection along the ray are computed for each atmospheric layer (Johnson et al. 1995; Kaltenegger & Traub 2009). We characterize the transmission spectrum of the exoplanet using effective atmospheric thickness, $\Delta z_{eff}$, the increase in planetary radius due to atmospheric absorption during primary eclipse. To compute $\Delta z_{eff}$ for an exoplanet, we first specify $r_{min}$ for $N$ rays spaced in constant altitude increments over the atmospheric region of interest.
We then compute the transmission, $T$, and impact parameter, $b$, of each ray through the atmosphere, and finally use $$\begin{aligned} R_{eff}^{2} = R_{top}^{2} - \sum_{i = 1}^{N} \left( \frac{T_{i+1} + T_{i}}{2} \right) (b_{i+1}^{2} - b_{i}^{2}) \label{reff} \\ R_{top} = R_{p} + \Delta z_{atm} \\ \Delta z_{eff} = R_{eff} - R_{p} , \end{aligned}$$ where $R_{eff}$ is the effective radius of the planet, $R_{top}$ is the radial position of the top of the atmosphere, $R_{p}$ is the planetary radius (6371 km), $\Delta z_{atm}$ is the thickness of the atmosphere, and $i$ denotes the ray considered. Note that index $(N+1)$ refers to a ray that grazes the top of the atmosphere. The rays define $N$ projected annuli whose transmission is the average of the values at the borders of the annulus. The top of the atmosphere is defined where the transmission is 1 and no bending occurs ($b_{N+1} = R_{top}$). We choose $R_{top}$ where atmospheric absorption and refraction are negligible, and use 100 km in the VIS-NIR and 200 km in the UV for $\Delta z_{atm}$. To first order, the total deflection of a ray through an atmosphere is proportional to the refractivity of the deepest atmospheric layer reached by the ray (Goldsmith 1963). The planetary atmosphere density increases exponentially with depth; therefore, some of the deeper atmospheric regions can bend all rays away from the observer (see e.g. Sidis & Sari 2010, García Muñoz et al. 2012), and will not be sampled by the observations. The altitudes at which this occurs depend on the angular extent of the star with respect to the planet. For an Earth-Sun analog, rays that reach a distant observer are deflected on average no more than 0.269$\degr$. We calculate that the lowest altitude reached by these grazing rays ranges from about 14.62 km at 115 nm, through 13.86 km at 198 nm (the shortest wavelength for which all used molecular standard refractivities are measured), to 12.95 km at 400 nm and 12.75 km at 1000 nm. 
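The quadrature in Eq. (\[reff\]) amounts to weighting each projected annulus by the mean transmission of its two bounding rays. A minimal sketch (with an illustrative 1 km ray spacing, not the 80-ray grids used here) that checks the transparent and opaque limits:

```python
import math

def effective_radius(R_top, b, T):
    """Eq. (reff): effective radius from the impact parameters b[0..N] and
    transmissions T[0..N] of N+1 rays, with b[N] = R_top (grazing ray)."""
    R2 = R_top ** 2
    for i in range(len(b) - 1):
        # annulus between rays i and i+1, weighted by its mean transmission
        R2 -= 0.5 * (T[i] + T[i + 1]) * (b[i + 1] ** 2 - b[i] ** 2)
    return math.sqrt(R2)

R_p, dz = 6371.0, 100.0
b = [R_p + k for k in range(int(dz) + 1)]   # 101 rays, 1 km apart
# Transparent limit (T = 1 everywhere): the sum telescopes and the planet
# shows its solid radius b[0]; opaque limit (T = 0): it looks as big as R_top.
assert abs(effective_radius(R_p + dz, b, [1.0] * len(b)) - b[0]) < 1e-6
assert abs(effective_radius(R_p + dz, b, [0.0] * len(b)) - (R_p + dz)) < 1e-6
```

Any intermediate transmission profile yields an effective radius between these two limits, and $\Delta z_{eff}$ follows by subtracting $R_p$.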
As this altitude is relatively constant in the VIS-NIR, we incorporate this effect in our model by excluding atmospheric layers below 12.75 km. To determine the effective planetary radius, we choose standard refractivities representative of the lowest opacities within each spectral region: 2.88$\times10^{-4}$ for the VIS-NIR, and 3.00$\times10^{-4}$ for the UV. We use 80 rays from 12.75 to 100 km in the VIS-NIR, and 80 rays from 12.75 to 200 km altitude in the UV. In the UV, the lowest atmospheric layers have a negligible transmission, thus the exact exclusion value of the lowest atmospheric layer, calculated to be between 14.62 and 12.75 km, does not impact the modeled UV spectrum. Results and discussion {#discussion} ====================== The increase in planetary radius due to the additional atmospheric absorption of a transiting Earth-analog is shown in Fig. \[spectrum\] from 115 to 1000 nm. The individual contribution of Rayleigh scattering by N$_2$, O$_2$, Ar, and CO$_2$ is also shown, with and without the effect of refraction by these same species. The individual contributions of each species, shown both in the lower panel of Fig. \[spectrum\] and in Fig. \[absorbers\], are computed by sampling all atmospheric layers down to the surface, assuming the species considered is the only one with a non-zero opacity. In the absence of absorption, the effective atmospheric thickness is about 1.8 km, rather than zero, because the bending of the rays due to refraction makes the planet appear larger to a distant observer. The spectral region shortward of 200 nm is shaped by O$_2$ absorption and depends on solar activity. Amongst the strongest O$_2$ features are two narrow peaks around 120.5 and 124.4 nm, which increase the planetary radius by 179-185 and 191-195 km, respectively. The strongest O$_2$ feature, the broad Schumann-Runge continuum, increases the planetary radius by more than 150 km from 134.4 to 165.5 nm, and peaks around 177-183 km. 
The Schumann-Runge bands, from 180 to 200 nm, create maximum variations of 30 km in the effective planetary radius. O$_2$ features can also be seen in the VIS-NIR, but these are much smaller than in the UV. Two narrow peaks around 687.0 and 760.3 nm increase the planetary radius to about 27 km, at the spectral resolution of the simulation. Ozone absorbs in two different broad spectral regions in the UV-NIR, increasing the planetary radius by 66 km around 255 nm (Hartley band) and 31 km around 602 nm (Chappuis band). Narrow ozone absorption from 310 to 360 nm (Huggins band) produces variations in the effective planetary radius no larger than 2.5 km. Weak ozone bands are also present throughout the VIS-NIR: all features not specifically identified on the small VIS-NIR panel in Fig. \[spectrum\] are O$_3$ features, and show changes in the effective planetary radius on the order of 1 km. NO$_2$ and H$_2$O are the only other molecular absorbers that create observable features in the spectrum (Fig. \[spectrum\], small VIS-NIR panel). NO$_2$ shows a very weak band system in the visible shortward of 510 nm, which produces less than 1 km variations in the effective planetary radius. H$_2$O features are observable only around 940 nm, where they increase the effective planetary radius to about 14.5 km. Rayleigh scattering (Fig. \[spectrum\]) increases the planetary radius by about 68 km at 115 nm, 27 km at 400 nm, and 5.5 km at 1000 nm, and dominates the spectrum from about 360 to 510 nm, where few molecules in the Earth’s atmosphere absorb and refraction is not yet the dominant effect. In this spectral region, NO$_2$ is the dominant molecular absorber, but its absorption is much weaker than Rayleigh scattering. The lowest 12.75 km of the atmosphere is not accessible to a distant observer because no rays below that altitude can reach the observer in a transiting geometry for an Earth-Sun analog. 
Clouds located below that altitude do not influence the spectrum and can therefore be ignored in this geometry. Figure \[spectrum\] also shows that refraction influences the observable spectrum for wavelengths larger than 400 nm. The combined effects of refraction and Rayleigh scattering increase the planetary radius by about 27 km at 400 nm, 16 km at 700 nm, and 14 km at 1000 nm. In the UV, the lowest 12.75 km of the atmosphere have negligible transmission, so this atmospheric region cannot be seen by a distant observer irrespective of refraction. Both Rayleigh scattering and refraction can mask some of the signatures from molecular species. For instance, the individual contribution of the H$_2$O band in the 900-1000 nm region can increase the planetary radius by about 10 km. However, H$_2$O is concentrated in the lowest 10-15 km of the Earth’s atmosphere, the troposphere, hence its amount above 12.75 km increases the planetary radius above the refraction threshold only by about 1 km around 940 nm. The continuum around the visible O$_2$ features is due to the combined effects of Rayleigh scattering, ozone absorption, and refraction. It increases the effective planetary radius by about 21 and 17 km around 687.0 and 760.3 nm, respectively. The visible O$_2$ features add 6 and 10 km to the continuum values, at the spectral resolution of the simulation. [Figure \[data\] compares our model effective atmospheric thickness from Fig. \[spectrum\] with the one deduced by Vidal-Madjar et al. (2010) from Lunar eclipse data obtained in the penumbra. The contrasts of the two O$_2$ features are comparable with those in the data. However, there is a slight offset (about 3.5 km) and a tilt in the main O$_3$ absorption profile. Note that Vidal-Madjar et al. (2010) estimate that several sources of systematic errors and statistical uncertainties prevent them from obtaining absolute values better than $\pm$2.5 km. Also, we do not include limb darkening in our calculations. 
However, for a transiting Earth, the atmosphere eclipses an annular region on the Sun, whereas during Lunar eclipse observations, it eclipses a band across the Sun (see Fig. 4 in Vidal-Madjar et al. 2010), leading to different limb darkening effects.]{} Many molecules, such as CO$_2$, H$_2$O, CH$_4$, and CO, absorb ultraviolet radiation shortward of 200 nm (see Fig. \[absorbers\]). However, for Earth, O$_2$ absorption dominates in this region and effectively masks their signatures. For planets without molecular oxygen, the far UV would still show strong absorption features that increase the planet’s effective radius by a higher percentage than in the VIS to NIR wavelength range. Conclusions =========== The UV-NIR spectrum (Fig. \[spectrum\]) of a transiting Earth-like exoplanet can be divided into 5 broad spectral regions characterized by the species or process that predominantly increases the planet’s radius: one O$_2$ region (115-200 nm), two O$_3$ regions (200-360 nm and 510-700 nm), one Rayleigh scattering region (360-510 nm), and one refraction region (700-1000 nm). From 115 to 200 nm, O$_2$ absorption increases the effective planetary radius by up to 177-183 km, except for a narrow feature at 124.4 nm where it goes up to 191-195 km, depending on solar conditions. Ozone increases the effective planetary radius by up to 66 km in the 200-360 nm region, and up to 31 km in the 510-700 nm region. From 360 to 510 nm, Rayleigh scattering predominantly increases the effective planetary radius by up to 31 km. Above 700 nm, refraction and Rayleigh scattering increase the effective planetary radius to a minimum of 14 km, masking H$_2$O bands which only produce a further increase of at most 1 km. Narrow O$_2$ absorption bands around 687.0 and 760.3 nm both increase the effective planetary radius to 27 km, that is, 6 and 10 km above the continuum, respectively. NO$_2$ only produces variations on the order of 1 km or less above the continuum between 400 and 510 nm. 
One can use the NIR as a baseline against which the other regions in the UV-NIR can be compared to determine that an atmosphere exists. From the peak of the O$_2$ Schumann-Runge continuum in the FUV to the NIR continuum, the effective planetary radius changes by about 166 km, which translates into a 2.6% change. The increase in the effective radius of the Earth in the UV due to O$_2$ absorption shows that this wavelength range is very interesting for future space missions. This increase in effective planetary radius has to be traded off against the lower available stellar flux in the UV as well as the instrument sensitivity at different wavelengths for future mission studies. For habitable planets with atmospheres different from Earth’s, other molecules, such as CO$_2$, H$_2$O, CH$_4$, and CO, would dominate the absorption of ultraviolet radiation shortward of 200 nm, providing an interesting alternative to explore a planet’s atmosphere. [10]{} Ackerman, M. 1971, Mesospheric Models and Related Experiments, G. Fiocco, Dordrecht:D. Reidel Publishing Company, 149 Au, J. W., & Brion, C. E. 1997, Chem. Phys., 218, 109 Batalha, N. M., Rowe, J. F., Bryson, S. T., et al. 2013, , 204, 24 Bates, D. R. 1984, , 32, 785 Bideau-Mehu, A., Guern, Y., Abjean, R., & Johannin-Gilles, A. 1973, Opt. Commun., 9, 432 Bideau-Mehu, A., Guern, Y., Abjean, R., & Johannin-Gilles, A. 1981, , 25, 395 Bogumil, K., Orphal, J., Homann, T., Voigt, S., Spietz, P., Fleischmann, O. C., Vogel, A., Hartmann, M., Bovensmann, H., Frerick, J., & Burrows, J. P. 2003, J. Photochem. Photobiol. A.: Chem., 157, 167 Brion, J., Chakir, A., Daumont, D., Malicet, J., & Parisse, C. 1993, Chem. Phys. Lett., 213, 610 Chan, W. F., Cooper, G., & Brion, C. E. 1993, Chem. Phys., 170, 123 Chen, F. Z., & Wu, C. Y. R. 2004, , 85, 195 COESA (Committee on Extension to the Standard Atmosphere) 1976, U.S. Standard Atmosphere, Washington, D.C.:Government Printing Office Cox, A. N. 
2000, Allen’s Astrophysical Quantities, 4th ed., New York:AIP Des Marais, D. J., Harwit, M. O., Jucks, K. W., Kasting, J. F., Lin, D. N. C., Lunine, J. I., Schneider, J., Seager, S., Traub, W. A., & Woolf, N. J. 2010, Astrobiology, 2, 153 Ehrenreich, D., Tinetti, G., Lecavelier des Etangs, A., Vidal-Madjar, A., & Selsis, F. 2006, , 448, 379 Fally, S., Vandaele, A. C., Carleer, M., Hermans, C., Jenouvrier, A., Mérienne, M.-F., Coquart, B., & Colin, R. 2000, J. Mol. Spectrosc., 204, 10 García Muñoz, A., Zapatero Osorio, M. R., Barrena, R., Montañés-Rodríguez, P., Martín, E. L., & Pallé, E. 2012, , 755, 103 Goldsmith, W. W. 1963, , 2, 341 Griesmann, U., & Burnett, J. H. 1999, Optics Letters, 24, 1699 Hedelt, P., von Paris, P., Godolt, M., Gebauer, S., Grenfell, J. L., Rauer, H., Schreier, F., Selsis, F., & Trautmann, T. 2013, , submitted (arXiv:astro-ph/1302.5516) Hedin, A. E. 1987, , 92, 4649 Huestis, D. L. & Berkowitz, J. 2010, Advances in Geosciences, 25, 229 Jenouvrier, A., Coquart, B., & Mérienne, M. F. 1996, J. Atmos. Chem., 25, 21 Johnson, D. G., Jucks, K. W., Traub, W. A., & Chance, K. V. 1995, , 100, 3091 Kaltenegger, L., & Traub, W. A. 2009, , 698, 519 Kaltenegger, L., Selsis, F., Friedlund, M., Lammer, H., et al. 2010a, Astrobiology, 10, 89 Kaltenegger, L., Henning, W. G., & Sasselov, D. D. 2010b, , 140, 1370 Lee, A. Y. T., Yung, Y. L., Cheng, B. M., Bahou, M., Chung, C.-Y., & Lee, Y. P. 2001, , 551, L93 Lodders, K., & Fegley, Jr., B. 1998, The Planetary Scientist’s Companion, New York:Oxford University Press Lu, H.-C., Chen, K.-K., Chen, H.-F., Cheng, B.-M., & Ogilvie, J. F. 2010, , 520, A19 Manatt, S. L., & Lane, A. L. 1993, , 50, 267 Mason, N. J., Gingell, J. M., Davies, J. A., Zhao, H., Walker, I. C., & Siggel, M. R. F. 1996, J. Phys. B: At. Mol. Opt. Phys., 29, 3075 Mérienne, M. F., Jenouvrier, A., & Coquart, B. 1995, J. Atmos. Chem., 20, 281 Mota, R., Parafita, R., Giuliani, A., Hubin-Franskin, M.-J., Lourenço, J. M. C., Garcia, G., Hoffmann, S. 
V., Mason, M. J., Ribeiro, P. A., Raposo, M., & Limão-Vieira, P. 2005, Chem. Phys. Lett., 416, 152 Nakayama, T., Kitamura, M. T., & Watanabe, K. 1959, J. Chem. Phys., 30, 1180 Pallé, E., Zapatero Osorio, M. R., Barrena, R., Montañés-Rodríguez, P., & Martín, E. L. 2009, , 459, 814 Rauer, H., Gebauer, S., von Paris, P., Cabrera, J., Godolt, M., Grenfell, J. L., Belu, A., Selsis, F., Hedelt, P., & Schreier, F. 2011, , 529, A8 Rees, M. H. 1989, Physics and Chemistry of the Upper Atmosphere, 1$^{st}$ ed., Cambridge:Cambridge University Press Schneider, W., Moortgat, G. K., Burrows, J. P., & Tyndall, G. S. 1987, J. Photochem. Photobiol., 40, 195 Selwyn, G., Podolske, J., & Johnston, H. S. 1977, , 4, 427 Sidis, O., & Sari, R. 2010, , 720, 904 Sneep, M. & Ubachs, W. 2005, , 92, 293 Snellen, I., de Kok, R., Le Poole, R., Brogi, M., & Birkby, J. 2013, , submitted (arXiv:astro-ph/1302.3251) Traub, W. A., & Jucks, K. W. 2002, AGU Geophysical Monograph Ser. 130, Atmospheres in the Solar System: Comparative Aeronomy, M. Mendillo, 369 Traub, W. A., & Stier, M. T. 1976, , 15, 364 Vandaele, A. C., Hermans, C., & Fally, S. 2009, , 110, 2115 Vidal-Madjar, A., Arnold, A., Ehrenreich, D., Ferlet, R., Lecavelier des Etangs, A., Bouchy, F., et al. 2010, , 523, A57 Wu, C. Y. R., Yang, B. W., Chen, F. Z., Judge, D. L., Caldwell, J., & Trafton, L. M. 2000, , 145, 289 Yoshino, K., Cheung, A. S.-C., Esmond, J. R., Parkinson, W. H., Freeman, D. E., Guberman, S. L., Jenouvrier, A., Coquart, B., & Mérienne, M. F. 1988, , 36, 1469 Yoshino, K., Esmond, J. R., Cheung, A. S.-C., Freeman, D. E., & Parkinson, W. H. 1992, , 40, 185 Zelikoff, M., Watanabe, K., & Inn, E. C. Y. 1953, , 21, 1643 [cccc]{} O$_{2}$ & 303 & 115.0 - 179.2 & Lu et al. (2010)\ & 300 & 179.2 - 203.0 & Yoshino et al. (1992)\ & 298 & 203.0 - 240.5 & Yoshino et al. (1988)\ & 298 & 240.5 - 294.0 & Fally et al. (2000)\ O$_{3}$ & 298 & 110.4 - 150.0 & Mason et al. 
(1996)\ & 298 & 150.0 - 194.0 & Ackerman (1971)\ & 218 & 194.0 - 230.0 & Brion et al. (1993)\ & 293, 273, 243, 223 & 230.0 - 1070.0 & Bogumil et al. (2003)\ NO$_{2}$ & 298 & 15.5 - 192.0 & Au & Brion (1997)\ & 298 & 192.0 - 200.0 & Nakayama et al. (1959)\ & 298 & 200.0 - 219.0 & Schneider et al. (1987)\ & 293 & 219.0 - 500.01 & Jenouvrier et al. (1996) + Mérienne et al. (1995)\ & 293, 273, 243, 223 & 500.01 - 930.1 & Bogumil et al. (2003)\ CO & 298 & 6.2 - 177.0 & Chan et al. (1993)\ CO$_{2}$ & 300 & 0.125 - 201.6 & Huestis & Berkowitz (2010)\ H$_{2}$O & 298 & 114.8 - 193.9 & Mota et al. (2005)\ CH$_{4}$ & 295 & 120.0 - 142.5 & Chen & Wu (2004)\ & 295 & 142.5 - 152.0 & Lee et al. (2001)\ N$_{2}$O & 298 & 108.2 - 172.5 & Zelikoff et al. (1953)\ & 302, 263, 243, 225, 194 & 172.5 - 240.0 & Selwyn et al. (1977)\ SO$_{2}$ & 293 & 106.1 - 171.95 & Manatt & Lane (1993)\ & 295 & 171.95 - 262.53 & Wu et al. (2000)\ & 358, 338, 318, 298 & 262.53 - 416.66 & Vandaele et al. (2009)\ [ccc]{} N$_{2}$ & 149 - 189 & Griesmann & Burnett (1999)\ & 189 - 2060 & Bates (1984)\ O$_{2}$ & 198 - 546 & Bates (1984)\ Ar & 140 - 2100 & Bideau-Mehu et al. (1981)\ CO$_{2}$ & 180 - 1700 & Bideau-Mehu et al. (1973)\ N$_{2}$ & $\geq$ 200 & Bates (1984)\ O$_{2}$ & $\geq$ 200 & Bates (1984)\ Ar & all & Bates (1984)\ CO$_{2}$ & 180 - 1700 & Sneep & Ubachs (2005)\ [^1]: Hannelore Keller-Rudek, Geert K. Moortgat, MPI-Mainz-UV-VIS Spectral Atlas of Gaseous Molecules, www.atmosphere.mpg.de/spectral-atlas-mainz
--- abstract: 'We propose a construction of string cohomology spaces for Calabi-Yau hypersurfaces that arise in Batyrev’s mirror symmetry construction. The spaces are defined explicitly in terms of the corresponding reflexive polyhedra in a mirror-symmetric manner. We draw connections with other approaches to the string cohomology, in particular with the work of Chen and Ruan.' address: - 'Department of Mathematics, Columbia University, New York, NY 10027, USA' - 'Max-Planck-Institut für Mathematik, Bonn, D-53111, Germany' author: - 'Lev A. Borisov' - 'Anvar R. Mavlyutov' title: 'String cohomology of Calabi-Yau hypersurfaces via Mirror Symmetry' --- Introduction {#section.intro} ============ The notion of orbifold cohomology appeared in physics as a result of studying string theory on orbifold global quotients (see [@dhvw]). In addition to the usual cohomology of the quotient, this space was supposed to include the so-called twisted sectors, whose existence was predicted by the modular invariance condition on the partition function of the theory. Since then, there have been several attempts to give a rigorous mathematical formulation of this cohomology theory. The first two, due to [@bd] and [@Batyrev.cangor], tried to define the topological invariants of certain algebraic varieties (including orbifold global quotients) that should correspond to the dimensions of the Hodge components of a conjectural string cohomology space. These invariants should have a property arising naturally from physics: they are preserved by partial crepant resolutions; moreover, they coincide with the usual Hodge numbers for smooth varieties. Also, these invariants must be the same as those defined by physicists for orbifold global quotients. 
In [@Batyrev.cangor; @Batyrev.nai], Batyrev has successfully solved this problem for a large class of singular algebraic varieties. The first mathematical definition of the orbifold cohomology [*space*]{} was given in [@cr] for arbitrary orbifolds. Moreover, this orbifold cohomology possesses a product structure arising as a limit of a natural quantum product. It is still not entirely clear if the dimensions of the Chen-Ruan cohomology coincide with the prescription of Batyrev whenever both are defined, but they do give the same result for reduced global orbifolds. In this paper, we propose a construction of string cohomology spaces for Calabi-Yau hypersurfaces that arise in the Batyrev mirror symmetry construction (see [@b2]), with the spaces defined rather explicitly in terms of the corresponding reflexive polyhedra. A peculiar feature of our construction is that instead of a single string cohomology space we construct a finite-dimensional family of such spaces, which is consistent with the physicists’ picture (see [@Greene]). We verify that this construction is consistent with the previous definitions in [@bd], [@Batyrev.cangor] and [@cr], in the following sense. The (bigraded) dimension of our space coincides with the definitions of [@bd] and [@Batyrev.cangor]. In the case of hypersurfaces that have only orbifold singularities, we recover Chen-Ruan’s orbifold cohomology as one special element of this family of string cohomology spaces. We also conjecture a partial natural ring structure on our string cohomology space, which is in correspondence with the cohomology ring of crepant resolutions. This may be used as a real test of the Chen-Ruan orbifold cohomology ring. We go further, and conjecture the B-model chiral ring on the string cohomology space. This is again consistent with the description of the B-model chiral ring of smooth Calabi-Yau hypersurfaces in [@m2]. 
Our construction of the string cohomology space for Calabi-Yau hypersurfaces is motivated by Mirror Symmetry. Namely, the description in [@m3] of the cohomology of semiample hypersurfaces in toric varieties applies to the smooth Calabi-Yau hypersurfaces in [@b2]. Analysis of Mirror Symmetry on this cohomology leads to a natural construction of the string cohomology space for all semiample Calabi-Yau hypersurfaces. As already mentioned, our string cohomology space depends not only on the complex structure (the defining polynomial $f$), but also on some extra parameter we call $\omega$. For special values of this parameter of an orbifold Calabi-Yau hypersurface, we get the orbifold Dolbeault cohomology of [@cr]. However, for non-orbifold Calabi-Yau hypersurfaces, there is no natural special choice of $\omega$, which means that the general definition of the string cohomology space should depend on some mysterious extra parameter. In the situation of Calabi-Yau hypersurfaces, the parameter $\omega$ corresponds to the defining polynomial of the mirror Calabi-Yau hypersurface. In general, we expect that this parameter should be related to the “stringy complexified Kähler class”, which is yet to be defined. In an attempt to extend our definitions beyond the Calabi-Yau hypersurface case, we give a conjectural definition of string cohomology vector spaces for stratified varieties with $\QQ$-Gorenstein toroidal singularities that satisfy certain restrictions on the types of singular strata. This definition involves intersection cohomology of the closures of strata, and we check that it produces spaces of correct bigraded dimension. It also reproduces orbifold cohomology of a $\QQ$-Gorenstein toric variety as a special case. Here is an outline of our paper. In Section \[s:sth\], we examine the connection between the original definition of the [*string-theoretic*]{} Hodge numbers in [@bd] and the [*stringy*]{} Hodge numbers in [@Batyrev.cangor]. 
We point out that these do not always give the same result and argue that the latter definition is the more useful one. In Section \[section.mirr\], we briefly review the mirror symmetry construction of Batyrev, mainly to fix our notations and to describe the properties we will use in the derivation of the string cohomology. Section \[section.anvar\] describes the cohomology of semiample hypersurfaces in toric varieties and explains how mirror symmetry provides a conjectural definition of the string cohomology of Calabi-Yau hypersurfaces. It culminates in Conjecture \[semiampleconj\], where we define the stringy Hodge spaces of semiample Calabi-Yau hypersurfaces in complete toric varieties. We spend most of the remainder of the paper establishing the expected properties of the string cohomology space. Sections \[s:hd\] and \[section.brel\] calculate the dimensions of the building blocks of our cohomology spaces. In Section \[section.brel\], we develop a theory of deformed semigroup rings which may be of independent interest. This allows us to show in Section \[section.bbo\] that Conjecture \[semiampleconj\] is compatible with the definition of the stringy Hodge numbers from [@Batyrev.cangor]. In the non-simplicial case, this requires the use of $G$-polynomials of Eulerian posets, whose relevant properties are collected in the Appendix. Having established that the dimension is correct, we try to extend our construction to the non-hypersurface case. Section \[section.general\] gives another conjectural definition of the string cohomology vector space in a somewhat more general situation. It hints that the intersection cohomology and the perverse sheaves should play a prominent role in future definitions of string cohomology. In Section \[s:vs\], we connect our work with that of Chen-Ruan [@cr] and Poddar [@p]. 
Finally, in Section \[section.vertex\], we provide yet another description of the string cohomology of Calabi-Yau hypersurfaces, which was inspired by the vertex algebra approach to Mirror Symmetry. [*Acknowledgments.*]{} We thank Victor Batyrev, Robert Friedman, Mainak Poddar, Yongbin Ruan and Richard Stanley for helpful conversations and useful references. The second author also thanks the Max-Planck Institut für Mathematik in Bonn for its hospitality and support. String-theoretic and stringy Hodge numbers {#s:sth} ========================================== The [*string-theoretic*]{} Hodge numbers were first defined in the paper of Batyrev and Dais (see [@bd]) for varieties with Gorenstein toroidal or quotient singularities. In subsequent papers [@Batyrev.cangor; @Batyrev.nai] Batyrev defined [*stringy*]{} Hodge numbers for arbitrary varieties with log-terminal singularities. To our knowledge, the relationship between these two concepts has never been clarified in the literature. The goal of this section is to show that the string-theoretic Hodge numbers coincide with the stringy ones under some conditions on the singular strata. We begin with the definition of the string-theoretic Hodge numbers. \[d:bd\] [@bd] Let $X = \bigcup_{i \in I} X_i$ be a stratified algebraic variety over $\CC$ with at most Gorenstein toroidal singularities such that for each $i \in I$ the singularities of $X$ along the stratum $X_i$ of codimension $k_i$ are defined by a $k_i$-dimensional finite rational polyhedral cone $\sigma_i$; i.e., $X$ is locally isomorphic to $${\CC}^{k-k_i} \times U_{\sigma_i}$$ at each point $x \in X_i$, where $U_{\sigma_i}$ is a $k_i$-dimensional affine toric variety which is associated with the cone $\sigma_i$ (see [@d]), and $k=\dim X$. Then the polynomial $$E^{\rm BD}_{\rm st}(X;u,v) := \sum_{i \in I} E(X_i;u,v) \cdot S(\sigma_i,uv)$$ is called the [*string-theoretic E-polynomial*]{} of $X$. 
Here, $$S(\sigma_i,t):=(1-t)^{\dim \sigma_i}\sum_{n\in \sigma_i} t^{\deg n}= (t-1)^{\dim \sigma_i}\sum_{n\in {\rm int} \sigma_i} t^{-\deg n}$$ where $\deg$ is the linear function on $\sigma_i$ that takes value $1$ on the generators of one-dimensional faces of $\sigma_i$, and ${\rm int}\sigma_i$ is the relative interior of $\sigma_i$. If we write $E^{\rm BD}_{\rm st}(X;u,v)$ in the form $$E^{\rm BD}_{\rm st}(X;u,v) = \sum_{p,q} a_{p,q} u^{p}v^{q},$$ then the numbers $h^{p,q{\rm (BD)}}_{\rm st}(X) := (-1)^{p+q}a_{p,q}$ are called the [*string-theoretic Hodge numbers*]{} of $X$. \[r:epol\] The E-polynomial in the above definition is defined for an arbitrary algebraic variety $X$ as $$E(X;u,v)=\sum_{p,q}e^{p,q}u^p v^q,$$ where $e^{p,q}=\sum_{k\ge0}(-1)^k h^{p,q}(H^k_c(X))$. Stringy Hodge numbers of $X$ are defined in terms of the resolutions of its singularities. In general, one can only define the $E$-function in this case, which may or may not be a polynomial. We refer to [@Kollar] for the definitions of log-terminal singularities and related issues. \[d:bcangor\][@Batyrev.cangor] Let $X$ be a normal irreducible algebraic variety with at worst log-terminal singularities, and let $\rho\, : \, Y \rightarrow X$ be a resolution of singularities whose exceptional locus is a divisor with simple normal crossings, with irreducible components $D_1, \ldots, D_r$. Let $a_j>-1$ be the discrepancy of $D_j$, see [@Kollar]. Set $I: = \{1, \ldots, r\}$. For any subset $J \subset I$ we consider $$D_J := \left\{ \begin{array}{ll} \bigcap_{ j \in J} D_j & \mbox{\rm if $J \neq \emptyset$} \\ Y & \mbox{\rm if $J = \emptyset$} \end{array} \right. 
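As a concrete check of this definition (our illustrative computation, not one carried out in [@bd]), take the two-dimensional cone $\sigma={\rm cone}((1,0),(1,2))$ of an $A_1$ surface singularity, with $\deg(a,b)=a$; at degree $k$ the cone contains the $2k+1$ lattice points $(k,0),\ldots,(k,2k)$, and a brute-force truncated series recovers $S(\sigma,t)=1+t$:

```python
def S_A1(kmax=50):
    """Brute-force evaluation of S(sigma, t) from Definition [d:bd] for the
    A_1 cone sigma = cone((1,0),(1,2)), where deg(a, b) = a.  Multiplies the
    truncated series sum_k (2k+1) t^k by (1-t)^2 and returns the coefficient
    list (lowest degree first)."""
    series = [2 * k + 1 for k in range(kmax)]
    poly = [0] * (kmax + 2)
    for k, c in enumerate(series):
        poly[k] += c          # contribution of the term "1"
        poly[k + 1] -= 2 * c  # contribution of "-2t"
        poly[k + 2] += c      # contribution of "t^2"
    return poly[:kmax]        # coefficients below the truncation edge are exact

# S(sigma, t) = 1 + t: the single crepant exceptional divisor of the minimal
# resolution of the A_1 surface singularity contributes the extra "t".
coeffs = S_A1()
assert coeffs[:2] == [1, 1] and all(c == 0 for c in coeffs[2:])
```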
\;\;\;\; \;\;\;\; \,\mbox{\rm and} \;\;\;\; \;\;\;\; D_J^{\circ} := D_J \setminus \bigcup_{ j \in\, I \setminus J} D_j.$$ We define an algebraic function $E_{\rm st}(X; u,v)$ in two variables $u$ and $v$ as follows: $$E_{\rm st}(X; u,v) := \sum_{J \subset I} E(D_J^{\circ}; u,v) \prod_{j \in J} \frac{uv-1}{(uv)^{a_j +1} -1}$$ (the product $\prod_{j \in J}$ is taken to be $1$ if $J = \emptyset$). We call $E_{\rm st}(X; u,v)$ [*the stringy $E$-function of*]{} $X$. If $E_{\rm st}(X; u,v)$ is a polynomial, we define the stringy Hodge numbers the same way as Definition \[d:bd\] does. It is not obvious at all that the above definition is independent of the choice of the resolution. The original proof of Batyrev uses motivic integration over the spaces of arcs to relate the $E$-functions obtained via different resolutions. Since the work of D. Abramovich, K. Karu, K. Matsuki, and J. Włodarczyk [@AKMW], it is now possible to check the independence from the resolution by looking at the case of a single blowup with a smooth center compatible with the normal crossing condition. \[strataE\] Let $X$ be a disjoint union of strata $X_i$, which are locally closed in the Zariski topology, and let $\rho$ be a resolution as in Definition \[d:bcangor\]. For each $X_i$ consider $$E_{\rm st}(X_i\subseteq X; u,v) := \sum_{J \subset I} E(D_J^{\circ}\cap \rho^{-1}(X_i); u,v) \prod_{j \in J} \frac{uv-1}{(uv)^{a_j +1} -1}.$$ Then this $E$-function is independent of the choice of the resolution $Y$. The $E$-function of $X$ decomposes as $$E_{\rm st}(X;u,v) = \sum_i E_{\rm st}(X_i\subseteq X; u,v).$$ [*Proof.*]{} Each resolution of $X$ induces a resolution of the complement of $\bar{X_i}$. This shows that for each $X_i$ the sum $$\sum_{j,X_j\subseteq \bar{X_i}} E_{\rm st}(X_j\subseteq X; u,v)$$ is independent of the choice of the resolution and is thus well-defined. One then uses induction on the dimension of $X_i$. The last statement is clear. 
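The weights in this definition can be checked against Definition \[d:bd\] in the simplest case (our illustrative example, not a computation from the paper): for the $A_1$ surface singularity the unique exceptional divisor has discrepancy $0$, and the stringy and string-theoretic computations agree.

```python
from fractions import Fraction

def stringy_factor(a, t):
    """The weight (t - 1)/(t^(a+1) - 1), t = uv, attached in Definition
    [d:bcangor] to an exceptional divisor of discrepancy a."""
    return Fraction(t - 1, t ** (a + 1) - 1)

# Illustrative consistency check: for C^2/{±1}, the minimal resolution has
# one exceptional P^1 with discrepancy a = 0 and rho^{-1}(0) = P^1, so
#   E_st({0} subset U_sigma) = E(P^1; u, v) * factor = (1 + uv) * 1,
# which equals S(sigma, uv) = 1 + t at t = uv, as in Theorem 4.3.
for t in (2, 3, 5, 7):
    assert stringy_factor(0, t) == 1           # a crepant divisor weighs 1
    assert (1 + t) * stringy_factor(0, t) == 1 + t
```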
It is a delicate question which data are really necessary to calculate $E_{\rm st}(X_i\subseteq X; u,v)$. It is clear that the knowledge of a Zariski open set of $X$ containing $X_i$ is enough. However, it is not clear whether it is enough to know an analytic neighborhood of $X_i$. We will use the above lemma to show that the string-theoretic Hodge numbers and the stringy Hodge numbers coincide in a wide class of examples. \[BDvsB\] Let $X=\bigcup_i X_i$ be a stratified algebraic variety with at worst Gorenstein toroidal singularities as in Definition \[d:bd\]. Assume in addition that for each $i$ there is a desingularization $Y$ of $X$ whose restriction to the preimage of $X_i$ is a locally trivial fibration in the Zariski topology. Moreover, for a point $x\in X_i$ the preimage in $Y$ of an analytic neighborhood of $x$ is complex-analytically isomorphic to the preimage of a neighborhood of $\{0\}$ in $U_{\sigma_i}$ under some resolution of singularities of $U_{\sigma_i}$, times a complex disc, so that the isomorphism is compatible with the resolution morphisms. Then $$E_{\rm st}^{\rm BD}(X;u,v) = E_{\rm st}(X;u,v).$$ [*Proof.*]{} Since $E$-polynomials are multiplicative for Zariski locally trivial fibrations (see [@dk]), the above assumptions on the singularities show that $$E_{\rm st}(X_i\subseteq X; u,v) = E(X_i;u,v)E_{\rm st}(\{0\}\subseteq U_{\sigma_i};u,v).$$ We have also used here the fact that, since the fibers are projective, the analytic isomorphism implies the algebraic one, by GAGA. By the second statement of Lemma \[strataE\], it is enough to show that $$E_{\rm st}(\{0\}\subseteq U_{\sigma_i};u,v) = S(\sigma_i,uv).$$ This follows from the proof of Theorem 4.3 in [@Batyrev.cangor], where the products $$\prod_{j \in J} \frac{uv-1}{(uv)^{a_j +1} -1}$$ are interpreted as geometric series and then as sums of $t^{\deg(n)}$ over points $n$ of $\sigma_i$. 
String-theoretic and stringy Hodge numbers coincide for nondegenerate hypersurfaces (complete intersections) in Gorenstein toric varieties. Indeed, in this case, the toric desingularizations of the ambient toric variety induce the desingularizations with the required properties. We will keep this corollary in mind and from now on will silently transfer all the results on string-theoretic Hodge numbers of hypersurfaces and complete intersections in toric varieties in [@bb], [@bd] to their stringy counterparts. An example of a variety where string-theoretic and stringy Hodge numbers [*differ*]{} is provided by the quotient of $\CC^2\times E$ by the finite group of order six generated by $$r_1:(x,y;z)\mapsto (x\ee^{2\pi\ii/3},y\ee^{-2\pi\ii/3};z),~ r_2:(x,y;z)\mapsto(y,x;z+p)$$ where $(x,y)$ are coordinates on $\CC^2$, $z$ is the uniformizing coordinate on the elliptic curve $E$ and $p$ is a point of order two on $E$. In its natural stratification, the quotient has a stratum of $A_2$ singularities, so that going around a loop in the stratum results in the non-trivial automorphism of the singularity. We expect that the stringy Hodge numbers of algebraic varieties with abelian quotient singularities coincide with the dimensions of their orbifold cohomology [@cr]. This is not going to be true for the string-theoretic Hodge numbers. Also, the latter numbers are not preserved by partial crepant resolutions, as required by physics; see the above example. As a result, we believe that the stringy Hodge numbers are the truly interesting invariant, and that the string-theoretic numbers are a now obsolete first attempt to define them. Mirror symmetry construction of Batyrev {#section.mirr} ======================================= In this section, we review the mirror symmetry construction from [@b2]. 
We can describe it starting with a semiample nondegenerate (transversal to the torus orbits) anticanonical hypersurface $X$ in a complete simplicial toric variety ${{{\PP}_{\Sigma}}}$. Such a hypersurface is Calabi-Yau. The semiampleness property produces a contraction map, whose characterizing properties are given by the following statement. \[p:sem\] [@m1] Let $\PP_\Sigma$ be a complete toric variety with a big and nef divisor class $[X]\in A_{d-1}({{{\PP}_{\Sigma}}})$. Then, there exists a unique complete toric variety ${{\PP}_{\Sigma_X}}$ with a toric birational map $\pi:{{{\PP}_{\Sigma}}}@>>>{{\PP}_{\Sigma_X}}$, such that $\Sigma$ is a subdivision of $\Sigma_X$, $\pi_*[X]$ is ample and $\pi^*\pi_*[X]=[X]$. Moreover, if $X=\sum_{\rho}a_\rho D_\rho$ is torus-invariant, then $\Sigma_X$ is the normal fan of the associated polytope $$\Delta_X=\{m\in M:\langle m,e_\rho\rangle\geq-a_\rho \text{ for all } \rho\}\subset M_{\Bbb R}.$$ Our notation is standard and taken from [@bc; @c2]: $M$ is a lattice of rank $d$; $N=\text{Hom}(M,{\Bbb Z})$ is the dual lattice; $M_{\Bbb R}$ and $N_{\Bbb R}$ are the $\Bbb R$-scalar extensions of $M$ and $N$; $\Sigma$ is a finite rational polyhedral fan in $N_{\Bbb R}$; ${\PP}_{\Sigma}$ is a $d$-dimensional toric variety associated with $\Sigma$; $\Sigma(k)$ is the set of all $k$-dimensional cones in $\Sigma$; $e_\rho$ is the minimal integral generator of the $1$-dimensional cone $\rho\in\Sigma$ corresponding to a torus invariant irreducible divisor $D_\rho$. Applying Proposition \[p:sem\] to the semiample Calabi-Yau hypersurface, we get that the push-forward $\pi_*[X]$ is anticanonical and ample, whence, by Lemma 3.5.2 in [@ck], the toric variety ${{\PP}_{\Sigma_X}}$ is Fano, associated with the polytope $\Delta\subset M_{\Bbb R}$ of the anticanonical divisor $\sum_{\rho} D_{\rho}$ on ${{{\PP}_{\Sigma}}}$. 
Then, [@m1 Proposition 2.4] shows that the image $Y:=\pi(X)$ is an ample nondegenerate hypersurface in ${{\PP}_{\Sigma_X}}={{\PP}}_\Delta$. The fact that ${{\PP}}_\Delta$ is Fano means by Proposition 3.5.5 in [@ck] that the polytope $\Delta$ is reflexive, i.e., its dual $$\Delta^*=\{n\in N_{\Bbb R}: \langle m,n\rangle\ge-1\text{ for } m\in\Delta\}$$ has all its vertices at lattice points in $N$, and the only lattice point in the interior of $\Delta^*$ is the origin $0$. Now, consider the toric variety ${{\PP}}_{\Delta^*}$ associated to the polytope $\Delta^*$ (the minimal integral generators of its fan are precisely the vertices of $\Delta$). Theorem 4.1.9 in [@b2] says that an anticanonical nondegenerate hypersurface $Y^*\subset{{\PP}}_{\Delta^*}$ is a Calabi-Yau variety with canonical singularities. The Calabi-Yau hypersurface $Y^*$ is expected to be a mirror of $Y$. In particular, they pass the topological mirror symmetry test for the stringy Hodge numbers: $$h^{p,q}_{\rm st}(Y)=h_{\rm st}^{d-1-p,q}(Y^*), 0\le p,q\le d-1,$$ by [@bb Theorem 4.15]. Moreover, all crepant partial resolutions $X$ of $Y$ have the same stringy Hodge numbers: $$h^{p,q}_{\rm st}(X)=h^{p,q}_{\rm st}(Y).$$ Physicists predict that such resolutions of Calabi-Yau varieties have indistinguishable physical theories. Hence, all crepant partial resolutions of $Y$ may be called the mirrors of crepant partial resolutions of $Y^*$. To connect this to the classical formulation of mirror symmetry, one needs to note that if there exist crepant smooth resolutions $X$ and $X^*$ of $Y$ and $Y^*$, respectively, then $$h^{p,q}(X)=h^{d-1-p,q}(X^*), 0\le p,q\le d-1,$$ since the stringy Hodge numbers coincide with the usual ones for smooth Calabi-Yau varieties. The equality of Hodge numbers is expected to extend to an isomorphism ([*mirror map*]{}) of the corresponding Hodge spaces, which is compatible with the chiral ring products of A and B models (see [@ck] for more details). 
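The classical example of this construction (recalled here only for illustration) is the quintic threefold: take ${{{\PP}_{\Sigma}}}={{\PP}}^4$, so that $\Delta$ is the reflexive simplex of the anticanonical class and $$\Delta^*={\rm conv}(e_1,\dots,e_4,-e_1-\cdots-e_4)\subset N_{\Bbb R}.$$ Then $Y\subset{{\PP}}_\Delta={{\PP}}^4$ is a quintic hypersurface, while an anticanonical nondegenerate hypersurface $Y^*\subset{{\PP}}_{\Delta^*}$ is isomorphic to a quotient of a quintic by $({\Bbb Z}/5)^3$. For crepant smooth resolutions $X$ and $X^*$ one gets the familiar exchange $h^{1,1}(X)=1=h^{2,1}(X^*)$ and $h^{2,1}(X)=101=h^{1,1}(X^*)$.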
String cohomology construction for Calabi-Yau hypersurfaces {#section.anvar} =========================================================== In this section, we show how the description of cohomology of semiample hypersurfaces in [@m3] leads to a construction of the string cohomology space of Calabi-Yau hypersurfaces. We first review the building blocks participating in the description of the cohomology in [@m3], and then explain how these building blocks should interchange under mirror symmetry for a pair of smooth Calabi-Yau hypersurfaces in Batyrev’s mirror symmetry construction. Mirror symmetry and the fact that the dimension of the string cohomology is the same for all partial crepant resolutions of ample Calabi-Yau hypersurfaces lead us to a conjectural description of string cohomology for all semiample Calabi-Yau hypersurfaces. In the next three sections, we will prove that this space has the dimension prescribed by [@bd]. The cohomology of a semiample nondegenerate hypersurface $X$ in a complete simplicial toric variety ${{{\PP}_{\Sigma}}}$ splits into the [*toric*]{} and [*residue*]{} parts: $$H^*(X)=H^*_{\rm toric}(X)\oplus H^*_{\rm res}(X),$$ where the first part is the image of the cohomology of the ambient space, while the second is the residue map image of the cohomology of the complement to the hypersurface. By [@m2 Theorem 5.1], $$\label{e:ann} H^*_{\rm toric}(X)\cong H^*({{{\PP}_{\Sigma}}})/Ann(X)$$ where $Ann(X)$ is the annihilator of the class $[X]\in H^2({{{\PP}_{\Sigma}}})$. The cohomology of ${{{\PP}_{\Sigma}}}$ is isomorphic to $${\Bbb C}[D_\rho:\rho\in\Sigma(1)]/(P(\Sigma)+SR(\Sigma)),$$ where $$P(\Sigma)=\biggl\langle \sum_{\rho\in\Sigma(1)}\langle m,e_\rho\rangle D_\rho: m\in M\biggr\rangle$$ is the ideal of linear relations among the divisors, and $$SR(\Sigma)=\bigl\langle D_{\rho_1}\cdots D_{\rho_k}:\{e_{\rho_1},\dots,e_{\rho_k}\} \not\subset\sigma \text{ for all }\sigma\in\Sigma\bigr\rangle$$ is the Stanley-Reisner ideal. 
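For example (a standard computation included only for the reader's convenience), for ${{{\PP}_{\Sigma}}}={{\PP}}^2$ with rays generated by $e_1$, $e_2$ and $-e_1-e_2$, the ideal of linear relations is $P(\Sigma)=\langle D_1-D_3,\,D_2-D_3\rangle$, while $SR(\Sigma)=\langle D_1D_2D_3\rangle$, since any two of the three generators span a cone of $\Sigma$ but all three do not. The quotient is $${\Bbb C}[D_1,D_2,D_3]/(P(\Sigma)+SR(\Sigma))\cong{\Bbb C}[D]/\langle D^3\rangle\cong H^*({{\PP}}^2).$$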
Hence, $H^{*}_{\rm toric}(X)$ is isomorphic to the bigraded ring $$T(X)_{*,*}:={\Bbb C}[D_\rho:\rho\in\Sigma(1)]/I,$$ where $I=(P(\Sigma)+SR(\Sigma)):[X]$ is the ideal quotient, and $D_\rho$ have the degree $(1,1)$. The following modules over the ring $T(X)$ have appeared in the description of cohomology of semiample hypersurfaces: Given a big and nef class $[X]\in A_{d-1}({{{\PP}_{\Sigma}}})$ and $\sigma\in\Sigma_X$, let $$U^\sigma(X)=\biggl\langle \prod_{\rho\subset\gamma\in\Sigma}D_\rho: {{\rm int}}\gamma\subset{{\rm int}}\sigma\biggr\rangle$$ be the bigraded ideal in ${\Bbb C}[D_\rho:\rho\in\Sigma(1)]$, where $D_\rho$ have the degree (1,1). Define the bigraded space $$T^\sigma(X)_{*,*}=U^\sigma(X)_{*,*}/I^\sigma,$$ where $$I^\sigma=\{u\in U^\sigma(X)_{*,*}:\,uvX^{d-\dim\sigma} \in(P(\Sigma)+SR(\Sigma))\text{ for }v\in U^\sigma(X)_{\dim\sigma-*,\dim\sigma-*}\}.$$ Next, recall from [@c] that [*any*]{} toric variety ${{{\PP}_{\Sigma}}}$ has a homogeneous coordinate ring $$S({{{\PP}_{\Sigma}}})={\Bbb C}[x_\rho:\rho\in\Sigma(1)]$$ with variables $x_\rho$ corresponding to the irreducible torus invariant divisors $D_\rho$. This ring is graded by the Chow group $A_{d-1}({{{\PP}_{\Sigma}}})$, by setting $\deg(\prod_{\rho} x_\rho^{a_\rho})=[\sum_{\rho} a_\rho D_\rho]$. For a Weil divisor $D$ on ${{{\PP}_{\Sigma}}}$, there is an isomorphism $H^0({{{\PP}_{\Sigma}}}, O_{{{\PP}_{\Sigma}}}(D))\cong S({{{\PP}_{\Sigma}}})_\alpha$, where $\alpha=[D]\in A_{d-1}({{{\PP}_{\Sigma}}})$. If $D$ is torus invariant, the monomials in $S({{{\PP}_{\Sigma}}})_\alpha$ correspond to the lattice points of the associated polyhedron $\Delta_D$. In [@bc], the following rings have been used to describe the residue part of cohomology of ample hypersurfaces in complete simplicial toric varieties: \[d:r1\] [@bc] Given $f\in S({{{\PP}_{\Sigma}}})_\beta$, set $J_0(f):=\langle x_\rho\partial f/\partial x_\rho:\rho\in\Sigma(1)\rangle$ and $J_1(f):=J_0(f):x_1\cdots x_n$. 
Then define the rings $R_0(f)=S({{{\PP}_{\Sigma}}})/J_0(f)$ and $R_1(f)=S({{{\PP}_{\Sigma}}})/J_1(f)$, which are graded by the Chow group $A_{d-1}({{{\PP}_{\Sigma}}})$. In [@m3 Definition 6.5], similar rings were introduced to describe the residue part of cohomology of semiample hypersurfaces: \[d:rs1\] [@m3] Given $f\in S({{{\PP}_{\Sigma}}})_\beta$ of big and nef degree $\beta=[D]\in A_{d-1}({{{\PP}_{\Sigma}}})$ and $\sigma\in\Sigma_D$, let $J^\sigma_0(f)$ be the ideal in $S({{{\PP}_{\Sigma}}})$ generated by $x_\rho\partial f/\partial x_\rho$, $\rho\in\Sigma(1)$ and all $x_{\rho'}$ such that $\rho'\subset\sigma$, and let $J^\sigma_1(f)$ be the ideal quotient $J^\sigma_0(f):(\prod_{\rho\not\subset\sigma}x_\rho)$. Then we get the quotient rings $R_0^\sigma(f)=S({{{\PP}_{\Sigma}}})/J_0^\sigma(f)$ and $R_1^\sigma(f)=S({{{\PP}_{\Sigma}}})/J_1^\sigma(f)$ graded by the Chow group $A_{d-1}({{{\PP}_{\Sigma}}})$. As a special case of [@m3 Theorem 2.11], we have: \[t:main\] Let $X$ be an anticanonical semiample nondegenerate hypersurface defined by $f\in S_\beta$ in a complete simplicial toric variety ${{{\PP}_{\Sigma}}}$. Then there is a natural isomorphism $$\bigoplus_{p,q}H^{p,q}(X)\cong\bigoplus_{p,q} T(X)_{p,q}\oplus\biggl(\bigoplus_{\sigma\in\Sigma_X} T^\sigma(X)_{s,s} \otimes R^\sigma_1(f)_{(q-s)\beta+\beta_1^\sigma}\biggr),$$ where $s=(p+q-d+\dim\sigma+1)/2$ and $\beta_1^\sigma= \deg(\prod_{\rho_k\subset\sigma}x_k)$. By the next statement, we can immediately see that all the building blocks $R^\sigma_1(f)_{(q-s)\beta+\beta_1^\sigma}$ of the cohomology of partial resolutions in Theorem \[t:main\] are independent of the resolution and intrinsic to an ample Calabi-Yau hypersurface: \[p:iso\] [@m3] Let $X$ be a big and nef nondegenerate hypersurface defined by $f\in S_\beta$ in a complete toric variety ${{{\PP}_{\Sigma}}}$ with the associated contraction map $\pi:{{{\PP}_{\Sigma}}}@>>>{{\PP}_{\Sigma_X}}$. 
If $f_\sigma\in S(V(\sigma))_{\beta^\sigma}$ denotes the polynomial defining the hypersurface $\pi(X)\cap V(\sigma)$ in the toric variety $V(\sigma)\subset{{\PP}_{\Sigma_X}}$ corresponding to $\sigma\in\Sigma_X$, then, there is a natural isomorphism induced by the pull-back: $$H^{d(\sigma)-*,*-1}H^{d(\sigma)-1}(\pi(X)\cap{{\TT}_\sigma})\cong R_1(f_{\sigma})_{*\beta^{\sigma}-\beta_0^{\sigma}}{\cong} R^\sigma_1(f)_{*\beta-\beta_0+\beta_1^\sigma},$$ where $d(\sigma)=d-\dim\sigma$, ${{\TT}_\sigma}\subset V(\sigma)$ is the maximal torus, and $\beta_0$ and $\beta_0^{\sigma}$ denote the anticanonical degrees on ${{{\PP}_{\Sigma}}}$ and $V(\sigma)$, respectively. Given a mirror pair $(X,X^*)$ of smooth Calabi-Yau hypersurfaces in Batyrev’s construction, we expect that, for a pair of cones $\sigma$ and $\sigma^*$ over the dual faces of the reflexive polytopes $\Delta^*$ and $\Delta$, $T^{\sigma}(X)_{s,s}$ with $s=(p+q-d+\dim\sigma+1)/2$, in $H^{p,q}(X)$ interchanges, by the mirror map (the isomorphism which maps the quantum cohomology of one Calabi-Yau hypersurface to the B-model chiral ring of the other one), with $R^{\sigma^*}_1(g)_{(p+q-\dim\sigma^*)\beta^*/2+\beta_1^{\sigma^*}}$ in $H^{d-1-p,q}(X^*)$ (note that $\dim\sigma^*=d-\dim\sigma+1$), where $g\in S({{\PP}}_{\Sigma^*})_{\beta^*}$ determines $X^*$. For the 0-dimensional cones $\sigma$ and $\sigma^*$, the interchange goes between the [*polynomial part*]{} $R_1(g)_{*\beta^*}$ of one smooth Calabi-Yau hypersurface and the toric part of the cohomology of the other one. This correspondence was already confirmed by the construction of the generalized monomial-divisor mirror map in [@m3]. On the other hand, one can deduce that the dimensions of these spaces coincide for the pair of 3-dimensional smooth Calabi-Yau hypersurfaces, by using Remark 5.3 in [@m1]. The correspondence between the toric and polynomial parts was discussed in [@ck]. 
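Returning to Definition \[d:r1\], a minimal example (a routine computation, stated here only as an illustration): for the Fermat cubic $f=x_0^3+x_1^3+x_2^3$ on ${{\PP}}^2$ we get $J_0(f)=\langle 3x_0^3,3x_1^3,3x_2^3\rangle$, and a monomial lies in $J_1(f)=J_0(f):x_0x_1x_2$ exactly when one of its exponents is at least $2$, so that $$R_1(f)={\Bbb C}[x_0,x_1,x_2]/\langle x_0^2,x_1^2,x_2^2\rangle.$$ Its graded pieces in degrees $0$ and $3$ are spanned by $1$ and $x_0x_1x_2$, respectively, recovering via the residue map the Hodge numbers $h^{1,0}=h^{0,1}=1$ of the elliptic curve $\{f=0\}$.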
Now, let us turn our attention to a mirror pair of semiample singular Calabi-Yau hypersurfaces $Y$ and $Y^*$. We know that their string cohomology should have the same dimension as the usual cohomology of possible crepant smooth resolutions $X$ and $X^*$, respectively. Moreover, the A-model and B-model chiral rings on the string cohomology should be isomorphic for $X$ and $X^*$, respectively. We also know that the polynomial $g$ represents the complex structure of the hypersurface $Y^*$ and its resolution $X^*$, and, by mirror symmetry, $g$ should correspond to the complexified Kähler class of the mirror Calabi-Yau hypersurface. Therefore, based on the mirror correspondence of smooth Calabi-Yau hypersurfaces, we make the following prediction for the small quantum ring presentation on the string cohomology space: $$\label{e:conj} QH_{\rm st}^{p,q}(Y)\cong \hspace{-0.05in} \bigoplus_{(\sigma,\sigma^*)} R_1(\omega_{\sigma^*})_{(p+q-\dim\sigma^*+2)\beta^{\sigma^*}/2-\beta_0^{\sigma^*}}\otimes R_1(f_{\sigma})_{(q-p+d-\dim\sigma+1)\beta^\sigma/2-\beta_0^{\sigma}},$$ where the sum is over all pairs of cones $\sigma$ and $\sigma^*$ (including the 0-dimensional cones) over the dual faces of the reflexive polytopes, and where $\omega_{\sigma^*}\in S(V(\sigma^*))_{\beta^{\sigma^*}}$ is a formal restriction of $\omega\in S({{\PP}}_{\Delta^*})_{\beta^*}$, which should be related to the complexified Kähler class of the mirror (we will discuss this in Section \[s:vs\]). This construction can be rewritten in simpler terms, which will help us to give a conjectural description of the usual string cohomology space for all semiample Calabi-Yau hypersurfaces. First, recall Batyrev’s presentation of the toric variety ${{\PP}}_\Delta$ for an [*arbitrary*]{} polytope $\Delta$ in $M$ (see [@b1], [@c2]). Consider the [*Gorenstein*]{} cone $K$ over $\Delta\times\{1\}\subset M\oplus{\Bbb Z}$. 
Let $S_\Delta$ be the subring of ${\Bbb C}[t_0,t_1^{\pm1},\dots,t_d^{\pm1}]$ spanned over $\Bbb C$ by all monomials of the form $t_0^k t^m=t_0^kt_1^{m_1}\cdots t_d^{m_d}$ where $k\ge0$ and $m\in k\Delta$. This ring is graded by the assignment $\deg(t_0^k t^m)=k$. Since the vector $(m,k)\in K$ if and only if $m\in k\Delta$, the ring $S_\Delta$ is isomorphic to the semigroup algebra ${\Bbb C}[K]$. The toric variety ${{\PP}}_\Delta$ can be represented as $${\rm Proj}(S_\Delta)={\rm Proj}({\Bbb C}[K]).$$ The ring $S_\Delta$ has a nice connection to the homogeneous coordinate ring $S({{\PP}}_\Delta)={\Bbb C}[x_\rho:\rho\in\Sigma_\Delta(1)]$ of the toric variety ${{\PP}}_\Delta$, corresponding to a fan $\Sigma_\Delta$. If $\beta\in A_{d-1}({{\PP}}_\Delta)$ is the class of the ample divisor $\sum_{\rho\in\Sigma_\Delta(1)} b_\rho D_\rho$ giving rise to the polytope $\Delta$, then there is a natural isomorphism of graded rings $$\label{e:isom} {\Bbb C}[K]\cong S_\Delta\cong\bigoplus_{k=0}^\infty S({{\PP}}_\Delta)_{k\beta},$$ sending $(m,k)\in\CC[K]_k$ to $t_0^k t^m$ and $\prod_\rho x_\rho^{k b_\rho+\langle m,e_\rho\rangle}$, where $e_\rho$ is the minimal integral generator of the ray $\rho$. Now, given $f\in S({{\PP}}_\Delta)_{\beta}$, we get the ring $R_1(f)$. The polynomial $f=\sum_{m\in\Delta}f(m) x_\rho^{b_\rho+\langle m,e_\rho\rangle}$, where $f(m)$ are the coefficients, corresponds by the isomorphisms (\[e:isom\]) to $\sum_{m\in\Delta}f(m)t_0t^m\in (S_\Delta)_1$ and $\sum_{m\in\Delta}f(m)[m,1]\in{\Bbb C}[K]_1$ (the brackets \[[ ]{}\] are used to distinguish the lattice points from the vectors over $\CC$), which we also denote by $f$. 
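In the simplest case $\Delta=[0,1]\subset M_{\Bbb R}={\Bbb R}$ (a toy example included for orientation), $K$ is the cone in ${\Bbb R}^2$ generated by $(0,1)$ and $(1,1)$, and $S_\Delta$ is spanned by the monomials $t_0^kt_1^m$ with $0\le m\le k$, so that $S_\Delta={\Bbb C}[t_0,t_0t_1]$ is a polynomial ring on two generators of degree $1$ and $${\rm Proj}(S_\Delta)={{\PP}}^1={{\PP}}_\Delta.$$ Under the isomorphism (\[e:isom\]), the degree-$k$ piece corresponds to $S({{\PP}}^1)_{k\beta}$ for the ample class $\beta$ of degree one, whose dimension $k+1$ is the number of lattice points of $k\Delta$.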
By the proof of [@bc Theorem 11.5], we have that $$(S({{\PP}}_\Delta)/J_0(f))_{k\beta}\cong (S_\Delta/\langle t_i\partial f/\partial t_i:\, i=0,\dots,d\rangle)_k \cong R_0(f,K)_k,$$ where $R_0(f,K)$ is the quotient of ${\Bbb C}[K]$ by the ideal generated by all “logarithmic derivatives” of $f$: $$\sum_{m\in\Delta}((m,1)\cdot n) f(m)[m,1]$$ for $n\in N\oplus{\Bbb Z}$. The isomorphisms (\[e:isom\]) induce the bijections $$S({{\PP}}_\Delta)_{k\beta-\beta_0}@>\prod_\rho x_\rho>>\langle \prod_\rho x_\rho \rangle_{k\beta}\cong (I_\Delta^{(1)})_k \cong {\Bbb C}[K^\circ]_k$$ ($\beta_0=\deg(\prod_\rho x_\rho)$), where $I_\Delta^{(1)}\subset S_\Delta$ is the ideal spanned by all monomials $t_0^k t^m$ such that $m$ is in the interior of $k\Delta$, and ${\Bbb C}[K^\circ]\subset{\Bbb C}[K]$ is the ideal spanned by all lattice points in the relative interior of $K$. Since the space $R_1(f)_{k\beta-\beta_0}$ is isomorphic to the image of $\langle\prod_\rho x_\rho \rangle_{k\beta}$ in $(S({{\PP}}_\Delta)/J_0(f))_{k\beta}$, $$R_1(f)_{k\beta-\beta_0}\cong R_1(f,K)_k,$$ where $R_1(f,K)$ is the image of ${\Bbb C}[K^\circ]$ in the graded ring $R_0(f,K)$. The above discussion applies well to all faces $\Gamma$ in $\Delta$. In particular, if the toric variety $V(\sigma)\subset{{\PP}}_\Delta$ corresponds to $\Gamma$, and $\beta^\sigma\in A_{d-\dim\sigma-1}(V(\sigma))$ is the restriction of the ample class $\beta$, then $$S(V(\sigma))_{*\beta^\sigma}\cong {\Bbb C}[C],$$ where $C$ is the Gorenstein cone over the polytope $\Gamma\times\{1\}$. This induces an isomorphism $$R_1(f_\sigma)_{*\beta^\sigma-\beta_0^\sigma}\cong R_1(f_C,C),$$ where $f_C=\sum_{m\in\Gamma} f(m)[m,1]$ in ${\Bbb C}[C]_1$ is the projection of $f$ to the cone $C$. 
Now, we can restate our conjecture (\[e:conj\]) in terms of Gorenstein cones: $$\bigoplus_{p,q}QH_{\rm st}^{p,q}(Y)\cong\bigoplus_{\begin{Sb} p,q\\ (C,C^*)\end{Sb}} R_1(\omega_{C^*},C^*)_{(p+q-d+\dim C^*+1)/2}\otimes R_1(f_{C},C)_{(q-p+\dim C)/2},$$ where the sum is over all pairs $(C,C^*)$ of dual faces of the reflexive Gorenstein cones $K$ and $K^*$. This formula is already supported by Theorem 8.2 in [@bd], which for ample Calabi-Yau hypersurfaces in weighted projective spaces gives a corresponding decomposition of the stringy Hodge numbers (see Remark \[r:corrw\] in the next section). A generalization of [@bd Theorem 8.2] will be proved in Section \[section.bbo\], justifying the above conjecture in the case of ample Calabi-Yau hypersurfaces in Fano toric varieties. It is known that for smooth Calabi-Yau hypersurfaces the string cohomology, which should be the limit of the quantum cohomology ring, coincides with the usual cohomology. We also know that the quantum cohomology spaces should be isomorphic for the ample Calabi-Yau hypersurface $Y$ and its crepant resolution $X$. Therefore, it makes sense to compare the above description of $QH_{\rm st}^{p,q}(Y)$ with the description of the cohomology of semiample Calabi-Yau hypersurfaces $X$ in Theorem \[t:main\]. We can see that the right components in the tensor products coincide, by Proposition \[p:iso\] and the definition of $R_1(f_{C},C)$. On the other hand, the left components in $QH_{\rm st}^{p,q}(Y)$ for the ample Calabi-Yau hypersurface $Y$ do not depend on a resolution, while the left components $T^\sigma(X)$ in $H^{p,q}(X)$ for the resolution $X$ depend on the Stanley-Reisner ideal $SR(\Sigma)$. This leads us to the following definitions: \[d:rings\] Let $C$ be a Gorenstein cone in a lattice $L$, subdivided by a fan $\Sigma$, and let ${\Bbb C}[C]$ and ${\Bbb C}[C^\circ]$, where $C^\circ$ is the relative interior of $C$, be the semigroup rings. 
Define “deformed” ring structures $\CC[C]^\Sigma$ and $\CC[C^\circ]^\Sigma$ on ${\Bbb C}[C]$ and ${\Bbb C}[C^\circ]$, respectively, by the rule: $[m_1][m_2]=[m_1+m_2]$ if $m_1,m_2\subset\sigma$ for some $\sigma\in\Sigma$, and $[m_1][m_2]=0$, otherwise. Given $g=\sum_{m\in C,\deg m=1} g(m)[m]$, where $g(m)$ are the coefficients, let $$R_0(g,C)^\Sigma=\CC[C]^\Sigma/Z\cdot\CC[C]^\Sigma$$ be the graded ring over the graded module $$R_0(g,C^\circ)^\Sigma=\CC[C^\circ]^\Sigma/Z\cdot\CC[C^\circ]^\Sigma,$$ where $Z=\{\sum_{m\in C,\deg m=1} (m\cdot n) g(m)[m]:\,n\in {\rm Hom}(L,{\Bbb Z})\}$. Then define $R_1(g,C)^\Sigma$ as the image of the natural homomorphism $R_0(g,C^\circ)^\Sigma@>>>R_0(g,C)^\Sigma$. In the above definition, note that if $\Sigma$ is a trivial subdivision, we recover the spaces $R_0(g,C)$ and $R_1(g,C)$ introduced earlier. Also, we should mention that the Stanley-Reisner ring of the fan $\Sigma$ can be naturally embedded into the “deformed” ring $\CC[C]^\Sigma$, and this map is an isomorphism when the fan $\Sigma$ is smooth. Here is our conjecture about the string cohomology space of semiample Calabi-Yau hypersurfaces in a complete toric variety. \[semiampleconj\] Let $X\subset{{{\PP}_{\Sigma}}}$ be a semiample anticanonical nondegenerate hypersurface defined by $f\in H^0({{{\PP}_{\Sigma}}},{\cal O}_{{{\PP}_{\Sigma}}}(X))\cong\CC[K]_1$, and let $\omega$ be a generic element in $\CC[K^*]_1$, where $K^*$ is the reflexive Gorenstein cone dual to the cone $K$ over the reflexive polytope $\Delta$ associated to $X$. Then there is a natural isomorphism: $$H^{p,q}_{\st}(X)\cong \bigoplus_{C\subseteq K} R_1(\omega_{C^*},C^*)^\Sigma_{(p+q-d+\dim C^*+1)/2} \otimes R_1(f_C,C)_{(q-p+\dim C)/2},$$ where $C^*\subseteq K^*$ is a face dual to $C$, and where $f_C$, $\omega_{C^*}$ denote the projections of $f$ and $\omega$ to the respective cones $C$ and $C^*$. (Here, the superscript $\Sigma$ denotes the subdivision of $K^*$ induced by the fan $\Sigma$.) 
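To see the deformed product of Definition \[d:rings\] in the smallest possible case (a toy example, not used elsewhere): let $C\subset{\Bbb R}^2$ be the Gorenstein cone over the segment $[0,2]\times\{1\}$, generated by $(0,1)$ and $(2,1)$, and let $\Sigma$ subdivide $C$ along the ray through $(1,1)$. In $\CC[C]$ one has $[(0,1)][(2,1)]=[(2,2)]$, but in $\CC[C]^\Sigma$ this product is $0$, since no cone of $\Sigma$ contains both $(0,1)$ and $(2,1)$; products of elements of a common cone, such as $[(0,1)][(1,1)]=[(1,2)]$, are unchanged.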
Since the dimension of the string cohomology for all crepant partial resolutions should remain the same and should coincide with the dimension of the quantum string cohomology space, we expect that $$\label{e:expe} {\dim}R_1(\omega_{C^*},C^*)_{\_}^\Sigma= {\dim}R_1(\omega_{C^*},C^*)_{\_},$$ which will be shown in Section \[section.brel\] for a projective subdivision $\Sigma$. Conjecture \[semiampleconj\] will be confirmed by the corresponding decomposition of the stringy Hodge numbers in Section \[section.bbo\]. Moreover, in Section \[s:vs\], we will derive the Chen-Ruan orbifold cohomology as a special case of Conjecture \[semiampleconj\] for ample Calabi-Yau hypersurfaces in complete simplicial toric varieties. Hodge-Deligne numbers of affine hypersurfaces {#s:hd} ============================================= Here, we compute the dimensions of the spaces $R_1(g,C)_{\_}$ from the previous section. It follows from Proposition \[p:iso\] that these dimensions are exactly the Hodge-Deligne numbers of the minimal weight space on the middle cohomology of a hypersurface in a torus. An explicit formula in [@dk] and [@bd] for the $E$-polynomial of a nondegenerate affine hypersurface whose Newton polyhedron is a simplex leads us to the answer for the graded dimension of $R_1(g,C)$ when $C$ is a simplicial Gorenstein cone. However, it was very difficult to compute the Hodge-Deligne numbers of an arbitrary nondegenerate affine hypersurface. This was a major technical problem in the proof of mirror symmetry of the stringy Hodge numbers for Calabi-Yau complete intersections in [@bb]. Here, we will present a simple formula for the Hodge-Deligne numbers of a nondegenerate affine hypersurface. 
Before we start computing ${\rm gr.dim.}R_1(g,C)$, let us note that for a nondegenerate $g\in\CC[C]_1$ (i.e., the corresponding hypersurface in ${\rm Proj}(\CC[C])$ is nondegenerate): $${\rm gr.dim.}R_0(g,C)=S(C,t),$$ where the polynomial $S$ is the same as in Definition \[d:bd\] of the stringy Hodge numbers. This was shown in [@b1 Theorem 4.8 and 2.11] (see also [@Bor.locstring]). When the cone $C$ is simplicial, we already know the formula for the graded dimension of $R_1(g,C)$: \[p:simp\] Let $C$ be a simplicial Gorenstein cone, and let $g\in\CC[C]_1$ be nondegenerate. Then $${\rm gr.dim.}R_1(g,C)=\tilde S(C,t)$$ where $\tilde S(C,t)=\sum_{C_1\subseteq C} S(C_1,t) (-1)^{\dim C-\dim C_1}$. The polynomial $\tilde S(C,t)$ was introduced with a slightly different notation in [@bd Definition 8.1] for a lattice simplex. One can check that $\tilde S(C,t)$ in this proposition is equivalent to the one in [@bd Corollary 6.6]. [From]{} the previous section and [@b1 Proposition 9.2], we know that $$R_1(g,C)\cong{{\rm Gr}}_F W_{\dim Z_g}H^{\dim Z_g}(Z_g),$$ where $Z_g$ is the nondegenerate affine hypersurface determined by $g$ in the maximal torus of ${\rm Proj}(\CC[C])$. By [@bd Proposition 8.3], $$E(Z_g;u,v) =\frac{(uv-1)^{\dim C-1}+(-1)^{\dim C}}{uv}+(-1)^{\dim C} \sum_{\begin{Sb} C_1\subseteq C\\ \dim C_1>1\end{Sb}}\frac{u^{\dim C_1}}{uv}\tilde S(C_1,u^{-1}v).$$ Now, note that the coefficients $e^{p,q}(Z_g)$ at the monomials $u^p v^q$ with $p+q=\dim Z_g$ are related to the Hodge-Deligne numbers by the calculations in [@dk]: $$e^{p,q}(Z_g)=(-1)^{\dim C}h^{p,q}(H^{\dim Z_g}(Z_g))+(-1)^{p}\delta_{pq} C_{\dim C-1}^p,$$ where $\delta_{pq}$ is the Kronecker symbol and $C_{\dim C-1}^p$ is the binomial coefficient. Comparing this with the above formula for $E(Z_g;u,v)$, we deduce the result. 
\[r:corrw\] By the above proposition, we can see that [@bd Theorem 8.2] gives a decomposition of the stringy Hodge numbers of ample Calabi-Yau hypersurfaces in weighted projective spaces in correspondence with Conjecture \[semiampleconj\]. Next, we generalize the polynomials $\tilde S(C,t)$ from Proposition \[p:simp\] to nonsimplicial Gorenstein cones in such a way that they would count the graded dimension of $R_1(g,C)$. \[d:spol\] Let $C$ be a Gorenstein cone in a lattice $L$. Then set $$\tilde S(C,t) := \sum_{C_1\subseteq C} S(C_1,t) (-1)^{\dim C-\dim C_1} G([C_1,C],t),$$ where $G$ is a polynomial (from Definition \[Gpoly\] in the Appendix) for the partially ordered set $[C_1,C]$ of the faces of $C$ that contain $C_1$. \[tildepoincare\] It is not hard to show that the polynomial $\tilde S(C,t)$ satisfies the duality $$\tilde S(C,t) = t^{\dim C} \tilde S(C,t^{-1})$$ based on the duality properties of $S$ and the definition of $G$-polynomials. However, the next result and Proposition \[p:iso\] imply this fact. \[p:nonsimp\] Let $C$ be a Gorenstein cone, and let $g\in\CC[C]_1$ be nondegenerate. Then $${\rm gr.dim.}R_1(g,C)=\tilde S(C,t).$$ As in the proof of Proposition \[p:simp\], we consider a nondegenerate affine hypersurface $Z_g$ determined by $g$ in the maximal torus of ${\rm Proj}(\CC[C])$. Then [@bb Theorem 3.18] together with the definition of $S$ gives $$E(Z_g;u,v) = \frac{(uv-1)^{\dim C-1}}{uv} + \frac{(-1)^{\dim C}}{uv} \sum_{C_2\subseteq C} B([C_2,C]^*; u,v)S(C_2,vu^{-1})u^{\dim C_2},$$ where the polynomials $B$ are from Definition \[Q\]. 
We use Lemma \[BfromG\] and Definition \[d:spol\] to rewrite this as $$\begin{gathered} E(Z_g;u,v) = \frac{(uv-1)^{\dim C-1}}{uv} +\frac{(-1)^{\dim C}}{uv}\times \\ \times \sum_{C_2\subseteq C_1\subseteq C} u^{\dim C_2} S(C_2,u^{-1}v) G([C_2,C_1],u^{-1}v)(-u)^{\dim C_1-\dim C_2} G([C_1,C]^*,uv) \\ =\frac{(uv-1)^{\dim C-1}}{uv}+\frac{(-1)^{\dim C}}{uv} \sum_{C_1\subseteq C}u^{\dim C_1} \tilde S(C_1,u^{-1}v) G([C_1,C]^*,uv).\end{gathered}$$ The definition of $G$-polynomials assures that the degree of $u^{\dim C_1}G([C_1,C]^*,uv)$ is at most $\dim C$ with the equality only when $C_1=C$. Therefore, the graded dimension of $R_1(g,C)$ can be read off the same way as in the proof of Proposition \[p:simp\] from the coefficients at total degree $\dim C-2$ in the above sum. “Deformed” rings and modules {#section.brel} ============================ While this section may serve as an invitation to a new theory of “deformed” rings and modules, the goal here is to prove the equality (\[e:expe\]), by showing that the graded dimension formula of Proposition \[p:nonsimp\] holds for the spaces $R_1(g,C)^\Sigma$ from Definition \[d:rings\]. To prove the formula we use the recent work of Bressler and Lunts (see [@bl], and also [@bbfk]). This requires us to first study Cohen-Macaulay modules over the deformed semigroup rings $\CC[C]^\Sigma$. First, we want to generalize the nondegeneracy notion: \[d:nond\] Let $C$ be a Gorenstein cone in a lattice $L$, subdivided by a fan $\Sigma$. Given $g=\sum_{m\in C,\deg m=1} g(m)[m]$, set $$g_j = \sum_{m\in C,\deg m=1}(m\cdot n_j) g(m)[m],\quad\text{ for } j=1,\dots,\dim C$$ where $\{n_1,\dots,n_{\dim C}\}\subset{\rm Hom}(L,{\Bbb Z})$ descends to a basis of ${\rm Hom}(L,{\Bbb Z})/C^\perp$. The element $g$ is called [*$\Sigma$-regular (nondegenerate)*]{} if $\{g_1,\ldots,g_{\dim C}\}$ forms a regular sequence in the deformed semigroup ring $\CC[C]^\Sigma$. 
\[r:nond\] When $\Sigma$ is a trivial subdivision, [@b1 Theorem 4.8] shows that the above definition is consistent with the previous notion of nondegeneracy corresponding to the transversality of a hypersurface to torus orbits. \[t:cm\] [(i)]{} The ring $\CC[C]^\Sigma$ and its module $\CC[C^\circ]^\Sigma$ are Cohen-Macaulay.\ [(ii)]{} A generic element $g\in\CC[C]_1$ is $\Sigma$-regular. Moreover, for a generic $g$ the sequence $\{g_1,\ldots,g_{\dim C}\}$ from Definition \[d:nond\] is $\CC[C^\circ]^\Sigma$-regular.\ [(iii)]{} If $g\in\CC[C]_1$ is $\Sigma$-regular, then the sequence $\{g_1,\ldots,g_{\dim C}\}$ is $\CC[C^\circ]^\Sigma$-regular. Part [(ii)]{} follows from the proofs of Propositions 3.1 and 3.2 in [@Bor.locstring]. The reader should notice that the proofs use degenerations defined by projective simplicial subdivisions, and any fan admits such a subdivision. Then, part [(ii)]{} implies [(i)]{}, by the definition of Cohen-Macaulay, while part [(iii)]{} follows from [(i)]{} and Proposition 21.9 in [@e]. As a corollary of Theorem \[t:cm\], we get the following simple description of $\Sigma$-regular elements: \[l:iff\] An element $g\in\CC[C]_1$ is $\Sigma$-regular, if and only if its restriction to all maximum-dimensional cones $C'\in\Sigma(\dim C)$ is nondegenerate in $\CC[C']$. Since $\CC[C]^\Sigma$ is Cohen-Macaulay, the regularity of a sequence is equivalent to the quotient by the sequence having a finite dimension, by [@ma Theorem 17.4]. One can check that $\CC[C]^\Sigma$ is filtered by the modules $R_k$ defined as the span of $[m]$ such that the minimum cone that contains $m$ has dimension at least $k$. The $k$-th graded quotient of this filtration is the direct sum of $\CC[C_1^\circ]$ by all $k$-dimensional cones $C_1$ of $\Sigma$. If $g$ is nondegenerate for every cone of maximum dimension, then its projection to any cone $C_1$ is nondegenerate, and Theorem \[t:cm\] shows that it is nondegenerate for each $\CC[C_1^\circ]$. 
Then by decreasing induction on $k$ one shows that $R_k/\{g_1,\ldots,g_{\dim C}\}R_k$ is finite-dimensional. In the other direction, it is easy to see that for every $C'\in \Sigma$ the $\CC[C]^\Sigma$-module $\CC[C']$ is a quotient of $\CC[C]^\Sigma$, which gives a surjection $$\CC[C]^\Sigma/\{g_1,\ldots,g_{\dim C}\}\CC[C]^\Sigma @>>>\CC[C']/\{g_1|_{C'},\dots,g_{\dim C}|_{C'}\}\CC[C']@>>>0.$$ The above lemma implies that the property of $\Sigma$-regularity is preserved under restriction: Let $C$ be a Gorenstein cone in a lattice $L$, subdivided by a fan $\Sigma$. If $g\in\CC[C]_1$ is $\Sigma$-regular, then $g\in\CC[C_1]_1$ is $\Sigma$-regular for all faces $C_1\subseteq C$. Let $g\in\CC[C]_1$ be $\Sigma$-regular. By Lemma \[l:iff\], the restriction $g_{C'}$ is nondegenerate in $\CC[C']$ for all $C'\in\Sigma(\dim C)$. Since the property of nondegeneracy associated with a hypersurface is preserved under restriction, $g_{C_1'}$ is nondegenerate in $\CC[C_1']$ for all $C_1'\in\Sigma(\dim C_1)$ contained in $C_1$. Applying Lemma \[l:iff\] again, we deduce the result. The next result generalizes [@b1 Proposition 9.4] and [@Bor.locstring Proposition 3.6]. \[Zreg\] Let $g\in\CC[C]_1$ be $\Sigma$-regular. Then $R_0(g,C)^\Sigma$ and $R_0(g,C^\circ)^\Sigma$ have graded dimensions $S(C,t)$ and $t^{\dim C}S(C,t^{-1})$, respectively, and there exists a nondegenerate pairing $$\langle\_,\_\rangle: R_0(g,C)_k^\Sigma\times R_0(g,C^\circ)_{\dim C-k}^\Sigma\to R_0(g,C^\circ)_{\dim C}^\Sigma\cong\CC,$$ induced by the multiplicative $R_0(g,C)^\Sigma$-module structure. It is easy to see that the above statement is equivalent to saying that $\CC[C^\circ]^\Sigma$ is the canonical module for $\CC[C]^\Sigma$. When $\Sigma$ consists of the faces of $C$ only, this is well-known (cf. [@d]). To deal with the general case, we will heavily use the results of [@e], Chapter 21. We denote $A=\CC[C]^\Sigma$. 
For every cone $C_1$ of $\Sigma$ the vector spaces $\CC[C_1]$ and $\CC[C_1^\circ]$ are equipped with the natural $A$-module structures. By Proposition 21.10 of [@e], modified for the graded case, we get $$\Ext^i_A(\CC[C_1],w_A) \iso \Bigl\{ \begin{array}{ll} \CC[C_1^\circ],&i=\codim(C_1)\\ 0,&i\neq\codim(C_1) \end{array} \Bigr.$$ where $w_A$ is the canonical module of $A$. Consider now the complex $\cal F$ of $A$-modules $$0@>>>F^0@>>>F^1@>>>\cdots @>>>F^d@>>>0$$ where $$F^n = \bigoplus_{C_1\in\Sigma,\codim(C_1)=n} \CC[C_1]$$ and the differential is a sum of the restriction maps with signs according to the orientations. The nontrivial cohomology of $\cal F$ is located at $F^0$ and equals $\CC[C^\circ]^\Sigma$. Indeed, by looking at each graded piece separately, we see that the cohomology occurs only at $F^0$, and then the kernel of the map to $F^1$ is easy to describe. We can now use the complex $\cal F$ and the description of $\Ext^i_A(\CC[C_1],w_A)$ to calculate $\Hom_A(\CC[C^\circ]^\Sigma,w_A)$. The resulting spectral sequence degenerates immediately, and we conclude that $\Hom_A(\CC[C^\circ]^\Sigma,w_A)$ has a filtration such that the associated graded module is naturally isomorphic to $$\bigoplus_{C_1\in \Sigma} \CC[C_1^\circ].$$ By duality of maximal Cohen-Macaulay modules (see [@e]), it suffices to show that $\Hom_A(\CC[C^\circ]^\Sigma,w_A)\iso A$, but the above filtration only establishes that it has the correct graded pieces, so extra arguments are required. Let $C^\prime$ be a cone of $\Sigma$ of maximum dimension. We observe that $\cal F$ contains a subcomplex ${\cal F}^\prime$ such that $$F^{\prime n}= \bigoplus_{C_1\subseteq C^\prime} \CC[C_1].$$ As in the case of $\cal F$, the cohomology of ${\cal F}^\prime$ occurs only at $F^{\prime 0}$ and equals $\CC[C^{\prime\circ}]$. By the snake lemma, the cohomology of ${\cal F}/{\cal F}^\prime$ also occurs at the zeroth spot and equals $\CC[C^\circ]^\Sigma/\CC[C^{\prime\circ}]$.
By looking at the spectral sequences again, we see that $$\Ext^{>0}(\CC[C^\circ]^\Sigma/\CC[C^{\prime\circ}],w_A)=0$$ and we have a grading-preserving surjection $$\Hom_A(\CC[C^\circ]^\Sigma,w_A) @>>>\Hom_A(\CC[C^{\prime\circ}],w_A)@>>>0.$$ Since $\Hom_A(\CC[C^\prime],w_A)\iso \CC[C^{\prime\circ}]$, duality of maximal Cohen-Macaulay modules over $A$ shows that $$\Hom_A(\CC[C^{\prime\circ}],w_A)\iso \CC[C^{\prime}]$$ so for every $m\in C^\prime$ the element $[m]$ of $A$ does not annihilate the degree zero element of $\Hom_A(\CC[C^\circ]^\Sigma,w_A)$. Looking at all $C^\prime$ together, we conclude that $$\Hom_A(\CC[C^\circ]^\Sigma,w_A)\iso A$$ which finishes the proof. \[p:nons\] Let $g\in\CC[C]_1$ be $\Sigma$-regular. Then the pairing $\langle\_\,,\_\rangle$ induces a symmetric nondegenerate pairing $\{\_\,,\_\}$ on $R_1(g,C)^\Sigma$, defined by $$\{ x,y\} = \langle x,y'\rangle$$ where $y'$ is an element of $R_0(g,C^\circ)^\Sigma$ that maps to $y$. The nondegeneracy of the pairing $\{\_\,,\_\}$ follows from that of $\langle\_\,,\_\rangle$. The pairing is symmetric, because it comes from the commutative product on $\CC[C^\circ]^\Sigma$. \[dimtilde\] Let $C$ be a Gorenstein cone subdivided by a projective fan $\Sigma$. If $g\in\CC[C]_1$ is $\Sigma$-regular, then the graded dimension of $R_1(g,C)^\Sigma$ is $\tilde S(C,t)$. We will use the description of Bressler and Lunts [@bl] of locally free flabby sheaves on the finite ringed topological space associated to the cone $C$. We recall here the basic definitions. Consider the set $P$ of all faces of the cone $C$. It is equipped with the topology in which open sets are subfans, i.e. the sets of faces closed under the operation of taking a face. Bressler and Lunts define a sheaf $\cal A$ of graded commutative rings on $P$ whose sections over each open set are the ring of continuous piecewise polynomial functions on the union of all strata of this set.
The grading of linear functions will be set to $1$, contrary to the convention of [@bl]. They further restrict their attention to the sheaves $\cal F$ of $\cal A$-modules on $P$ that satisfy the following conditions. $\bullet$ For every face $C_1$ of $C$, sections of $\cal F$ over the open set that corresponds to the union of all faces of $C_1$ form a free module over the ring of polynomial functions on $C_1$. $\bullet$ ${\cal F}$ is flabby, i.e. all restriction maps are surjective. We will use the following crucial result. [@bl] Every sheaf $\cal F$ that satisfies the above two properties is isomorphic to a direct sum of indecomposable graded sheaves ${\cal L}_{C_1}t^i$, where $C_1$ is a face of $C$ and $t^i$ indicates a shift in grading. For each indecomposable sheaf ${\cal L}_{C_1}$ the space of global sections $\Gamma(P,{\cal L}_{C_1})$ is a module over the polynomial functions on $C$ of graded rank $G([C_1,C]^*,t)$ where $[C_1,C]$ denotes the Eulerian subposet of $P$ that consists of all faces of $C$ that contain $C_1$. Now let us define a sheaf ${\cal B}(g)$ on $P$ whose sections over the open subset $I\subseteq P$ are $\CC[\cap_{i\in I}C_i]^\Sigma$. It is clearly a flabby sheaf, which can be given a grading by $\deg(\_)$. Moreover, ${\cal B}(g)$ can be given a structure of a sheaf of $\cal A$-modules as follows. Every linear function $\varphi$ on a face $C_1$ defines a logarithmic derivative $$\partial_\varphi g:=\sum_{m\in C_1,\deg m =1} \varphi(m)g(m)[m]$$ of $g$, which is an element of degree $1$ in $\CC[C_1]^\Sigma$. Then the action of $\varphi$ is given by the multiplication by $\partial_\varphi g$, and this action is extended to all polynomial functions on the cone $C_1$. A similar construction clearly applies to continuous piecewise polynomial functions for any open set of $P$.
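As a concrete illustration of the logarithmic derivative (an example of our own, with the cone and coefficients chosen purely for concreteness, and with $g(m)$ read as the coefficient of $[m]$ in $g$), consider the two-dimensional cone $C_1$ generated by $(0,1)$ and $(1,1)$ in $\ZZ^2$, trivially subdivided, with degree given by the second coordinate:

```latex
% C_1 = cone((0,1),(1,1)) in Z^2; deg(x,y) = y.
% The degree-one lattice points are m_0 = (0,1) and m_1 = (1,1),
% so a degree-one element is g = a[m_0] + b[m_1].
% For the linear function \varphi(x,y) = x on C_1:
\partial_\varphi g
  = \sum_{m\in C_1,\ \deg m = 1}\varphi(m)\,g(m)\,[m]
  = \varphi(m_0)\,a\,[m_0] + \varphi(m_1)\,b\,[m_1]
  = b\,[m_1],
```

which is indeed an element of degree $1$ in $\CC[C_1]$.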
Proposition \[Zreg\] ensures that ${\cal B}(g)$ satisfies the second condition of Bressler and Lunts, and can therefore be decomposed into a direct sum of ${\cal L}_{C_1}t^i$ for various $C_1$ and $i$. The definition of $R_1(g,C)^\Sigma$ implies that its graded dimension is equal to the graded rank of the stalk of ${\cal B}(g)$ at the point $C\in P$. Since the graded rank of ${\cal B}(g)$ is $S(C,t)$, we conclude that $$S(C,t) = \sum_{C_1\subseteq C} {\rm gr.dim.}R_1(g_{C_1},C_1)^\Sigma G([C_1,C]^*,t).$$ To finish the proof of Theorem \[dimtilde\], it remains to apply Lemma \[Ginverse\]. Decomposition of stringy Hodge numbers for hypersurfaces {#section.bbo} ======================================================== In this section, we prove a generalization of [@bd Theorem 8.2] for all Calabi-Yau hypersurfaces, which gives a decomposition of the stringy Hodge numbers of the hypersurfaces. First, we recall a formula for the stringy Hodge numbers of Calabi-Yau hypersurfaces obtained in [@bb]. Then, using a bit of combinatorics, we rewrite this formula precisely into the form of [@bd Theorem 8.2] with $\tilde S$ defined in the previous section. The stringy Hodge numbers of a Calabi-Yau complete intersection have been calculated in [@bb] in terms of the numbers of integer points inside multiples of various faces of the reflexive polytopes $\Delta$ and $\Delta^*$ as well as some polynomial invariants of partially ordered sets. A special case of the main result in [@bb] is the following description of the stringy $E$-polynomials of Calabi-Yau hypersurfaces. \[st.formula\] [@bb] Let $K\subset M\oplus{\Bbb Z}$ be the Gorenstein cone over a reflexive polytope $\Delta\subset M$. For every $(m,n)\in (K,K^*)$ with $m\cdot n =0$ denote by $x(m)$ the minimum face of $K$ that contains $m$ and by $x^*(n)$ the dual of the minimum face of $K^*$ that contains $n$.
Also, let $A_{(m,n)}(u,v)$ be $$\frac{(-1)^{\dim(x^*(n))}}{uv} (v-u)^{\dim(x(m))}(uv -1)^{d+1-\dim(x^*(n))}B([x(m),x^*(n)]^*;u,v)$$ where the function $B$ is defined in Definition \[Q\] in the Appendix. Then $$E_{\rm st}(Y; u,v)= \sum_{(m,n) \in (K,K^*),m\cdot n =0} \left(\frac{u}{v}\right)^{{\rm deg}\,m} A_{(m,n)}(u,v) \left(\frac{1}{uv}\right)^{{\rm deg}\,n}$$ for an ample nondegenerate Calabi-Yau hypersurface $Y$ in $\PP_\Delta={\rm Proj}(\CC[K])$. The mirror duality $E_{\st}(Y;u,v) = (-u)^{d-1}E_\st(Y^*;u^{-1},v)$ was proved in [@bb] as an immediate corollary of the above formula and the duality property $B(P; u,v) = (-u)^{\rk P} B(P^*;u^{-1},v)$. It was not noticed there that Lemma \[Ginverse\] allows one to rewrite the $B$-polynomials in terms of $G$-polynomials, which we will now use to give a formula for $E_\st(Y;u,v)$ that explicitly obeys the mirror duality. The next result is a generalization of Theorem 8.2 in [@bd] with $\tilde S$ from Definition \[d:spol\]. \[estfromtilde\] Let $Y$ be an ample nondegenerate Calabi-Yau hypersurface in $\PP_\Delta={\rm Proj}(\CC[K])$. Then $$E_\st(Y;u,v) = \sum_{C\subseteq K} (uv)^{-1}(-u)^{\dim C}\tilde S(C,u^{-1}v) \tilde S(C^*,uv).$$ [*Proof.*]{} First, observe that the formula for $E_{\rm st}(Y; u,v)$ from Theorem \[st.formula\] can be written as $$\sum_{m,n,C_1,C_2} \frac{(-1)^{\dim C_2^*}}{uv} (v-u)^{\dim(C_1)}B([C_2,C_1^*];u,v)(uv -1)^{\dim C_2} \left(\frac{u}{v}\right)^{{\rm deg} m} \left(\frac{1}{uv}\right)^{{\rm deg}n}$$ where the sum is taken over all pairs of cones $C_1\subseteq K, C_2\subseteq K^*$ that satisfy $C_1\cdot C_2 = 0$ and all $m$ and $n$ in the relative interiors of $C_1$ and $C_2$, respectively.
We use the standard duality result (see Definition \[d:bd\]) $$\sum_{n\in {\rm int}(C)} t^{-\deg(n)}= (t-1)^{-\dim C}S(C,t)$$ to rewrite the above formula as $$\frac 1{uv}\sum_{C_1\cdot C_2=0} (-1)^{\dim(C_2^*)}u^{\dim C_1} B([C_2,C_1^*];u,v)S(C_1,u^{-1}v)S(C_2,uv).$$ Then apply Lemma \[BfromG\] to get $$E_\st(Y;u,v) = \frac 1{uv} \sum_{C\subseteq K} \sum_{C_1\subseteq C,C_2\subseteq C^*} (-1)^{\dim(C_2^*)}u^{\dim C_1} ~\times$$ $$\times~ G([C_1,C],u^{-1}v) (-u)^{\dim C_1^* -\dim C^*} G([C_2,C^*],uv) S(C_1,u^{-1}v)S(C_2,uv).$$ It remains to use Definition \[d:spol\]. String cohomology construction via intersection cohomology {#section.general} ========================================================== Here, we construct the string cohomology space for $\QQ$-Gorenstein toroidal varieties, satisfying the assumption of Proposition \[BDvsB\]. The motivation for this construction comes from the conjectural description of the string cohomology space for ample Calabi-Yau hypersurfaces and a look at the formula in [@bd Theorem 6.10] for the stringy $E$-polynomial of a Gorenstein variety with abelian quotient singularities. This immediately leads to a decomposition of the string cohomology space as a direct sum of tensor products of the usual cohomology of the closure of a stratum with the spaces $R_1(g,C)$ from Proposition \[p:simp\]. Then the property that the intersection cohomology of an orbifold is naturally isomorphic to the usual cohomology leads us to the construction of the string cohomology space for $\QQ$-Gorenstein toroidal varieties. We show that this space has the dimension prescribed by Definition \[d:bd\] for Gorenstein complete toric varieties and the nondegenerate complete intersections in them. \[d:orbdef\] Let $X=\bigcup_{i \in I} X_i$ be a Gorenstein complete variety with abelian quotient singularities, satisfying the assumption of Proposition \[BDvsB\].
The stringy Hodge spaces of $X$ are naturally isomorphic to $$H_{\rm st}^{p,q}(X)\cong\bigoplus_{\begin{Sb}i \in I\\ k\ge0 \end{Sb}}H^{p-k,q-k}(\overline{X}_i)\otimes R_1(\omega_{\sigma_i},\sigma_i)_k,$$ where $\sigma_i$ is the Gorenstein simplicial cone of the singularity along the stratum $X_i$, and $\omega_{\sigma_i}\in\CC[\sigma_i]_1$ are nondegenerate such that, for $\sigma_j\subset \sigma_i$, $\omega_{\sigma_i}$ maps to $\omega_{\sigma_j}$ by the natural projection $\CC[\sigma_i]@>>>\CC[\sigma_j]$. Since $\overline{X}_i$ is a compact orbifold, the coefficient $e^{p,q}(\overline{X}_i)$ of the monomial $u^p v^q$ in the polynomial $E(\overline{X}_i;u,v)$ is equal to $(-1)^{p+q}h^{p,q}(\overline{X}_i)$, by Remark \[r:epol\]. Therefore, Proposition \[p:simp\] shows that the above decomposition of $H_{\rm st}^{p,q}(X)$ agrees with [@bd Theorem 6.10], and the dimensions $h_{\rm st}^{p,q}(X)$ coincide with those from Definition \[d:bcangor\]. Since we expect that the usual cohomology must be replaced in Definition \[d:orbdef\] by the intersection cohomology for Gorenstein toroidal varieties, the next result is a natural generalization of Theorem 6.10 in [@bd]. \[8.3\] Let $X=\bigcup_{i \in I} X_i$ be a Gorenstein complete toric variety or a nondegenerate complete intersection of Cartier hypersurfaces in the toric variety, where the stratification is induced by the torus orbits. Then $$E_{\rm st}(X;u,v)= \sum_{i \in I} E_{\rm int}(\overline X_i;u,v) \cdot \tilde S(\sigma_i,uv),$$ where $\sigma_i$ is the Gorenstein cone of the singularity along the stratum $X_i$.
Similarly to Corollary 3.17 in [@bb], we have $$E_{\rm int}(\overline X_i;u,v)=\sum_{X_j\subseteq \overline X_i}E(X_j;u,v)\cdot G([\sigma_i\subseteq \sigma_j]^*,uv).$$ Hence, we get $$\sum_{i \in I} E_{\rm int}(\overline X_i;u,v) \cdot \tilde S(\sigma_i,uv) = \sum_{i\in I}\sum_{X_j\subseteq \overline X_i} E(X_j;u,v) G([\sigma_i\subseteq \sigma_j]^*,uv)\tilde S(\sigma_i,uv)$$ $$=\sum_{j\in I} E(X_j;u,v) \Bigl(\sum_{\sigma_i\subseteq \sigma_j} G([\sigma_i\subseteq \sigma_j]^*,uv)\tilde S(\sigma_i,uv) \Bigr) = \sum_{j\in I} E(X_j;u,v) S(\sigma_j,uv),$$ where at the last step we have used the formula for $\tilde S$ and Lemma \[Ginverse\]. Based on the above theorem, we propose the following conjectural description of the stringy Hodge spaces for $\QQ$-Gorenstein toroidal varieties. \[d:tordef\] Let $X=\bigcup_{i \in I} X_i$ be a $\QQ$-Gorenstein $d$-dimensional complete toroidal variety, satisfying the assumption of Proposition \[BDvsB\]. The stringy Hodge spaces of $X$ are defined by: $$H_{\rm st}^{p,q}(X):=\bigoplus_{\begin{Sb}i \in I\\ k\ge0 \end{Sb}}H_{\rm int}^{p-k,q-k}(\overline{X}_i)\otimes R_1(\omega_{\sigma_i},\sigma_i)_k,$$ where $\sigma_i$ is the Gorenstein cone of the singularity along the stratum $X_i$, and $\omega_{\sigma_i}\in\CC[\sigma_i]_1$ are nondegenerate such that, for $\sigma_j\subset \sigma_i$, $\omega_{\sigma_i}$ maps to $\omega_{\sigma_j}$ by the natural projection $\CC[\sigma_i]@>>>\CC[\sigma_j]$. Here, $p,q$ are rational numbers in $[0,d]$, and we assume that $H_{\rm int}^{p-k,q-k}(\overline{X}_i)=0$ if $p-k$ or $q-k$ is not a non-negative integer. Toric varieties and nondegenerate complete intersections of Cartier hypersurfaces have the stratification induced by the torus orbits, which satisfies the assumptions in the above definition. String cohomology vs.
Chen-Ruan orbifold cohomology {#s:vs} =================================================== Our next goal is to compare the two descriptions of string cohomology for Calabi-Yau hypersurfaces to the Chen-Ruan orbifold cohomology. Using the work of [@p], we will show that in the case of ample orbifold Calabi-Yau hypersurfaces the three descriptions coincide. We refer the reader to [@cr] for the orbifold cohomology theory and only use [@p] in order to describe the orbifold cohomology for complete simplicial toric varieties and Calabi-Yau hypersurfaces in Fano simplicial toric varieties. From Theorem 1 in [@p Section 4] and the definition of the orbifold Dolbeault cohomology space we deduce: Let ${{{\PP}_{\Sigma}}}$ be a $d$-dimensional complete simplicial toric variety. Then the orbifold Dolbeault cohomology space of ${{{\PP}_{\Sigma}}}$ is $$H^{p,q}_{orb}({{{\PP}_{\Sigma}}};\CC)\cong\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\QQ\end{Sb}} H^{p-l,q-l}(V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t,$$ where $T(\sigma)_l=\{\sum_{\rho\subset\sigma}a_\rho [e_\rho]\in N:a_\rho\in(0,1), \sum_{\rho\subset\sigma}a_\rho=l\}$ (when $\sigma=0$, set $l=0$ and $T(\sigma)_l=\CC$), and $V(\sigma)$ is the closure of the torus orbit corresponding to $\sigma\in\Sigma$. Here, $p$ and $q$ are rational numbers in $[0,d]$, and $H^{p-l,q-l}(V(\sigma))=0$ if $p-l$ or $q-l$ is not integral. (The elements of $\oplus_{0\ne\sigma\in\Sigma,l}T(\sigma)_l$ correspond to the twisted sectors.) In order to compare this result to the description in Definition \[d:tordef\], we need to specify the $\omega_{\sigma_i}$ for the toric variety ${{{\PP}_{\Sigma}}}$. The stratification of ${{{\PP}_{\Sigma}}}$ is given by the torus orbits: ${{{\PP}_{\Sigma}}}=\cup_{\sigma\in\Sigma}\TT_\sigma$. The singularity of the variety ${{{\PP}_{\Sigma}}}$ along the stratum $\TT_\sigma$ is given by the cone $\sigma$, so we need to specify a nondegenerate $\omega_\sigma\in\CC[\sigma]_1$ for each $\sigma\in\Sigma$.
If $\omega_\sigma=\sum_{\rho\subset\sigma}\omega_\rho [e_\rho]$ with $\omega_\rho\ne0$, then one can deduce that $\omega_\sigma$ is nondegenerate using Remark \[r:nond\] and the fact that the nondegeneracy of a hypersurface in a complete simplicial toric variety (in this case, it corresponds to a simplex) is equivalent to the logarithmic derivatives not vanishing simultaneously. So, picking any nonzero coefficients $\omega_\rho$ for each $\rho\in\Sigma(1)$ gives a nondegenerate $\omega_\sigma\in\CC[\sigma]_1$ satisfying the condition of Definition \[d:tordef\]. For such $\omega_\sigma$, note that the set $Z=\{\sum_{\rho\subset\sigma} (e_\rho\cdot m) \omega_\rho e_\rho:\,m\in {\rm Hom}(N,{\Bbb Z})\}$ is the linear span of the $e_\rho$ for $\rho\subset\sigma$. Hence, $$R_0(\omega_\sigma,\sigma)_l=(\CC[\sigma]/Z\cdot\CC[\sigma])_l\cong \bigoplus_{t\in \tilde{T}(\sigma)_l}\CC t,$$ where $\tilde T(\sigma)_l=\{\sum_{\rho\subset\sigma}a_\rho e_\rho\in N:a_\rho\in[0,1), \sum_{\rho\subset\sigma}a_\rho=l\}$, and $$R_1(\omega_\sigma,\sigma)_l\cong \bigoplus_{t\in{T}(\sigma)_l}\CC t.$$ This shows that the orbifold Dolbeault cohomology for complete simplicial toric varieties can be obtained as a special case of the description of string cohomology in Definition \[d:tordef\]. We will now explain how the parameter $\omega$ should be related to the complexified Kähler class. We do not have a definition of the “orbifold” Kähler cone even for simplicial toric varieties. However, we know the Kähler classes in $H^2({{{\PP}_{\Sigma}}},\RR)$. If ${{{\PP}_{\Sigma}}}$ is a projective simplicial toric variety, then $H^2({{{\PP}_{\Sigma}}},\RR)\cong PL(\Sigma)/M_\RR$, where $PL(\Sigma)$ is the set of $\Sigma$-piecewise linear functions $\varphi:N_\RR@>>>\RR$, which are linear on each $\sigma\in\Sigma$. The Kähler cone $K(\Sigma)\subset H^2({{{\PP}_{\Sigma}}},\RR)$ of ${{{\PP}_{\Sigma}}}$ consists of the classes of the upper strictly convex $\Sigma$-piecewise linear functions.
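To make the last description concrete, here is the standard computation for ${\PP}^1$ (an illustrative example of our own, not needed in the sequel):

```latex
% Fan of P^1: rays through e and -e in N_R = R.
% A Sigma-piecewise linear function is determined by the pair
% (a,b) = (varphi(e), varphi(-e)), so PL(Sigma) = R^2,
% and the global linear functions M_R = R enter as the pairs (c,-c).
H^2(\PP^1,\RR)\cong PL(\Sigma)/M_\RR\cong\RR,\qquad
[\varphi]\mapsto\varphi(e)+\varphi(-e).
% varphi is upper strictly convex iff varphi(e)+varphi(-e) > 0,
% so K(Sigma) is the positive half-line, i.e. the ample cone of P^1.
```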
One may call $K(\Sigma)$ the “untwisted” part of the orbifold Kähler cone. So, we can introduce the [*untwisted complexified Kähler space*]{} of the complete simplicial toric variety: $$K^{\rm untwist}_\CC({{{\PP}_{\Sigma}}})= \{\omega\in H^2({{{\PP}_{\Sigma}}},\CC):Im(\omega)\in K(\Sigma)\}/{\rm im}H^2({{{\PP}_{\Sigma}}},\ZZ).$$ Its elements may be called the [*untwisted complexified Kähler classes*]{}. We can find a sufficiently generic $\omega\in K^{\rm untwist}_\CC({{{\PP}_{\Sigma}}})$ represented by a complex valued $\Sigma$-piecewise linear function $\varphi_\omega:N_\CC@>>>\CC$ such that $\varphi_\omega(e_\rho)\ne0$ for $\rho\in\Sigma(1)$. Setting $\omega_\rho=\exp(\varphi_\omega(e_\rho))$ produces our previous parameters $\omega_\sigma$ for $\sigma\in\Sigma$. This is how we believe $\omega_\sigma$ should relate to the complexified Kähler classes, up to perhaps some instanton corrections. We next turn our attention to the case of an ample Calabi-Yau hypersurface $Y$ in a complete simplicial toric variety ${{{\PP}_{\Sigma}}}$. Section 4.2 in [@p] works with a generic nondegenerate anticanonical hypersurface. However, one can avoid the use of Bertini’s theorem and state the result without “generic”. It is shown that the nondegenerate anticanonical hypersurface $Y$ is a suborbifold of ${{{\PP}_{\Sigma}}}$, the twisted sectors of $Y$ are obtained by intersecting with the closures of the torus orbits, and the degree shifting numbers are the same as for the toric variety ${{{\PP}_{\Sigma}}}$. Therefore, we conclude: \[p:des\] Let $Y\subset{{{\PP}_{\Sigma}}}$ be an ample Calabi-Yau hypersurface in a complete simplicial toric variety. Then $$H^{p,q}_{orb}(Y;\CC)\cong\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\ZZ\end{Sb}} H^{p-l,q-l}(Y\cap V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t,$$ where $T(\sigma)_l=\{\sum_{\rho\subset\sigma}a_\rho [e_\rho]\in N:a_\rho\in(0,1), \sum_{\rho\subset\sigma}a_\rho=l\}$ (when $\sigma=0$, set $l=0$ and $T(\sigma)_l=\CC$).
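For example (our own illustration of the index sets $T(\sigma)_l$, using the familiar $A_1$ singularity), take $\sigma={\rm cone}(e_1,e_2)$ in the lattice $N=\ZZ^2+\ZZ\cdot(\tfrac12,\tfrac12)$, which describes the quotient singularity $\CC^2/\{\pm1\}$:

```latex
% Elements a_1 e_1 + a_2 e_2 of N with a_1, a_2 in (0,1) and a_1 + a_2 = l:
% the only such lattice point is (1/2,1/2), with l = 1.  Hence
T(\sigma)_1=\bigl\{\tfrac12 e_1+\tfrac12 e_2\bigr\},\qquad
T(\sigma)_l=\emptyset\quad(l\neq1),
% i.e. a single twisted sector with degree shift 1, contributing
% H^{p-1,q-1}(Y\cap V(\sigma)) to H^{p,q}_{orb}(Y;\CC).
```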
As in the case of the toric variety, we pick $\omega_\sigma=\sum_{\rho\subset\sigma}\omega_\rho [e_\rho]$ with $\omega_\rho\ne0$. Then, by the above proposition, $$H_{\rm st}^{p,q}(Y)\cong H^{p,q}_{orb}(Y;\CC).$$ We now want to show that the description in Proposition \[p:des\] is equivalent to the one in Conjecture \[semiampleconj\]. First, note that the proper faces $C^*$ of the Gorenstein cone $K^*$ in Conjecture \[semiampleconj\] are in one-to-one correspondence with the cones $\sigma\in\Sigma$. Moreover, under this correspondence $\CC[C^*]\cong\CC[\sigma]$. If we take $\omega\in\CC[K^*]^\Sigma_1$ to be $[0,1]+\sum_{\rho\in\Sigma(1)}\omega_\rho[e_\rho,1]$, then $\omega$ is $\Sigma$-regular and $$R_1(\omega_{C^*},C^*)_l\cong R_1(\omega_\sigma,\sigma)_l\cong \oplus_{t\in{T}(\sigma)_l}\CC t.$$ On the other hand, the Hodge component $H^{p-l,q-l}(Y\cap V(\sigma))$ decomposes into the direct sum $$H^{p-l,q-l}_{\rm toric}(Y\cap V(\sigma))\oplus H^{p-l,q-l}_{\rm res}(Y\cap V(\sigma))$$ of the toric and residue parts. Since $Y\cap V(\sigma)$ is an ample hypersurface, from [@bc Theorem 11.8] and Section \[section.anvar\] it follows that $$H^{p-l,q-l}_{\rm res}(Y\cap V(\sigma))\cong R_1(f_C,C)_{q-l+1},$$ where $C\subset K$ is the face dual to $C^*$ that corresponds to $\sigma$, and $p+q-2l=\dim Y\cap V(\sigma)=d-\dim\sigma-1=d-\dim C^*-1$. If $p+q-2l\ne d-\dim C^*-1$, then $H^{p-l,q-l}_{\rm res}(Y\cap V(\sigma))=0$. Hence, we get $$\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\ZZ\end{Sb}} H_{\rm res}^{p-l,q-l}(Y\cap V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t \cong \bigoplus_{0\ne C\subseteq K} R_1(\omega_{C^*},C^*)^\Sigma_{a} \otimes R_1(f_C,C)_{b},$$ where $a=(p+q-d+\dim C^*+1)/2$ and $b=(q-p+\dim C)/2$.
We are left to show that $$\label{e:lef} \bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\ZZ\end{Sb}} H_{\rm toric}^{p-l,p-l}(Y\cap V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t \cong R_1(\omega,K^*)^\Sigma_{p+1}.$$ Notice that the dimensions of the spaces on both sides coincide, so it suffices to construct a surjective map between them. This will follow from the following proposition. \[p:sttor\] Let ${{{\PP}_{\Sigma}}}={\rm Proj}(\CC[K])$ be the Gorenstein Fano simplicial toric variety, where $K$ is as above. Then there is a natural isomorphism: $$H^{p,p}_{\rm st}({{{\PP}_{\Sigma}}})\cong\bigoplus_{\begin{Sb}\sigma\in\Sigma\\l\in\ZZ\end{Sb}} H^{p-l,p-l}(V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t\cong R_0(\omega,K^*)_p^\Sigma,$$ where $\omega=[0,1]+\sum_{\rho\in\Sigma(1)}\omega_\rho[e_\rho,1]$ with $\omega_\rho\ne0$. First, observe that the dimensions of the spaces in the isomorphisms coincide by our definition of string cohomology, Proposition \[Zreg\] and [@bd Theorem 7.2]. So, it suffices to construct a surjective map between them. We know the cohomology ring of the toric variety: $$H^{*}(V(\sigma))\cong \CC[D_\rho:\rho\in\Sigma(1),\rho+\sigma\in\Sigma(\dim\sigma+1)]/ (P(V(\sigma))+SR(V(\sigma))),$$ where $$SR(V(\sigma))=\bigl\langle D_{\rho_1}\cdots D_{\rho_k}:\{e_{\rho_1},\dots,e_{\rho_k}\} \not\subset\tau \text{ for all }\sigma\subset\tau\in\Sigma(\dim\sigma+1)\bigr\rangle$$ is the Stanley-Reisner ideal, and $$P(V(\sigma))=\biggl\langle \sum_{\rho\in\Sigma(1),\rho+\sigma\in\Sigma(\dim\sigma+1)} \langle m,e_\rho\rangle D_\rho: m\in M\cap\sigma^\perp\biggr\rangle.$$ Define the maps from $H^{p-l,p-l}(V(\sigma))\otimes \bigoplus_{t\in T(\sigma)_l}\CC t$ to $R_0(\omega,K^*)^\Sigma$ by sending $D_{\rho_1}\cdots D_{\rho_{p-l}}\otimes t$ to $\omega_{\rho_1}[e_{\rho_1}]\cdots \omega_{\rho_{p-l}} [e_{\rho_{p-l}}]\cdot t\in\CC[N]^\Sigma$. One can easily see that these maps are well-defined.
To finish the proof we need to show that the images cover $R_0(\omega,K^*)^\Sigma$. Every lattice point $[n]$ in the boundary of $K^*$ lies in the relative interior of a face $C\subset K^*$, and can be written as a linear combination of the minimal integral generators of $C$: $$[n]=\sum_{[e_\rho,1]\in C}(a_\rho+b_\rho)[e_\rho,1],$$ where $a_\rho\in[0,1)$ and $b_\rho$ are nonnegative integers. Let $C'\subseteq C$ be the cone spanned by those $[e_\rho,1]$ for which $a_\rho\ne0$. The lattice point $\sum_{[e_\rho,1]\in C'}(a_\rho)[e_\rho,1]$ projects to one of the elements $t$ from $T(\sigma)_l$ for some $l$ and $\sigma$ corresponding to $C'$. Using the relations $\sum_{\rho\in\Sigma(1)}\omega_\rho\langle m,e_\rho\rangle [e_\rho,1]=0$ in the ring $R_0(\omega,K^*)^\Sigma$, we get that $$[n]=\sum_{[e_\rho,1]\in C'}(a_\rho)[e_\rho,1]+ \sum_{\rho+\sigma\in\Sigma(\dim\sigma+1)}b'_\rho[e_\rho,1],$$ which comes from $H^{p-l,p-l}(V(\sigma))\otimes \CC t$ for an appropriate $p$. The surjectivity now follows from the fact that the boundary points of $K^*$ generate the ring $\CC[K^*]^\Sigma/\langle \omega\rangle$. The isomorphism (\[e:lef\]) follows from the above proposition and the presentation: $$H_{\rm toric}^{*}(Y\cap V(\sigma))\cong H^*(V(\sigma))/{\rm Ann}([Y\cap V(\sigma)])$$ (see (\[e:ann\])). Indeed, the map constructed in the proof of Proposition \[p:sttor\] produces a well-defined map between the right-hand side of (\[e:lef\]) and $R_0(\omega,K^*)^\Sigma/Ann([0,1])$ because the annihilator of $[Y\cap V(\sigma)]$ maps to the annihilator of $[0,1]$. On the other hand, $$(R_0(\omega,K^*)^\Sigma/Ann([0,1]))_p\cong R_1(\omega,K^*)^\Sigma_{p+1},$$ which is induced by the multiplication by $[0,1]$ in $R_0(\omega,K^*)^\Sigma$. We expect that the product structure on $H^{*}_{\rm st}({{{\PP}_{\Sigma}}})$ is given by the ring structure of $R_0(\omega,K^*)^\Sigma$.
Also, the ring structure on $R_1(\omega,K^*)^\Sigma_{*+1}$ induced from $R_0(\omega,K^*)^\Sigma/Ann([0,1])$ should give a subring of $H^{*}_{\rm st}(Y)$ for a generic $\omega$ in Conjecture \[semiampleconj\]. Moreover, $$\bigoplus_{p,q} R_1(\omega_{C^*},C^*)^\Sigma_{(p+q-d+\dim C^*+1)/2} \otimes R_1(f_C,C)_{(q-p+\dim C)/2},$$ should be a module over the ring $R_0(\omega,K^*)^\Sigma/Ann([0,1])$: $$a\cdot(b\otimes c)=\bar{a}b\otimes c,$$ for $a\in R_0(\omega,K^*)^\Sigma/Ann([0,1])$ and $(b\otimes c)$ from a component of the above direct sum, where $\bar{a}$ is the image of $a$ under the projection $R_0(\omega,K^*)^\Sigma@>>>R_0(\omega_{C^*},C^*)^\Sigma$. We can also say something about the product structure on the B-model chiral ring. The space $R_1(f,K)\cong R_0(f,K)/Ann([0,1])$ in Conjecture \[semiampleconj\], which lies in the middle cohomology $\oplus_{p+q=d-1}H^{p,q}_{\rm st}(Y)$, should be a subring of the B-model chiral ring, and $$\bigoplus_{p,q} R_1(\omega_{C^*},C^*)^\Sigma_{(p+q-d+\dim C^*+1)/2} \otimes R_1(f_C,C)_{(q-p+\dim C)/2},$$ should be a module over the ring $R_1(f,K)$, similarly to the description in the previous paragraph. These ring structures are consistent with the products on the usual cohomology and the B-model chiral ring $H^*(X,\bigwedge^*T_X)$ of the smooth semiample Calabi-Yau hypersurfaces $X$ in [@m3 Theorem 2.11(a,b)] and [@m2 Theorem 7.3(i,ii)]. Description of string cohomology inspired by vertex algebras {#section.vertex} ============================================================ Here we will give yet another description of the string cohomology spaces of Calabi-Yau hypersurfaces. It will appear as the cohomology of a certain complex, which was inspired by the vertex algebra approach to Mirror Symmetry. We will state the result first in the non-deformed case, and it will be clear what needs to be done in general. Let $K$ and $K^*$ be dual reflexive cones of dimension $d+1$ in the lattices $M$ and $N$ respectively.
We define the subspace $\CC[L]$ of $\CC[K]\otimes \CC[K^*]$ as the span of the monomials $[m,n]$ with $m\cdot n =0$. We also pick nondegenerate elements of degree one $f=\sum_m f_m [m]$ and $g=\sum_n g_n [n]$ in $\CC[K]$ and $\CC[K^*]$ respectively. Consider the space $$V = \Lambda^*(N_\CC)\otimes \CC[L].$$ The space $V$ is equipped with a differential $D$ given by $$D:= \sum_{m} f_m \contr m \otimes(\pi_L\circ[m]) + \sum_{n} g_n (\wedge n) \otimes(\pi_L\circ[n])$$ where $[m]$ and $[n]$ denote multiplication by the corresponding monomials in $\CC[K]\otimes\CC[K^*]$ and $\pi_L$ denotes the natural projection to $\CC[L]$. It is straightforward to check that $D^2=0$. \[Dcoh\] The cohomology $H$ of $V$ with respect to $D$ is naturally isomorphic to $$\bigoplus_{C\subseteq K} \Lambda^{\dim C^*}C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^*)$$ where $C^*_\CC$ denotes the vector subspace of $N_\CC$ generated by $C^*$. First observe that $V$ contains a subspace $$\bigoplus_{C\subseteq K} \Lambda^* N_\CC \otimes (\CC[C^\circ]\otimes \CC[C^{*\circ}])$$ which is invariant under $D$. It is easy to calculate the cohomology of this subspace under $D$, because the action commutes with the decomposition $\oplus_C$. For each $C$, the cohomology of $D$ on $\Lambda^* N_\CC \otimes (\CC[C^\circ]\otimes \CC[C^{*\circ}])$ is naturally isomorphic to $$\Lambda^{\dim C^*}C^*_\CC \otimes R_0(f,C^\circ)\otimes R_0(g,C^{*\circ}),$$ because $\Lambda^* N_\CC \otimes (\CC[C^\circ]\otimes\CC[C^{*\circ}])$ is a tensor product of the Koszul complex for $\CC[C^\circ]$ and the dual of the Koszul complex for $\CC[C^{*\circ}]$. As a result, we have a map $$\alpha:H_1\to H,~H_1:=\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes R_0(f,C^\circ)\otimes R_0(g,C^{*\circ}).$$ Next, we observe that $V$ embeds naturally into the space $$\bigoplus_{C\subseteq K} \Lambda^* N_\CC \otimes (\CC[C]\otimes \CC[C^{*}])$$ as the subspace of the elements compatible with the restriction maps.
This defines a map $$\beta:H\to\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes R_0(f,C)\otimes R_0(g,C^{*})=:H_2.$$ We observe that the composition $\beta\circ\alpha$ is precisely the map induced by embeddings $C^\circ\subseteq C$ and $C^{*\circ}\subseteq C^*$, so its image in $H_2$ is $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*}).$$ As a result, what we need to show is that $\alpha$ is surjective and $\beta$ is injective. We cannot do this directly; instead, we will use spectral sequences associated with two natural filtrations on $V$. First, consider the filtration $$V=V^0\supset V^1\supset \ldots \supset V^{d+1}\supset V^{d+2}=0$$ where $V^p$ is defined as $\Lambda^*N_\CC$ tensored with the span of all monomials $[m,n]$ for which the smallest face of $K$ that contains $m$ has dimension at least $p$. It is easy to see that the spectral sequence of this filtration starts with $$H_3:=\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes R_0(f,C^\circ)\otimes R_0(g,C^{*}).$$ Analogously, we have a spectral sequence from $$H_4:=\bigoplus_{C\subseteq K}\Lambda^{\dim C^{*}}C^*_\CC \otimes R_0(f,C)\otimes R_0(g,C^{*\circ})$$ to $H$, which gives us the following diagram. $$\begin{array}{ccccc} & & H_3 & & \\ & \nearrow & \Downarrow& \searrow & \\ H_1& \rightarrow & H & \rightarrow &H_2 \\ & \searrow & \Uparrow & \nearrow & \\ & & H_4 & & \end{array}$$ We remark that the spectral sequences mean that $H$ is a subquotient of both $H_3$ and $H_4$, i.e. there are subspaces $I_3^+$ and $I_3^-$ of $H_3$ such that $H\simeq I_3^+/I_3^-$, and similarly for $H_4$.
Moreover, the above diagram induces commutative diagrams $$\begin{array}{ccccccccccc} & & 0 & & &\hspace{20pt} & & & 0 & & \\ & & \downarrow& & &\hspace{20pt} & & & \uparrow & & \\ & & I_3^- & & &\hspace{20pt} & H_1& \rightarrow & H & \rightarrow &H_2 \\ & & \downarrow& & &\hspace{20pt} & & \searrow & \uparrow & \nearrow & \\ & & I_3^+ & & &\hspace{20pt} & & & I_4^+ & & \\ & \nearrow & \downarrow& \searrow & &\hspace{20pt} & & & \uparrow & & \\ H_1& \rightarrow & H & \rightarrow &H_2&\hspace{20pt} & & & I_4^- & & \\ & & \downarrow& & &\hspace{20pt} & & & \uparrow & & \\ & & 0 & & &\hspace{20pt} & & & 0 & & \\ \end{array}$$ with exact vertical lines. Indeed, the filtration $V^*$ induces a filtration on the subspace of $V$ $$\bigoplus_{C\subseteq K} \Lambda^* N_\CC\otimes\CC[C^\circ]\otimes \CC[C^{*\circ}].$$ The resulting spectral sequence degenerates immediately, and the functoriality of spectral sequences ensures that there are maps from $H_1$ as above. Similarly, the space $$\bigoplus_{C\subseteq K} \Lambda^* N_\CC\otimes\CC[C]\otimes \CC[C^{*}]$$ has a natural filtration by the dimension of $C$ that induces the filtration on $V$. Functoriality then gives the maps to $H_4$. We immediately get $$Im(\beta) \subseteq Im(H_3\to H_2) \cap Im(H_4\to H_2)$$ which implies that $$Im(\beta) = Im(\beta\circ\alpha)= \bigoplus_{C\subseteq K}\Lambda^{\dim C^*} C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*}).$$ Analogously, $Ker(\alpha)=Ker(\beta\circ\alpha)$, which shows that $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*} C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*})$$ is a direct summand of $H$.
The fact that $$Ker(\alpha)\supseteq Ker(H_1\to H_4)$$ $$= \bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes Ker(R_0(f,C^\circ)\to R_0(f,C))\otimes R_0(g,C^{*\circ})$$ implies that $I_3^-$ contains the image of this space under $H_1\to H_3$, which is equal to $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes Ker(R_0(f,C^\circ)\to R_0(f,C))\otimes R_1(g,C^{*}).$$ Similarly, $I_3^+$ is contained in the preimage of $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*})$$ under $H_3\to H_2$, which is $$\bigoplus_{C\subseteq K}\Lambda^{\dim C^*}C^*_\CC \otimes R_0(f,C^\circ)\otimes R_1(g,C^{*}).$$ As a result, $$H=\bigoplus_{C\subseteq K}\Lambda^{\dim C^*} C^*_\CC \otimes R_1(f,C)\otimes R_1(g,C^{*}).$$ If one replaces $\CC[K^*]$ by $\CC[K^*]^\Sigma$ in the definition of $\CC[L]$, then the statement and the proof of Theorem \[Dcoh\] remain intact. In addition, one can make a similar statement after replacing $\Lambda^*N_\CC$ by $\Lambda^*M_\CC$ and switching contraction and exterior multiplication in the definition of $D$. It is easy to see that the resulting complex is basically identical, though various gradings are switched. This should correspond to a switch between $A$ and $B$ models. We will now briefly outline the connection between Theorem \[Dcoh\] and the vertex algebra approach to mirror symmetry, developed in [@Bvertex] and further explored in [@MS]. The vertex algebra that corresponds to the N=2 superconformal field theory is expected to be the cohomology of a lattice vertex algebra ${\rm Fock}_{M\oplus N}$, built out of $M\oplus N$, by a certain differential $D_{f,g}$ that depends on the defining equations $f$ and $g$ of a mirror pair. The space $\Lambda^*(N_\CC)\otimes \CC[L]$ corresponds to a certain subspace of ${\rm Fock}_{M\oplus N}$ such that the restriction of $D_{f,g}$ to this subspace coincides with the differential $D$ of Theorem \[Dcoh\]. 
We cannot yet show that this is precisely the chiral ring of the vertex algebra, so the connection to vertex algebras needs to be explored further. Appendix. G-polynomials ======================= A finite graded partially ordered set is called Eulerian if each of its nontrivial intervals contains equal numbers of elements of even and odd rank. We often consider the poset of faces of the Gorenstein cone $K$ over a reflexive polytope $\Delta$ with respect to inclusions. This is an Eulerian poset with the grading given by the dimension of the face. The minimum and maximum elements of a poset are commonly denoted by $\hat 0$ and $\hat 1$. [@stanley1] [Let $P = \lbrack \hat{0}, \hat{1} \rbrack$ be an Eulerian poset of rank $d$. Define two polynomials $G(P,t)$, $H(P,t) \in {\ZZ} [t]$ by the following recursive rules: $$G(P,t) = H(P,t) = 1\;\; \mbox{\rm if $d =0$};$$ $$H(P,t) = \sum_{ \hat{0} < x \leq \hat{1}} (t-1)^{\rho(x)-1} G(\lbrack x,\hat{1}\rbrack, t)\;\; (d>0),$$ $$G(P,t) = \tau_{ < d/2 } \left( (1-t)H(P,t) \right) \;\;( d>0),$$ where $\tau_{ < r }$ denotes the truncation operator ${\ZZ}\lbrack t \rbrack \rightarrow {\ZZ}\lbrack t \rbrack$ which is defined by $$\tau_{< r} \left( \sum_i a_it^i \right) = \sum_{i < r} a_it^i.$$]{} \[Gpoly\] The following lemma will be extremely useful. For every Eulerian poset $P=[\hat 0,\hat 1]$ of positive rank there holds $$\sum_{\hat 0\leq x\leq \hat 1}(-1)^{\rk[\hat 0,x]} G([\hat 0,x]^*,t) G([x,\hat 1],t) =\sum_{\hat 0\leq x\leq \hat 1} G([\hat 0,x],t)G([x,\hat 1]^*,t)(-1)^{\rk[x,\hat 1]} =0$$ where $()^*$ denotes the dual poset. In other words, $G(\_,t)$ and $(-1)^{\rk} G(\_^*,t)$ are inverses of each other in the algebra of functions on posets with the convolution product. \[Ginverse\] [*Proof.*]{} See Corollary 8.3 of [@stanley]. The following polynomial invariants of Eulerian posets have been introduced in [@bb]. \[Q\] Let $P$ be an Eulerian poset of rank $d$. 
Define the polynomial $B(P; u,v) \in {\ZZ}[ u,v]$ by the following recursive rules: $$B(P; u,v) = 1\;\; \mbox{\rm if $d =0$},$$ $$\sum_{\hat{0} \leq x \leq \hat{1}} B(\lbrack \hat{0}, x \rbrack; u,v) u^{d - \rho(x)} G(\lbrack x , \hat{1}\rbrack, u^{-1}v) = G(P ,uv).$$ \[BfromG\] Let $P=[\hat 0,\hat 1]$ be an Eulerian poset. Then $$B(P;u,v) = \sum_{\hat 0\leq x \leq \hat 1} G([x,\hat 1]^*,u^{-1}v) (-u)^{\rk \hat 1 -\rk x} G([\hat 0,x],uv).$$ Indeed, one can sum the recursive formulas for $B([\hat 0, y])$ for all $\hat 0\leq y\leq\hat 1$ multiplied by $G([y,\hat 1]^*,u^{-1}v) (-u)^{\rk \hat 1 -\rk y}$ and use Lemma \[Ginverse\]. [BaBrFK]{} \[AKaMW\][AKMW]{} D. Abramovich, K. Karu, K. Matsuki, J. Włodarczyk, [*Torification and Factorization of Birational Maps*]{}, preprint math.AG/9904135. \[BaBrFK\][bbfk]{} G. Barthel, J.-P. Brasselet, K.-H. Fieseler, L. Kaup, [*Combinatorial Intersection Cohomology for Fans*]{}, preprint math.AG/0002181. \[B1\][b1]{} V. V. Batyrev, [*Variations of the mixed Hodge structure of affine hypersurfaces in algebraic tori*]{}, Duke Math. J. [**69**]{} (1993), 349–409. \[B2\][b2]{} , [*Dual polyhedra and mirror symmetry for Calabi-Yau hypersurfaces in toric varieties*]{}, J. Algebraic Geometry [**6**]{} (1994), 493–535. \[B3\][Batyrev.cangor]{} , [*Stringy Hodge numbers of varieties with Gorenstein canonical singularities*]{}, Integrable systems and algebraic geometry (Kobe/Kyoto, 1997), 1–32, World Sci. Publishing, River Edge, NJ, 1998. \[B4\][Batyrev.nai]{} , [*Non-Archimedean integrals and stringy Euler numbers of log-terminal pairs*]{}, J. Eur. Math. Soc. (JEMS) [**1**]{} (1999), no. 1, 5–33. \[BBo\][bb]{} V. V. Batyrev, L. A. Borisov, [*Mirror duality and string-theoretic Hodge numbers*]{}, Invent. Math. [**126**]{} (1996), 183–203. \[BC\][bc]{} V. V. Batyrev, D. A. Cox, [*On the Hodge structure of projective hypersurfaces in toric varieties*]{}, Duke Math. J. [**75**]{} (1994), 293–338. \[BDa\][bd]{} V. V. Batyrev, D. 
Dais, [*Strong McKay correspondence, string-theoretic Hodge numbers and mirror symmetry*]{}, Topology [**35**]{} (1996), 901–929. \[Bo1\][Bor.locstring]{} L. A. Borisov, [*String cohomology of a toroidal singularity*]{}, J. Algebraic Geom. [**9**]{} (2000), no. 2, 289–300. \[Bo2\][Bvertex]{} , [*Vertex Algebras and Mirror Symmetry*]{}, Comm. Math. Phys. [**215**]{} (2001), no. 3, 517–557. \[BreL\][bl]{} P. Bressler, V. Lunts, [*Intersection cohomology on nonrational polytopes*]{}, preprint math.AG/0002006. \[C1\][c]{} D. A. Cox, [*The homogeneous coordinate ring of a toric variety*]{}, J. Algebraic Geom. [**4**]{} (1995), 17–50. \[C2\][c2]{} , [*Recent developments in toric geometry*]{}, in Algebraic Geometry (Santa Cruz, 1995), Proceedings of Symposia in Pure Mathematics, [**62**]{}, Part 2, Amer. Math. Soc., Providence, 1997, 389–436. \[CKat\][ck]{} D. A. Cox, S. Katz, [*Algebraic Geometry and Mirror Symmetry*]{}, Math. Surveys Monogr. [**68**]{}, Amer. Math. Soc., Providence, 1999. \[ChR\][cr]{} W. Chen, Y. Ruan, [*A new cohomology theory for orbifold*]{}, preprint math.AG/0004129. \[D\][d]{} V. I. Danilov, [*The geometry of toric varieties*]{}, Russian Math. Surveys [**33**]{} (1978), 97–154. \[DiHVW\][dhvw]{} L. Dixon, J. Harvey, C. Vafa, E. Witten, [*Strings on orbifolds I, II*]{}, Nucl. Physics, [**B261**]{} (1985), [**B274**]{} (1986). \[DKh\][dk]{} V. Danilov, A. Khovanskii, [*Newton polyhedra and an algorithm for computing Hodge-Deligne numbers*]{}, Math. USSR-Izv. [**29**]{} (1987), 279–298. \[E\][e]{} D. Eisenbud, [*Commutative algebra with a view toward algebraic geometry*]{}, Graduate Texts in Mathematics [**150**]{}, Springer-Verlag, New York, 1995. \[F\][f]{} W. Fulton, [*Introduction to toric varieties*]{}, Princeton Univ. Press, Princeton, NJ, 1993. \[G\][Greene]{}B. R. Greene, [*String theory on Calabi-Yau manifolds*]{}, Fields, strings and duality (Boulder, CO, 1996), 543–726, World Sci. Publishing, River Edge, NJ, 1997. \[KoMo\][Kollar]{} J. 
Kollár, S. Mori, [*Birational geometry of algebraic varieties*]{}, With the collaboration of C. H. Clemens and A. Corti. Cambridge Tracts in Mathematics [**134**]{}. Cambridge University Press, Cambridge, 1998. \[MaS\][MS]{} F. Malikov, V. Schechtman, [*Deformations of chiral algebras and quantum cohomology of toric varieties*]{}, preprint math.AG/0003170. \[Mat\][ma]{} H. Matsumura, [*Commutative ring theory.*]{} Translated from the Japanese by M. Reid. Second edition. Cambridge Studies in Advanced Mathematics, 8. Cambridge University Press, Cambridge, 1989. \[M1\][m1]{} A. R. Mavlyutov, [*Semiample hypersurfaces in toric varieties*]{}, Duke Math. J. [**101**]{} (2000), 85–116. \[M2\][m2]{} , [*On the chiral ring of Calabi-Yau hypersurfaces in toric varieties*]{}, preprint math.AG/0010318. \[M3\][m3]{} , [*The Hodge structure of semiample hypersurfaces and a generalization of the monomial-divisor mirror map*]{}, in Advances in Algebraic Geometry Motivated by Physics (ed. E. Previato), Contemporary Mathematics, [**276**]{}, 199–227. \[P\][p]{} M. Poddar, [*Orbifold Hodge numbers of Calabi-Yau hypersurfaces*]{}, preprint math.AG/0107152. \[S1\][stanley1]{} R. Stanley, [*Generalized $H$-vectors, Intersection Cohomology of Toric Varieties, and Related Results*]{}, Adv. Stud. in Pure Math. [**11**]{} (1987), 187–213. \[S2\][stanley]{} , [*Subdivisions and local h-vectors*]{}, J. Amer. Math. Soc. [**5**]{} (1992), 805–851.
[M. Nieto-Vesperinas[^1] and J. R. Arias-González[^2]]{} [*$^1$Instituto de Ciencia de Materiales de Madrid, CSIC*]{} [*$^2$Instituto Madrileño de Estudios Avanzados en Nanociencia*]{} [*Cantoblanco, 28049 Madrid, Spain.*]{} Introduction {#sec:introduction} ============ The purpose of this report is to present the theoretical foundations of the interaction of evanescent fields with an object. Evanescent electromagnetic waves are inhomogeneous components of the near field, bound to the surface of the scattering object. These modes travel along the illuminated sample surface and decay exponentially away from it [@bornwolf99ch11; @nieto-vesperinas91; @mandel95], [*e.g.*]{}, in the form of lateral waves [@tamir72a; @tamir72b] created by total internal reflection (TIR) at flat dielectric interfaces, of whispering–gallery modes in dielectric tips and particles [@hill88; @owen81; @benincasa87; @collot93; @knight95; @weiss95; @nieto-vesperinas96], or of plasmon polaritons [@raether88] at corrugated metallic interfaces (see Section \[sec:PFM\]). The force exerted by these evanescent waves on particles near the surface is of interest for several reasons. On the one hand, evanescent waves convey high resolution of the scattered field signal, beyond the half-wavelength limit. This is the essence of near–field scanning optical microscopy, usually abbreviated NSOM [@pohl93; @paesler96]. These fields may present large concentrations and intensity enhancements in subwavelength regions near tips, thus giving rise to large gradients that produce enhanced trapping forces, which may enable one to handle particles within nanometric distances [@novotny97]. In addition, the large contribution of evanescent waves to the near field is the basis of the high resolution of signals obtained by transducing the force due to these waves on particles over surfaces when such particles are used as probes. 
On the other hand, evanescent waves have been used both to control the position of a particle above a surface and to estimate the interaction (colloidal force) between such a particle and the surface (see Chapter 6) [@sasaki97; @clapp99; @dogariu00]. The first experimental observation demonstrating the mechanical action of a single evanescent wave ([*i.e*]{}., of the lateral wave produced by total internal reflection at a sapphire–water interface) on microspheres immersed in water over a dielectric surface was made in [@kawata92]. Further experiments, either over waveguides [@kawata96] or attaching the particle to the cantilever of an atomic force microscope (AFM) [@vilfan98], aimed at estimating the magnitude of this force. The scattering of an evanescent electromagnetic wave by a dielectric sphere has been investigated by several authors using Mie’s scattering theory (addressing scattering cross sections [@chew79] and electromagnetic forces [@almaas95]), as well as using ray optics [@prieve93; @walz99]. In particular, [@walz99] made a comparison with [@almaas95]. However, no direct comparison of either theoretical work with experimental results has been carried out yet, likely owing to the lack so far of accurate, well-characterized and controlled experimental estimations of these TIR-induced forces. In fact, to get an idea of the difficulties involved in obtaining accurate experimental data, one should consider the fluctuations of the particle position in its liquid environment, due to both Brownian movement and drift microcurrents, as well as the masking produced by the friction and van der Waals forces between particle and surface [@vilfan98; @almaas95]. This has led so far to discrepancies between experiment and theory. 
In the next section we shall address the effect of these forces on particles from the point of view of the dipolar approximation, which is of considerable interpretative value in understanding the contributions of the horizontal and vertical forces. Then we shall show how the multiple scattering of waves between the surface and the particle introduces important modifications of the above-mentioned forces, both for larger particles and when they are very close to substrates. Further, we shall investigate the interplay of these forces when there is slight corrugation in the surface profile. In that case the contribution of the evanescent waves created under total internal reflection, while still important, shares its effects with radiative propagating components that exert repulsive scattering forces. Even so, the particle can be used in these cases as a scanning probe that transduces this force in a photonic force microscopy operation. Force on a Small Particle: The Dipolar Approximation {#sec:dipapprox} ==================================================== Small polarizable particles, namely, those with radius $a\ll \lambda $, in the presence of an electromagnetic field experience a Lorentz force [@gordon73]: $$\label{eq:lorentz} {\bf F}=({\vec {\wp}} \cdot \nabla) {\vec{{\cal E}}}+ \frac{1}{c}\frac{ \partial {\vec \wp}}{\partial t} \times {\vec{{\cal B}}}.$$ In Equation (\[eq:lorentz\]) ${\vec{\wp}}$ is the induced dipole moment density of the particle, and ${\vec{{\cal E}}}$, ${\vec{{\cal B}}}$ are the electric and magnetic vectors, respectively. At optical frequencies, used in most experiments, the observed magnitude of the electromagnetic force is the time–averaged value. 
Let the electromagnetic field be time–harmonic, so that ${\vec{{\cal E}}}({\bf r} ,t)=\Re e \{ {\bf E}({\bf r})\exp (-i\omega t) \}$, ${\vec{{\cal B}}}({\bf r },t)=\Re e \{ {\bf B}({\bf r})\exp (-i\omega t) \}$, ${\vec{\wp}}({\bf r} ,t)=\Re e \{ {\bf p}({\bf r})\exp (-i\omega t) \}$; ${\bf E}({\bf r})$, ${\bf B}({\bf r})$ and ${\bf p}({\bf r})$ being complex functions of position, and $\Re e$ denoting the real part. Then, the time–averaged Lorentz force over a time interval $T$ large compared to $2\pi /\omega$ [@bornwolf99pp34] is $$\langle {\bf F}({\bf r})\rangle =\frac{1}{4T}\int_{-T/2}^{T/2}dt\left[ {{{{{( {\bf p}+{\bf p}^{\ast })\cdot \nabla ({\bf E}+{\bf E}^{\ast })+\frac{1}{c} \left( \frac{\partial {\bf p}}{\partial t}+\frac{\partial {\bf p}^{\ast }}{ \partial t}\right) \times ({\bf B}+{\bf B}^{\ast })}}}}}\right] , \label{eq:averaging}$$ where $\ast $ denotes complex conjugate. On substituting in Equation (\[eq:averaging\]) ${\bf E}$, ${\bf B}$, and ${\bf p}$ by their time-harmonic expressions given above and performing the integral, one obtains for the $j$th Cartesian component of the force $$\langle F_{j}({\bf r})\rangle =\frac{1}{2}\Re e \left\{ p_{k}\frac{\partial E_{j}^{\ast }({\bf r})}{\partial x_{k}}+\frac{1}{c}\epsilon _{jkl}\frac{ \partial p_{k}}{\partial t}B_{l}^{\ast }\right\} . \label{eq:chaumetinicio}$$ In Equation (\[eq:chaumetinicio\]) $j=1,2,3$, and $\epsilon _{jkl}$ is the completely antisymmetric Levi–Civita tensor. On using the Maxwell equation ${\bf B} =(c/i\omega )\nabla \times {\bf E}$ and the relationships ${\bf p}=\alpha {\bf E}$ and $\partial {\bf p}/\partial t=-i\omega {\bf p}$, $\alpha $ being the particle polarizability, Equation (\[eq:chaumetinicio\]) transforms into $$\langle F_{j}({\bf r})\rangle =\frac{1}{2}\Re e \left\{ {{{{{\alpha \left( E_{k} \frac{\partial E_{j}^{\ast }({\bf r})}{\partial x_{k}}+\epsilon _{jkl}~\epsilon _{lmn}E_{k}\frac{\partial E_{n}^{\ast}}{\partial x_{m}} \right) }}}}}\right\} . 
\label{eq:chaumetmedio}$$ Since $\epsilon _{jkl}\epsilon _{lmn}=\delta _{jm}\delta _{kn}-\delta _{jn}\delta _{km}$, one can finally express the time–averaged Lorentz force on the small particle as [@chaumet00c] $$\langle F_{j}({\bf r})\rangle =\frac{1}{2}\Re e \left\{ {{{{{\alpha E_{k}\frac{ \partial E_{k}^{\ast }({\bf r})}{\partial x_{j}}}}}}}\right\} . \label{eq:chaumetfin}$$ Equation (\[eq:chaumetfin\]) constitutes the expression of the time–averaged force on a particle in an arbitrary time–harmonic electromagnetic field. For a dipolar particle, the polarizability is [@draine88] $$\alpha =\frac{\alpha _{0}}{1-\frac{2}{3}ik^{3}\alpha _{0}}. \label{eq:alfa}$$ In Equation (\[eq:alfa\]) $\alpha _{0}$ is given by $\alpha _{0}=a^{3}(\epsilon -1)/(\epsilon +2)$, $\epsilon=\epsilon_2/\epsilon_0$ being the dielectric permittivity contrast between the particle, $\epsilon_2$, and the surrounding medium, $\epsilon_0$; and $k=\sqrt{\epsilon_0} k_0$, $k_0=\omega /c$. For $ka\ll 1$, one can approximate $\alpha $ by $\alpha =\alpha _{0}(1+\frac{2}{3}ik^{3}|\alpha _{0}|^{2})$. The imaginary part in this expression of $\alpha $ constitutes the radiation–reaction term. If the light field can be expressed in its paraxial form ([*e.g*]{}., if it is a beam or a plane wave, either propagating or evanescent), so that it has a main propagation direction along ${\bf k}$, the light electric vector will be described by $${\bf E}({\bf r})={\bf E}_{0}({\bf r})\exp (i{\bf k}\cdot {\bf r}). \label{eq:oplana}$$ Substituting Equation (\[eq:oplana\]) into Equation (\[eq:chaumetfin\]), one obtains for the force $$\langle {\bf F}\rangle =\frac{1}{4}\Re e \{ \alpha \} \nabla |{\bf E} _0|^{2}+\frac{1}{2}{\bf k}\Im m \{ \alpha \} |{\bf E}_{0}|^{2}-\frac{1}{2} \Im m \{ \alpha \} \Im m \{ {\bf E}_{0}\cdot \nabla {\bf E}_{0}^{\ast }\}, \label{eq:finforce}$$ where $\Im m$ denotes imaginary part. 
The first term is the gradient force acting on the particle, whereas the second term represents the radiation pressure contribution to the scattering force which, on substituting the above approximation for $\alpha $, namely, $\alpha =\alpha _{0}(1+\frac{2}{3} ik^{3}|\alpha _{0}|^{2})$, can also be expressed for a Rayleigh particle ($ka\ll 1$) as [@vandehulst81] $(|{\bf E}_{0}|^{2}/8\pi )C{\bf k}/k$, where $C$ is the particle scattering cross section, given by $C=(8/3)\pi k^{4}|\alpha _{0}|^{2}$. Notice that the last term of Equation (\[eq:finforce\]) is zero only when either $\alpha $ or ${\bf E}_{0}$ is real. (This is the case for a plane propagating or evanescent wave but not, in general, for a beam.) Force on a Dipolar Particle due to an Evanescent Wave {#sec:dipevanescent} ===================================================== Let the small particle be exposed to the electromagnetic field of an evanescent wave, whose electric vector is ${\bf E}={\bf T}\exp (-qz)\exp (i {\bf K}\cdot {\bf R})$, where we have written ${\bf r}=({\bf R},z)$ and ${\bf k}=({\bf K},k_{z})$, ${\bf K}$ and $k_{z}$ satisfying $K^{2}+k_{z}^{2}=k^{2}$, $k^{2}=\omega ^{2}\epsilon _{0}/c^{2}$, with $k_{z}=iq=i\sqrt{K^{2}-k_{0}^{2}}$. This field is created under total internal reflection at a flat interface ($z=$ constant, below the particle) between two media of dielectric permittivity ratio $\epsilon_0/\epsilon_1$ (see also inset of Figure \[fig:dielec\](a)). The incident wave, either $s$ or $p$ polarized ([*i.e*]{}., with the electric vector perpendicular to or contained in the plane of incidence: the plane formed by the incident wavevector ${\bf k} _{i}$ at the interface and the surface normal $\hat{z}$), enters from the denser medium at $z<0$. The particle is in the medium at $z>0$. Without loss of generality, we shall choose the incidence plane as $OXZ$, so that ${\bf K} =(K,0)$. 
Let $T_{\perp }$ and $T_{\parallel }$ be the transmitted amplitudes into $z>0$ for $s$ and $p$ polarizations, respectively. The electric vector is $${\bf E}=(0,1,0){T_{\perp }}\exp (iKx)\exp (-qz), \label{eq:evanescent1}$$ for $s$ polarization, and $${\bf E}=(-iq,0,K)\frac{T_{\parallel }}{k}\exp (iKx)\exp (-qz), \label{eq:evanescent2}$$ for $p$ polarization. By introducing the above expressions for the electric vector ${\bf E}$ into Equation (\[eq:finforce\]), we readily obtain the average total force on the particle, split into the scattering and gradient forces. The scattering force is contained in the $OXY$–plane (that is, the plane containing the propagation wavevector of the evanescent wave), namely, $$\label{eq:evanescent3} \langle F_x \rangle = \frac{|T|^2}{2} K \Im m \{ \alpha \} \exp(-2qz).$$ For the gradient force, which is purely directed along $OZ$, one has $$\label{eq:evanescent4} \langle F_z \rangle = -\frac{|T|^2}{2}q \Re e \{ \alpha \} \exp(-2qz).$$ In Equations (\[eq:evanescent3\]) and (\[eq:evanescent4\]) $T$ stands for either $T_{\perp }$ or $T_{\parallel }$, depending on whether the polarization is $s$ or $p$, respectively. 
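Equations (\[eq:evanescent3\]) and (\[eq:evanescent4\]) are easy to evaluate numerically. The short Python sketch below (added here for illustration; it is not part of the original text) does so for a lossless glass sphere of radius $a=60$ $nm$ in the glass–air TIR geometry considered later, with the simplifying assumption $|T|^{2}=1$ and lengths measured in $nm$:

```python
import math

# Sketch of Eqs. (evanescent3)-(evanescent4):
#   <F_x> =  (|T|^2/2) K Im{alpha} exp(-2 q z)
#   <F_z> = -(|T|^2/2) q Re{alpha} exp(-2 q z)
# Glass sphere (eps = 2.25, a = 60 nm) above a glass-air interface,
# theta_0 = 42 deg, lambda = 632.8 nm.  |T|^2 = 1 is an illustrative
# assumption; lengths in nm, so the force comes out in arbitrary units.

def evanescent_forces(alpha, K, q, z, T2=1.0):
    """Return (<F_x>, <F_z>) on a dipolar particle in a single evanescent wave."""
    damp = math.exp(-2.0 * q * z)
    return 0.5 * T2 * K * alpha.imag * damp, -0.5 * T2 * q * alpha.real * damp

lam, a, eps, eps1 = 632.8, 60.0, 2.25, 2.25
k0 = 2.0 * math.pi / lam                      # vacuum wavenumber above the interface
K = math.sqrt(eps1) * k0 * math.sin(math.radians(42.0))
q = math.sqrt(K**2 - k0**2)                   # decay constant, k_z = i q
a0 = a**3 * (eps - 1.0) / (eps + 2.0)         # static polarizability
alpha = a0 / (1.0 - (2j / 3.0) * k0**3 * a0)  # Eq. (alfa), radiation reaction

Fx, Fz = evanescent_forces(alpha, K, q, z=10.0)
```

Since $\Re e\{\alpha\}>0$ for the glass sphere, $\langle F_z\rangle<0$ (attraction towards the surface), while the radiation–reaction part of $\Im m\{\alpha\}$ yields $\langle F_x\rangle>0$ (pushing along $K$); both components decay as $\exp(-2qz)$.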
For an absorbing particle, on introducing Equation (\[eq:alfa\]) for $\alpha$ into Equations (\[eq:evanescent3\]) and (\[eq:evanescent4\]), one gets for the scattering force $$\label{eq:fx1} \langle F_x \rangle = \frac{|T|^2}{2} K \exp(-2qz)\frac{\Im m \{ \alpha_0 \} +(2/3)k^3 |\alpha_0|^2}{1+(4/9)k^6|\alpha_0|^2},$$ and for the gradient force $$\label{eq:fz1} \langle F_z \rangle = -\frac{|T|^2}{2}q \frac{\Re e \{ \alpha_0 \} }{1+(4/9)k^6| \alpha_0|^2} \exp(-2qz).$$ It should be remarked that, except near resonances, $\Im m \{ \alpha _{0} \}$ is in general a positive quantity; therefore the scattering force in Equation (\[eq:fx1\]) is positive in the propagation direction $K$ of the evanescent wave, thus pushing the particle parallel to the surface, whereas the gradient force, Equation (\[eq:fz1\]), is negative or positive along $OZ$, attracting the particle towards the surface or repelling it, according to whether $\Re e \{ \alpha \} > 0$ or $\Re e \{ \alpha \} < 0$. The magnitudes of these forces increase as the distance to the interface decreases, and they are larger for $p$ polarization: for $s$ polarization the dipoles induced by the electric vector at both the particle and the surface are oriented parallel to each other (along $OY$), resulting in a smaller interaction than when these dipoles are induced in the $OXZ$–plane ($p$ polarization) [@alonsofinn68]. In particular, if $ka\ll 1$, Equation (\[eq:fx1\]) becomes $$\label{eq:fx2} \langle F_x \rangle = \frac{|T|^2}{2} K \exp(-2qz) \left[a^3 \Im m \left\{ \frac{\epsilon-1} {\epsilon+2}\right \} + \frac{2}{3}k^3 a^6 \left |\frac{ \epsilon-1}{\epsilon+2} \right|^2 \right].$$ The first term of Equation (\[eq:fx2\]) is the radiation pressure of the evanescent wave on the particle due to absorption, whereas the second term corresponds to scattering. 
This expression can be rewritten as $$\label{eq:fx3} \langle F_x \rangle = \frac{|T|^2}{8 \pi} \frac{K}{k} \exp(-2qz)~C_{ext}, $$ where the particle extinction cross section $C_{ext}$ has been introduced as $$\label{eq:eficaz} C_{ext}=4\pi k a^3 \Im m \left \{ \frac{\epsilon-1}{\epsilon+2} \right\} +\frac{8 \pi}{3}k^4 a^6 \left |\frac{\epsilon-1}{\epsilon+2} \right|^2.$$ Notice that Equation (\[eq:eficaz\]) coincides with the value obtained from Mie’s theory for small particles in the low–order expansion of the extinction cross section in the size parameter $ka$ [@vandehulst81]. Although the above equations account neither for multiple scattering, as described by Mie’s theory for larger particles, nor for multiple interactions of the wave between the particle and the dielectric surface, they are useful for understanding the fundamentals of the force effects induced by a single evanescent wave on a particle. It should be remarked, however, that, as shown at the end of this section, once Mie’s theory becomes necessary, multiple scattering with the surface demands that its contribution be taken into account. ![ Forces in the $Z$ direction and in the $X$ direction (as insets) acting on a sphere with radius $a=60$ $nm$, in the dipolar approximation. The angle of incidence is $\protect\theta _0=42^o$, larger than the critical angle $\protect\theta _c=41.8^o$ (for the glass–air interface). $\protect \lambda=632.8$ $nm$. Solid lines: $s$ polarization, dashed lines: $p$ polarization. The sphere material is: (a): glass, (b): silicon, (c): gold.[]{data-label="fig:dipolo"}](dipolo.eps){width="\linewidth"} Figure \[fig:dipolo\] shows the evolution of the scattering and gradient forces on three kinds of particles, namely, glass ($\epsilon_2 =2.25$), silicon ($\epsilon_2 =15+i0.14$) and gold ($\epsilon_2 =-5.65+i0.75$), all of radius $a=60$ $nm$, as functions of the gap distance $d$ between the particle and the surface at which the evanescent wave is created. 
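As a quick consistency check (added; not part of the original text), one can verify numerically that Equation (\[eq:fx2\]) and Equation (\[eq:fx3\]), with $C_{ext}$ taken from Equation (\[eq:eficaz\]), are the same expression; the sketch below uses arbitrary illustrative parameters and $|T|^{2}=1$:

```python
import math

# Numerical confirmation that Eq. (fx2) and Eq. (fx3) with the extinction
# cross section of Eq. (eficaz) agree term by term.
# m = (eps - 1)/(eps + 2); gold-like eps at lambda = 632.8 nm, lengths in nm.

def fx_v2(T2, K, q, z, k, a, eps):
    """<F_x> from Eq. (fx2)."""
    m = (eps - 1) / (eps + 2)
    return 0.5 * T2 * K * math.exp(-2.0 * q * z) * (
        a**3 * m.imag + (2.0 / 3.0) * k**3 * a**6 * abs(m)**2)

def c_ext(k, a, eps):
    """Extinction cross section, Eq. (eficaz)."""
    m = (eps - 1) / (eps + 2)
    return (4.0 * math.pi * k * a**3 * m.imag
            + (8.0 * math.pi / 3.0) * k**4 * a**6 * abs(m)**2)

def fx_v3(T2, K, q, z, k, a, eps):
    """<F_x> from Eq. (fx3)."""
    return (T2 / (8.0 * math.pi)) * (K / k) * math.exp(-2.0 * q * z) * c_ext(k, a, eps)

k = 2.0 * math.pi / 632.8   # nm^-1
params = dict(T2=1.0, K=1.004 * k, q=0.086 * k, z=10.0, k=k, a=60.0,
              eps=-5.65 + 0.75j)
```

Both routines return the same value (to rounding), confirming the algebra that leads from Equation (\[eq:fx2\]) to Equation (\[eq:eficaz\]).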
The illuminating evanescent wave is due to refraction of an incident plane wave of power $P=\frac{c\sqrt{\epsilon_1 }}{8\pi }|A|^{2}=1.9\times 10^{-2}$ $mW/\mu m^{2}$, equivalent to $150$ $mW$ over a circular section of radius $50$ $\mu m$, on a glass–air interface at angle of incidence $\theta _{0}=42^{o} $ and $\lambda =632.8$ $nm$ (the critical angle is $\theta _{c}=41.8^{o}$), both at $s$ and $p$ polarization (electric vector perpendicular and parallel, respectively, to the incidence plane at the glass–air interface, namely, $|T_{\perp }|^{2}=4\epsilon_1 \cos ^{2}\theta _{0}|A|^{2}/(\epsilon_1 -1) $, $|T_{\parallel }|^{2}=4\epsilon_1 \cos ^{2}\theta _{0}|A|^{2}/[(\epsilon_1 -1)((1+\epsilon_1 )\sin ^{2}\theta _{0}-1)]$). These values of forces are consistent with the magnitudes obtained on similar particles by applying Maxwell’s stress tensor (to be discussed in Section \[sec:CDM\]) via Mie’s scattering theory [@almaas95]. However, as shown in the next section, as the size of the particle increases, the multiple interaction of the illuminating wave between the particle and the substrate cannot be neglected. Therefore, the above results, although of interpretative value, should be taken with care at distances smaller than $10$ $nm$, since in that case multiple scattering makes the force stronger. This will be seen next. Influence of Interaction with the Substrate {#sec:substrate} =========================================== Among the several studies on forces of evanescent waves over particles, there are several models which calculate the forces from Maxwell’s stress tensor on using Mie’s theory to determine the scattered field in and around the sphere, without, however, taking into account the multiple scattering between the interface at which the evanescent wave is created and the sphere [@almaas95; @chang94; @walz99]. We shall next see that, except at certain distances, this multiple interaction cannot be neglected. 
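The transmitted intensities $|T_{\perp }|^{2}$ and $|T_{\parallel }|^{2}$ quoted above can be cross-checked against the standard Fresnel amplitude transmission coefficients continued beyond the critical angle; the sketch below (added for illustration, with $|A|=1$ and a dielectric–vacuum interface assumed) confirms the agreement:

```python
import cmath
import math

# Cross-check of the quoted transmitted intensities,
#   |T_perp|^2 = 4 eps1 cos^2(t0) / (eps1 - 1),
#   |T_par |^2 = 4 eps1 cos^2(t0) / [(eps1 - 1)((1 + eps1) sin^2(t0) - 1)],
# against the Fresnel amplitude coefficients past the critical angle of a
# dielectric-vacuum interface (n2 = 1), where cos(theta_t) becomes imaginary.

def T2_quoted(eps1, theta):
    c2, s2 = math.cos(theta)**2, math.sin(theta)**2
    Ts = 4.0 * eps1 * c2 / (eps1 - 1.0)
    Tp = 4.0 * eps1 * c2 / ((eps1 - 1.0) * ((1.0 + eps1) * s2 - 1.0))
    return Ts, Tp

def T2_fresnel(eps1, theta):
    n1 = math.sqrt(eps1)
    ci = math.cos(theta)
    ct = 1j * cmath.sqrt(n1**2 * math.sin(theta)**2 - 1.0)  # cos(theta_t), evanescent
    ts = 2.0 * n1 * ci / (n1 * ci + ct)
    tp = 2.0 * n1 * ci / (ci + n1 * ct)
    return abs(ts)**2, abs(tp)**2
```

For $\theta _{0}=42^o$ and $\epsilon_1 =2.25$ this gives $|T_{\perp }|^{2}\approx 3.98$ and $|T_{\parallel }|^{2}\approx 8.74$ (in units of $|A|^{2}$), the values entering the power normalization above.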
The equations satisfied by the electric and magnetic vectors in a non–magnetic medium are $$\begin{aligned} \nabla \times \nabla \times {\bf E}-k^{2}{\bf E} & = & 4\pi k^{2}{\bf P}, \label{eq:green1} \\ [+3mm] \nabla \times \nabla \times {\bf H}-k^{2}{\bf H} & = & -i4\pi k\nabla \times {\bf P}, \label{eq:green2}\end{aligned}$$ where ${\bf P}$ is the polarization vector. The solutions to Equations (\[eq:green1\]) and (\[eq:green2\]) are written in integral form as $$\begin{aligned} {\bf E}({\bf r}) & = & k^{2}\int d^{3}r^{\prime }~{\bf P}({\bf r}^{\prime }) \cdot\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime }), \label{eq:green3} \\ [+3mm] {\bf H}({\bf r}) & = & -ik\int d^{3}r^{\prime }~\nabla \times {\bf P}({\bf r} ^{\prime })\cdot \overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r} ^{\prime }), \label{eq:green4}\end{aligned}$$ In Equations (\[eq:green3\]) and (\[eq:green4\]) $\overset{ \leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })$ is the outgoing Green’s dyadic or field created at ${\bf r}$ by a point dipole at ${\bf r} ^{\prime }$. It satisfies the equation $$\nabla \times \nabla \times \overset{\leftrightarrow }{{\cal G}}({\bf r}, {\bf r}^{\prime })-k^{2}\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r} ^{\prime })=4\pi \delta ({\bf r}-{\bf r}^{\prime })\overset{\leftrightarrow } {{\cal I}}. \label{eq:green5}$$ Let us introduce the electric displacement vector ${\bf D} = {\bf E} + 4 \pi {\bf P}$. Then, $$\nabla \times \nabla \times {\bf E} = \nabla \times \nabla \times {\bf D} - 4 \pi \nabla \times \nabla \times {\bf P}. 
\label{eq:identity1}$$ Using the vectorial identity $\nabla \times \nabla \times {\bf D} = \nabla (\nabla \cdot {\bf D}) - \nabla ^2 {\bf D}$, and the fact that in the absence of free charges $\nabla \cdot {\bf D}=0$, it is easy to obtain $$\begin{aligned} \nabla \times \nabla \times {\bf E} & = & - \nabla ^2 {\bf D} - 4 \pi \left[ \nabla (\nabla \cdot {\bf P}) - \nabla ^2 {\bf P} \right] \\ [+3mm] & = & - \nabla ^2 {\bf E} - 4 \pi \nabla (\nabla \cdot {\bf P}), \label{eq:identity2}\end{aligned}$$ which straightforwardly transforms Equation (\[eq:green1\]) into: $$\nabla ^{2}{\bf E}+k^{2}{\bf E}=-4\pi \lbrack k^{2}{\bf P}+\nabla (\nabla \cdot {\bf P})], \label{eq:green6}$$ whose solution is $${\bf E}({\bf r})=\int d^{3}r^{\prime }~[k^{2}{\bf P}+\nabla (\nabla \cdot {\bf P})]({\bf r}^{\prime })~G({\bf r},{\bf r}^{\prime }) \, . \label{eq:electrico}$$ In a homogeneous infinite space the function $G$ of Equation (\[eq:electrico\]) is $G_{0}({\bf r},{\bf r}^{\prime })=\exp (ik|{\bf r}-{\bf r} ^{\prime }|)/|{\bf r}-{\bf r}^{\prime }|$, namely, a spherical wave, or scalar Green’s function, corresponding to radiation from a point source at ${\bf r}^{\prime }$. To determine $\overset{\leftrightarrow }{{\cal G}}$, we consider the case in which the radiation comes from a dipole of moment ${\bf p}$, situated at ${\bf r}_{0}$; the polarization vector ${\bf P}$ is expressed as $${\bf P}({\bf r})={\bf p}\delta ({\bf r}-{\bf r}_{0}). \label{eq:polarizacion}$$ Introducing Equation (\[eq:polarizacion\]) into Equation (\[eq:electrico\]) one obtains the well-known expression for the electric field radiated by a dipole, $${\bf E}({\bf r})=[k^{2}{\bf p}+\nabla ({\bf p}\cdot \nabla )]\frac{\exp (ik| {\bf r}-{\bf r}_{0}|)}{|{\bf r}-{\bf r}_{0}|}. \label{eq:dipole1}$$ On the other hand, if Equation (\[eq:polarizacion\]) is introduced into Equation (\[eq:green3\]) one obtains $${\bf E}({\bf r})=k^{2}{\bf p}\cdot \overset{\leftrightarrow }{{\cal G}}_{0}( {\bf r},{\bf r}_{0}). 
\label{eq:dipole2}$$ On comparing Equations (\[eq:dipole1\]) and (\[eq:dipole2\]), since both give the same value for ${\bf E}$, we get $$k^{2}{\bf p}\cdot \overset{\leftrightarrow }{{\cal G}} _{0}({\bf r},{\bf r}^{\prime })=[k^{2}{\bf p}+\nabla ({\bf p}\cdot \nabla )]G_{0}({\bf r},{\bf r}^{\prime }), \label{eq:dyadic1}$$ [*i.e*]{}., the tensor Green’s function in a homogeneous infinite space is $$\overset{\leftrightarrow }{{\cal G}}_{0}({\bf r},{\bf r}^{\prime })=\left( \overset{\leftrightarrow }{{\cal I}}+\frac{1}{k^{2}}\nabla \nabla \right) G_{0}({\bf r},{\bf r}^{\prime }). \label{eq:dyadic2}$$ A remark is in order here. When applying Equation (\[eq:dyadic2\]) in calculations one must take into account the singularity at ${\bf r}={\bf r} ^{\prime }$; this is accounted for by writing $\overset{\leftrightarrow }{ {\cal G}}_{0}$ as [@yaghjian80]: $$\overset{\leftrightarrow }{{\cal G}}_{0}({\bf r},{\bf r}^{\prime })={\cal P} \left[ {{{{{{{{\ \left( \overset{\leftrightarrow }{{\cal I}}+\frac{1}{k^{2}} \nabla \nabla \right) G_{0}({\bf r},{\bf r}^{\prime })}}}}}}}}\right] -\frac{ 1}{k^{2}}\delta ({\bf r}-{\bf r}^{\prime })\overset{\leftrightarrow }{{\bf L} }_{v}. \label{eq:dyadic3}$$ In Equation (\[eq:dyadic3\]) ${\cal P}$ represents the principal value and $\overset{\leftrightarrow }{{\bf L}}_{v}$ is a dyadic that describes the singularity and corresponds to an exclusion volume around ${\bf r}={\bf r} ^{\prime }$, on whose shape it depends [@yaghjian80]. The Coupled Dipole Method {#sec:CDM} ------------------------- Among the several methods of calculating multiple scattering between bodies of arbitrary shape ([*e.g*]{}., transition matrix, finite–difference time domain, integral procedures, discrete dipole approximation, [*etc*]{}.), we shall next address the [*coupled dipole method*]{} (Purcell and Pennypacker [@purcell73]). This procedure is especially suitable for multiple scattering between a sphere and a flat interface. 
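For later use in the coupled dipole method it is convenient to have $\overset{\leftrightarrow }{{\cal G}}_{0}$ of Equation (\[eq:dyadic2\]) in closed form. The sketch below (added; not part of the original text) states the standard closed form and verifies it against numerical second derivatives of $G_{0}$, staying away from ${\bf r}={\bf r}^{\prime }$ so that the exclusion-volume term of Equation (\[eq:dyadic3\]) does not enter:

```python
import numpy as np

# Closed form of the free-space dyadic Green function of Eq. (dyadic2),
#   G0_ij = (delta_ij + (1/k^2) d_i d_j) exp(ik d)/d,   d = |r - r'|,
# written with x = 1/(k d) and rhat = (r - r')/d:
#   G0 = g(d) [ (1 + i x - x^2) I - (1 + 3 i x - 3 x^2) rhat rhat ].
# Here the source is placed at the origin (r' = 0) for simplicity.

def g0(r, k):
    """Scalar Green function exp(ik|r|)/|r|, source at the origin."""
    d = np.linalg.norm(r)
    return np.exp(1j * k * d) / d

def green_dyadic(r, k):
    """Closed-form dyadic (I + grad grad / k^2) g0."""
    d = np.linalg.norm(r)
    rhat = r / d
    x = 1.0 / (k * d)
    A = 1.0 + 1j * x - x**2
    B = 1.0 + 3j * x - 3.0 * x**2
    return g0(r, k) * (A * np.eye(3) - B * np.outer(rhat, rhat))

def green_numeric(r, k, h=1e-4):
    """Same dyadic via central finite differences of g0 (off-singularity check)."""
    G = np.zeros((3, 3), dtype=complex)
    I = np.eye(3)
    for i in range(3):
        for j in range(3):
            d2 = (g0(r + h * (I[i] + I[j]), k) - g0(r + h * (I[i] - I[j]), k)
                  - g0(r - h * (I[i] - I[j]), k) + g0(r - h * (I[i] + I[j]), k)) / (4.0 * h**2)
            G[i, j] = I[i, j] * g0(r, k) + d2 / k**2
    return G
```

In the far zone ($kd\gg 1$) the bracket reduces to $\overset{\leftrightarrow }{{\cal I}}-\hat{r}\hat{r}$, i.e., the familiar transverse radiation field of a dipole.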
Let us return to the problem of determining the interaction of the incident wave with the substrate and the sphere. The scattered electromagnetic field is obtained from the contribution of all polarizable elements of the system under the action of the illuminating wave. The electric vector above the interface is given by the sum of the incident field ${\bf E}_{i}$ and that expressed by Equation (\[eq:green3\]) with the dyadic Green function $\overset{\leftrightarrow }{{\cal G}}$ being given by $$\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })=\overset{ \leftrightarrow }{{\cal G}}_{0}({\bf r},{\bf r}^{\prime })+\overset{ \leftrightarrow }{{\cal G}}_{s}({\bf r},{\bf r}^{\prime }). \label{eq:sumdyadic}$$ In Equation (\[eq:sumdyadic\]) $\overset{\leftrightarrow }{{\cal G}}_{0}$ is given by Equation (\[eq:dyadic2\]) and, as such, it corresponds to the field created by a dipole in a homogeneous infinite space. On the other hand, $\overset{\leftrightarrow }{{\cal G}}_{s}$ represents the field from the dipole after reflection at the interface. ![ Normalized force in the $Z$ direction acting on a glass sphere on a glass–vacuum interface. The angle of incidence $\protect\theta_ 0=42 ^o$ is larger than the critical angle $\protect\theta_ c=41.8 ^o$. $\protect \lambda=632.8$ $nm$. Thin lines: $S$ polarization, Thick lines: $P$ polarization. (a): $a=10$ $nm$, full line: dipole approximation, dashed line: CDM–A, dotted line: CDM–B. The inset shows the scattering geometry. (b): $a=100$ $nm$, full line: calculation with CDM–B, dashed line: static approximation. (From Ref. [@chaumet00a]). 
[]{data-label="fig:dielec"}](dielec.eps){width="\linewidth"} The polarization vector ${\bf P}$ is represented by the collection of $N$ dipole moments ${\bf p}_j$ corresponding to the $N$ polarizable elements of all materials included in the illuminated system, namely, $$\label{eq:sumpolarizacion} {\bf P}({\bf r})=\sum_{j=1}^{N}{\bf p}_{j}\delta({\bf r}-{\bf r}_{j}).$$ The relationship between the $k^{{\rm th}}$ dipole moment ${\bf p}_{k}$ and the exciting electric field is, as before, given by ${\bf p}_{k}=\alpha _{k} {\bf E}({\bf r}_{k})$, with $\alpha _{k}$ expressed by Equation (\[eq:alfa\]). Then, Equations (\[eq:electrico\]), (\[eq:sumdyadic\]) and (\[eq:sumpolarizacion\]) yield $${\bf E}({\bf r}_{j})=k^{2}\sum_{k=1}^{N}\alpha _{k}[ \overset{\leftrightarrow }{{\cal G}}_{0}({\bf r}_{j},{\bf r}_{k})+\overset{\leftrightarrow }{{\cal G}}_{s}({\bf r}_{j},{\bf r}_{k})]\cdot {\bf E}({\bf r}_{k}). \label{eq:electricdip}$$ The determination of $\overset{\leftrightarrow }{{\cal G}}_{s}$ either above or below the flat interface is discussed next (one can find more details in Ref. [@agarwal75a]). Let us summarize the derivation of its expression above the surface. The field ${\bf E}$ in the half–space $z>0$, from a dipole situated in this region, is the sum of that from the dipole in free space and the field ${\bf E}_{r}$ produced on reflection of the latter at the interface. Taking Equation (\[eq:dipole2\]) into account, this is therefore $${\bf E}({\bf r})=k^{2}{\bf p}\cdot \overset{\leftrightarrow }{{\cal G}}_{0}({\bf r},{\bf r}^{\prime })+{\bf E}_{r}({\bf r}),\text{ \ \ \ }z>0 \, . \label{eq:electricdipfin}$$ Both the spherical wave $G_{0}$ and ${\bf E}_{r}$ are expanded into plane waves. The former is, according to Weyl’s representation [@banios66ch; @nieto-vesperinas91], $$G_{0}({\bf r},{\bf r}^{\prime })=\frac{i}{2\pi }\int_{-\infty }^{\infty } \frac{d^{2}K}{k_z ({\bf K})}\exp [i({\bf K}\cdot ({\bf R}-{\bf R}^{\prime })+k_z |z-z^{\prime }|)] \, .
\label{eq:weyl}$$ On the other hand, ${\bf E}_{r}$ is expanded as an angular spectrum of plane waves [@nieto-vesperinas91; @mandel95] $${\bf E}_{r}({\bf r})=\int_{-\infty }^{\infty }d^{2}K~{\bf A}_{r}({\bf K})\exp [i({\bf K}\cdot {\bf R}+k_z z)]. \label{eq:angularspectrum}$$ Introducing Equation (\[eq:weyl\]) into Equation (\[eq:dyadic2\]) one obtains a plane wave expansion for $\overset{\leftrightarrow }{{\cal G}}_{0}$. This gives the plane wave components ${\bf A}_{h}({\bf K})$ of the first term of Equation (\[eq:electricdipfin\]). Then, the plane wave components ${\bf A}_{r}({\bf K})$ of the second term of Equation (\[eq:electricdipfin\]) are given by $${\bf A}_{r}({\bf K})=r({\bf K}){\bf A}_{h}({\bf K}). \label{eq:amplitude}$$ In Equation (\[eq:amplitude\]) $r({\bf K})$ is the Fresnel reflection coefficient corresponding to the polarization of ${\bf A}_{h}$. The result is therefore that $\overset{\leftrightarrow }{{\cal G}}$, Equation (\[eq:sumdyadic\]), is $$\overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })=\frac{1}{4\pi ^{2}}\int_{-\infty }^{\infty }d^{2}K~{\overset{\leftrightarrow }{{\bf S}}}^{-1}({\bf K})\cdot \overset{\leftrightarrow }{{\bf g}}({\bf K},z,z^{\prime })\cdot \overset{\leftrightarrow }{{\bf S}}({\bf K})\exp [i{\bf K}\cdot ({\bf R}-{\bf R}^{\prime })], \label{eq:CDMdyadic}$$ where [@agarwal75a; @agarwal75b; @keller93] $$\overset{\leftrightarrow }{{\bf S}}({\bf K})=\frac{1}{K} \left( \begin{array}{ccc} k_{x} & k_{y} & 0 \\ -k_{y} & k_{x} & 0 \\ 0 & 0 & K \end{array} \right) , \label{eq:Stensor}$$ and the dyadic $\overset{\leftrightarrow }{{\bf g}}$ has the elements [@greffet97] $$\begin{aligned} g_{11} & = & \frac{-ik_z^{(0)}}{2\epsilon _{0}k_0^{2}}\left[ \frac{\epsilon _{0}k_z^{(1)}-\epsilon _{1}k_z^{(0)}}{\epsilon _{0}k_z^{(1)}+\epsilon _{1}k_z^{(0)}}\exp [ik_z^{(0)}(z+z^{\prime })]+\exp (ik_z^{(0)}|z-z^{\prime }|)\right] , \label{eq:g11} \\ [+3mm] g_{22} & = & \frac{-i}{2k_z^{(0)}}\left[ \frac{k_z^{(0)}-k_z^{(1)}}{k_z^{(0)}+k_z^{(1)}}\exp [ik_z^{(0)}(z+z^{\prime })]+\exp (ik_z^{(0)}|z-z^{\prime }|)\right] , \label{eq:g22} \\ [+3mm] g_{33} & = & \frac{iK^{2}}{2\epsilon _{0}k_z^{(0)}k_0^{2}}\left[ \frac{\epsilon _{0}k_z^{(1)}-\epsilon _{1}k_z^{(0)}}{\epsilon _{0}k_z^{(1)}+\epsilon _{1}k_z^{(0)}}\exp [ik_z^{(0)}(z+z^{\prime })]-\exp (ik_z^{(0)}|z-z^{\prime }|)\right] \nonumber \\ [+3mm] & + & \frac{1}{\epsilon _{0}k_0^{2}}\delta (z-z^{\prime }), \label{eq:g33} \\ [+3mm] g_{12} & = & 0, \label{eq:g12} \\ [+3mm] g_{13} & = & \frac{-iK}{2\epsilon _{0}k_0^{2}}\left[ \frac{\epsilon _{0}k_z^{(1)}-\epsilon _{1}k_z^{(0)}}{\epsilon _{0}k_z^{(1)}+\epsilon _{1}k_z^{(0)}}\exp [ik_z^{(0)}(z+z^{\prime })]-\exp (ik_z^{(0)}|z-z^{\prime }|)\right] , \label{eq:g13} \\ [+3mm] g_{31} & = & \frac{iK}{2\epsilon _{0}k_0^{2}}\left[ \frac{\epsilon _{0}k_z^{(1)}-\epsilon _{1}k_z^{(0)}}{\epsilon _{0}k_z^{(1)}+\epsilon _{1}k_z^{(0)}}\exp [ik_z^{(0)}(z+z^{\prime })]+\exp (ik_z^{(0)}|z-z^{\prime }|)\right] . \label{eq:g31}\end{aligned}$$ We have used $k_z^{(j)} = iq_j$, with $q_j = (K^2- \epsilon_j k_0^2)^{1/2}$, $j=0,1$, and $k_0=\omega/c$. To determine the force acting on the particle, we also need the magnetic field. This is found from the relationship ${\bf B}({\bf r})=-(i/k)\nabla \times {\bf E}({\bf r})$. Then the time–averaged force obtained from Maxwell’s stress tensor $\overset{\leftrightarrow }{{\bf T}}$ [@stratton41; @jackson75] is $$\langle {\bf F}\rangle =\int_{S}d^{2}r~\left\langle \overset{\leftrightarrow }{{\bf T}}({\bf r})\right\rangle \cdot {\bf n}. \label{eq:totalforce}$$ Equation (\[eq:totalforce\]) represents the flow of the time–average of Maxwell’s stress tensor $\langle \overset{\leftrightarrow }{{\bf T}}\rangle$ across a surface $S$ enclosing the particle, ${\bf n}$ being the local outward normal.
The elements $T_{\alpha \beta }$ are [@jackson75] $$\left\langle T_{\alpha \beta }({\bf r})\right\rangle =\frac{1}{8\pi }\left[ E_{\alpha }E_{\beta }^{\ast }+B_{\alpha }B_{\beta }^{\ast }-\frac{1}{2}({\bf E}\cdot {\bf E}^{\ast }+{\bf B}\cdot {\bf B}^{\ast })\delta _{\alpha \beta }\right] ,\quad (\alpha ,\beta =1,2,3). \label{eq:maxwellstresstensor}$$ For dipolar particles, one can use instead of Maxwell’s stress tensor the expression given by Equation (\[eq:chaumetfin\]) directly. In fact, for dielectric spheres of radii smaller than $5\times 10^{-2}\lambda $ there is no appreciable difference between using Equation (\[eq:chaumetfin\]) or Equation (\[eq:maxwellstresstensor\]), except at distances from the flat substrate smaller than $10^{-3}\lambda $. Figure \[fig:dielec\] shows the normalized $Z$–force for two glass particles ($\epsilon =2.25$) at $\lambda =632.8$ $nm$, one with $a=10$ $nm$ (Figure \[fig:dielec\](a)) and the other with $a=100$ $nm$ (Figure \[fig:dielec\](b)); the flat interface is illuminated from the dielectric side at $\theta_0 =42^{o}$ (the critical angle is $\theta_c =41.8^{o}$). Two calculation procedures are shown: a multiple scattering evaluation of the field via Equations (\[eq:electricdip\])–(\[eq:g31\]), and then either the use of Equation (\[eq:chaumetfin\]), integrated over all induced dipoles (CDM–B), or of Equation (\[eq:totalforce\]) (CDM–A). The normalization of the forces has been carried out by dividing them by $\exp (-2qz)$. Thus, as seen in these curves, the force tends, as $d$ increases, to the constant value given by Equation (\[eq:evanescent4\]): $-(|T|^{2}/2)q\Re e \{ \alpha \}$. The incident power is $1.19$ $mW$ distributed over a surface of $10$ $\mu m^{2}$; the force on a sphere of $a=10$ $nm$ is then $2.7991\times 10^{-10}$ $pN$ [@chaumet00a].
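A direct transcription of Equation (\[eq:maxwellstresstensor\]) is straightforward. The sketch below (Python with NumPy; Gaussian units and complex field amplitudes, implementing the equation exactly as written above) also shows the standard plane-wave check, in which $\langle T_{zz}\rangle$ reduces to minus the field energy density:

```python
import numpy as np

def stress_tensor_avg(E, B):
    """Time-averaged Maxwell stress tensor of Eq. (eq:maxwellstresstensor):
    <T_ab> = (1/8pi)[E_a E_b* + B_a B_b* - (1/2)(E.E* + B.B*) delta_ab]."""
    E = np.asarray(E, complex)
    B = np.asarray(B, complex)
    T = (np.outer(E, E.conj()) + np.outer(B, B.conj())
         - 0.5 * (np.vdot(E, E) + np.vdot(B, B)).real * np.eye(3)) / (8.0 * np.pi)
    return T.real  # the time-averaged tensor is real and symmetric

# Plane wave along +z: E along x, B along y, unit amplitude.
T = stress_tensor_avg([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

Contracting $\langle \overset{\leftrightarrow }{{\bf T}}\rangle$ with the local normal and integrating over a surface enclosing the particle, as in Equation (\[eq:totalforce\]), then gives the time-averaged force.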
We see, therefore, the effect on the vertical force of the multiple interaction of the scattered wave with the substrate: as the particle gets closer to the flat interface at which the evanescent wave is created, the magnitude of the attractive force increases beyond the value predicted by neglecting this interaction. As the distance to the surface grows, the force tends to its value given by Equation (\[eq:evanescent4\]), in which no multiple scattering with the substrate takes place. Also, due to the standing wave patterns that appear in the field intensity distribution between the sphere and the substrate, the magnitude of this force oscillates as $d$ varies. This is appreciable for larger particles (Figure \[fig:dielec\](b)), but not for very small particles (Figure \[fig:dielec\](a)), whose scattering cross section is not large enough to produce noticeable interferences. On the other hand, the horizontal force on the particle is of the form given by Equation (\[eq:evanescent3\]) and always has the characteristics of a scattering force. ![ (a): From top to bottom: the first three curves represent the polarizability of a silver sphere with radius $a=10$ $nm$ versus the wavelength. The fourth curve is the force on this particle in free space. Plain line: Mie calculation, dashed line: polarizability of Eq. (\[eq:alfa\]), symbol $+$: Dungey and Bohren’s polarizability [@dungey91]. (b): Force along the $Z$ direction on a silver sphere with $a=100$ $nm$ versus distance $d$ with $\protect\theta_0=50^o$ for the following wavelengths: Plain line: $\protect \lambda=255$ $nm$, dashed line: $\protect\lambda=300$ $nm$, and dotted line: $\protect\lambda=340$ $nm$. Thin lines: $S$ polarization, thick lines: $P$ polarization. (From Ref. [@chaumet00b]).
[]{data-label="fig:metallic"}](metallic.eps){width="\linewidth"} As regards a metallic particle, we notice that $\Re e \{ \alpha \}$ may have negative values near plasmon resonances (Figure \[fig:metallic\](a), where we have plotted two models: that of Draine [@draine88] and that of Ref. [@dungey91] (see also [@chaumet00c])), and thus the gradient force, or force along $OZ$, may now be repulsive, namely, positive (Figure \[fig:metallic\](b)) [@chaumet00c]. We also observe that this force is larger at the plasmon polariton resonance excitation ($\lambda =350$ $nm$). We shall later return to this fact (Section \[sec:resonances\]). We next illustrate how, no matter how small the particle is, multiple scattering becomes noticeable as the particle approaches the surface. Corrugated Surfaces: Integral Equations for Light Scattering from Arbitrary Bodies {#sec:corrugated} ---------------------------------------------------------------------------------- At corrugated interfaces, the phenomenon of TIR is weakened, and the contribution of propagating components to the transmitted field becomes important, either from conversion of evanescent waves into radiating waves, or due to the primary appearance of many propagating waves from scattering at the surface defects. The size of the asperities is important in this respect. However, TIR effects are still strong in slightly rough interfaces, namely, those whose defect size is much smaller than the wavelength. In that case the contribution of evanescent components is dominant for small particles ([*i.e*]{}., of radius no larger than $0.1$ wavelengths). On the other hand, the use of such particles as probes in near–field microscopy may allow high resolution of surface details as they scan above the surface. We shall next study the resulting force signal effects due to corrugation, as a model of photonic force microscopy under TIR conditions.
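The sign change of $\Re e\{\alpha\}$ near the plasmon resonance can be illustrated with the quasi-static (Clausius–Mossotti) polarizability of a small sphere, supplemented with the radiative-reaction correction of Draine [@draine88]. A hedged sketch in Python (Gaussian units; the illustrative permittivities below are round numbers, not measured silver data, and Equation (\[eq:alfa\]) of the text may differ in detail from this small-radius limit):

```python
import numpy as np

def alpha_static(eps, a):
    """Quasi-static polarizability of a small sphere of radius a and
    permittivity eps in vacuum: alpha0 = a^3 (eps - 1)/(eps + 2)."""
    return a**3 * (eps - 1.0) / (eps + 2.0)

def alpha_draine(eps, a, k):
    """alpha0 with Draine's radiative-reaction correction:
    alpha = alpha0 / (1 - (2/3) i k^3 alpha0)."""
    a0 = alpha_static(eps, a)
    return a0 / (1.0 - (2.0 / 3.0) * 1j * k**3 * a0)

# Around the dipolar (Froehlich) condition eps -> -2 the real part of alpha
# changes sign, so the gradient force can turn from attractive to repulsive:
# Re{alpha} > 0 for eps = -2.5 + 0.1i, but Re{alpha} < 0 for eps = -1.5 + 0.1i.
```

For a dielectric sphere ($\epsilon>1$) the static polarizability stays positive, consistent with the attractive gradient force of the previous section.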
When the surface in front of the sphere is corrugated, finding the Green’s function components is not as straightforward as in the previous section. We shall instead employ an integral method, which we summarize next. Let an electromagnetic field, with electric and magnetic vectors ${\bf E}^{(inc)}({\bf r})$ and ${\bf H}^{(inc)}({\bf r})$, respectively, be incident on a medium of permittivity $\epsilon $ occupying a volume $V$, constituted by two scattering volumes $V_{1}$ and $V_{2}$, limited by surfaces $S_{1}$ and $S_{2}$, respectively. Let ${\bf r}^{<}$ denote the position vector of a generic point inside the volume $V_{j}$, and ${\bf r}^{>}$ that of a generic point in the volume $\hat{V}$, which is outside all volumes $V_{j}$. The electric and magnetic vectors of a monochromatic field satisfy, respectively, the wave equations, [*i.e*]{}., Equations (\[eq:green1\]) and (\[eq:green2\]). The vector form of Green’s theorem for two vectors ${\bf P}$ and ${\bf Q}$, well behaved in a volume $V$ surrounded by a surface $S$, reads [@morsefeshbach53] $$\begin{aligned} \int_{V}d^{3}r~({\bf Q}\cdot \nabla \times \nabla \times {\bf P}-{\bf P}\cdot \nabla \times \nabla \times {\bf Q}) & = & \nonumber \\ \int_{S}d^{2}r~({\bf P}\times \nabla \times {\bf Q}-{\bf Q}\times \nabla \times {\bf P})\cdot {\bf n}, \label{eq:greentheorem}\end{aligned}$$ with ${\bf n}$ being the unit outward normal. Let us now apply Equation (\[eq:greentheorem\]) to the vectors ${\bf P}= \overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })\cdot {\bf C}$ (${\bf C}$ being a constant vector) and ${\bf Q}={\bf E}({\bf r})$.
Taking Equations (\[eq:green1\]) and (\[eq:green5\]) into account, we obtain $$\int_{V}d^{3}r^{\prime }~{\bf E}({\bf r}^{\prime })\delta ({\bf r}-{\bf r} ^{\prime })=k^{2}\int_{V}d^{3}r^{\prime }~{\bf P}({\bf r}^{\prime })\cdot \overset{\leftrightarrow }{{\cal G}}({\bf r},{\bf r}^{\prime })-\frac{1}{ 4\pi }{\bf S}_{e}({\bf r}), \label{eq:ET1}$$ where ${\bf S}_{e}$ is $${\bf S}_{e}({\bf r})=\nabla \times \nabla \times \int_{S}d^{2}r^{\prime }\left( {\bf E}({\bf r}^{\prime })\frac{\partial G({\bf r},{\bf r}^{\prime }) }{\partial {\bf n}}-G({\bf r},{\bf r}^{\prime })\frac{\partial {\bf E}({\bf r }^{\prime })}{\partial {\bf n}}\right) . \label{eq:ET2}$$ Equation (\[eq:ET2\]) adopts different forms depending on whether the points ${\bf r}$ and ${\bf r}^{\prime }$ are considered in $V$ or in $\hat{V} $. By means of straightforward calculations one obtains the following: - If ${\bf r}$ and ${\bf r}^{\prime }$ belong to any of the volumes $V_{j}$, $(j=1,2)$, namely, $V$ becomes either of the volumes $V_{j}$: $${\bf E}({\bf r}^{<})=k^{2}\int_{V_{j}}d^{3}r^{\prime }~{\bf P}({\bf r} ^{\prime })\cdot \overset{\leftrightarrow }{{\cal G}}({\bf r}^{<},{\bf r} ^{\prime })-\frac{1}{4\pi }{\bf S}_{j}^{(in)}({\bf r}^{<}), \label{eq:ET3}$$ where $$\begin{aligned} {\bf S}_{j}^{(in)}({\bf r}^{<}) & = & \nabla \times \nabla \times \nonumber \\ [+3mm] & & \int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}_{in}({\bf r}^{\prime })\frac{ \partial G({\bf r}^{<},{\bf r}^{\prime })}{\partial {\bf n}}-G({\bf r}^{<}, {\bf r}^{\prime })\frac{\partial {\bf E}_{in}({\bf r}^{\prime })}{\partial {\bf n}}\right) \, . \, \, \, \, \, \, \, \, \, \, \label{eq:ET4}\end{aligned}$$ In Equation (\[eq:ET4\]) ${\bf E}_{in}$ represents the limiting value of the electric vector on the surface $S_{j}$ taken from inside the volume $V_{j}$. Equation (\[eq:ET3\]) shows that the field inside each of the scattering volumes $V_{j}$ does not depend on the sources generated in the other volumes. 
- If ${\bf r}$ belongs to any of the volumes $V_{j}$, namely, $V$ becomes $V_{j}$, and ${\bf r}^{\prime }$ belongs to $\hat{V}$: $$0={\bf S}_{ext}({\bf r}^{<}). \label{eq:ET5}$$ In Equation (\[eq:ET5\]) ${\bf S}_{ext}$ is $${\bf S}_{ext}({\bf r}^{<})=\sum_{j}{\bf S}_{j}^{(out)}({\bf r}^{<})-{\bf S} _{\infty }({\bf r}^{<}), \label{eq:ET6}$$ where $$\begin{aligned} {\bf S}_{j}^{(out)}({\bf r}^{<}) & = & \nabla \times \nabla \times \nonumber \\ [+3mm] & & \int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}({\bf r}^{\prime })\frac{\partial G({\bf r}^{<},{\bf r}^{\prime })}{\partial {\bf n}}-G({\bf r}^{<},{\bf r} ^{\prime })\frac{\partial {\bf E}({\bf r}^{\prime })}{\partial {\bf n}} \right) \, . \, \, \, \, \, \, \label{eq:ET7}\end{aligned}$$ In Equation (\[eq:ET7\]) the surface values of the electric vector are taken from the volume $\hat{V}$. The normal ${\bf n}$ now points towards the interior of each of the volumes $V_{j}$. Also, ${\bf S}_{\infty }$ has the same meaning as Equation (\[eq:ET7\]), the surface of integration now being a large sphere whose radius will eventually tend to infinity. It is not difficult to see that $-{\bf S} _{\infty }$ in Equation (\[eq:ET6\]) is $4\pi $ times the incident field ${\bf E}^{(inc)}({\bf r}^{<})$ ([*cf*]{}. Refs. [@nieto-vesperinas91] and [@pattanayak76a; @pattanayak76b]). Therefore Equation (\[eq:ET5\]) finally becomes $$0={\bf E}^{(inc)}({\bf r}^{<})+\frac{1}{4\pi }\sum_{j}{\bf S}_{j}^{(out)}({\bf r}^{<}). \label{eq:ET8}$$ Note that when Equation (\[eq:ET8\]) is used as a non–local boundary condition, the unknown [*sources*]{} to be determined, given by the limiting values of ${\bf E}({\bf r}^{\prime })$ and $\partial {\bf E}({\bf r}^{\prime })/\partial {\bf n}$ on each of the surfaces $S_{j}$, ([*cf*]{}. Equation (\[eq:ET7\])), appear coupled to those corresponding sources on the other surface $S_{k}$, $k\neq j$. 
Following similar arguments, one obtains: - For ${\bf r}$ belonging to $\hat{V}$ and ${\bf r}^{\prime }$ belonging to either volume $V_{j}$, $(j=1,2)$, namely, $V$ becoming $V_{j}$ $$0=k^2 \int_{V_{j}}d^{3}r^{\prime }~{\bf P}({\bf r}^{\prime })\cdot \overset{\leftrightarrow }{{\cal G}}({\bf r}^{>},{\bf r}^{\prime })- \frac{1}{4\pi }{\bf S}_{j}^{(in)}({\bf r}^{>}), \label{eq:ET9}$$ with ${\bf S}_{j}^{(in)}$ given by Equation (\[eq:ET4\]), this time evaluated at ${\bf r}^{>}$. - For both ${\bf r}$ and ${\bf r}^{\prime }$ belonging to $\hat{V}$ $${\bf E}({\bf r}^{>})={\bf E}^{(inc)}({\bf r}^{>})+\frac{1}{4\pi }\sum_{j}{\bf S}_{j}^{(out)}({\bf r}^{>}). \label{eq:ET10}$$ Hence, the exterior field is the sum of the fields emitted from each scattering surface $S_{j}$ $(j=1,2)$, with sources resulting from the coupling involved in Equation (\[eq:ET9\]). One important case corresponds to a penetrable, optically homogeneous, isotropic, non–magnetic and spatially nondispersive medium (this applies to a real metal or a pure dielectric).
In this case, Equations (\[eq:ET3\]) and (\[eq:ET9\]) become, respectively, $$\begin{aligned} {\bf E}({\bf r}^{<}) &=&-\frac{1}{4\pi k_{0}^{2}\epsilon }\nabla \times \nabla \times \nonumber \\ &&\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}_{in}({\bf r}^{\prime })\frac{ \partial G^{(in)}({\bf r}^{<},{\bf r}^{\prime })}{\partial {\bf n}}-G^{(in)}( {\bf r}^{<},{\bf r}^{\prime })\frac{\partial {\bf E}_{in}({\bf r}^{\prime }) }{\partial {\bf n}}\right) , \label{eq:ET11}\end{aligned}$$ $$\begin{aligned} 0 &=&{\bf E}^{(inc)}({\bf r}^{<})+\frac{1}{4\pi k_{0}^{2}}\nabla \times \nabla \times \nonumber \label{eq:ET12} \\ &&\sum_{j}\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}({\bf r}^{\prime })\frac{ \partial G({\bf r}^{<},{\bf r}^{\prime })}{\partial {\bf n}}-G({\bf r}^{<}, {\bf r}^{\prime })\frac{\partial {\bf E}({\bf r}^{\prime })}{\partial {\bf n} }\right) ,\end{aligned}$$ whereas Equations (\[eq:ET5\]) and (\[eq:ET10\]) yield $$\begin{aligned} 0 &=&\frac{1}{4\pi k_{0}^{2}}\nabla \times \nabla \times \nonumber \label{eq:ET13} \\ &&\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}_{in}({\bf r}^{\prime })\frac{ \partial G^{(in)}({\bf r}^{>},{\bf r}^{\prime })}{\partial {\bf n}}-G^{(in)}( {\bf r}^{>},{\bf r}^{\prime })\frac{\partial {\bf E}_{in}({\bf r}^{\prime }) }{\partial {\bf n}}\right) ,\end{aligned}$$ $$\begin{aligned} {\bf E}({\bf r}^{>}) &=&{\bf E}^{(inc)}({\bf r}^{>})+\frac{1}{4\pi k_{0}^{2}\epsilon }\nabla \times \nabla \times \nonumber \label{eq:ET14} \\ &&\sum_{j}\int_{S_{j}}d^{2}r^{\prime }\left( {\bf E}({\bf r}^{\prime })\frac{ \partial G({\bf r}^{>},{\bf r}^{\prime })}{\partial {\bf n}}-G({\bf r}^{>}, {\bf r}^{\prime })\frac{\partial {\bf E}({\bf r}^{\prime })}{\partial {\bf n} }\right) .\end{aligned}$$ In Equations (\[eq:ET11\]) and (\[eq:ET13\]) $``in"$ means that the limiting values on the surface are taken from inside the volume $V_{j}$; note that this implies for both $G^{(in)}$ and ${\bf E}_{in}$ that $k=k_{0} \sqrt{\epsilon }$. 
The continuity conditions $${\bf n}\times \lbrack {\bf E}_{in}({\bf r}^{<})-{\bf E}({\bf r}^{>})]=0,\,\,\,\,\,\,{\bf n}\times \lbrack {\bf H}_{in}({\bf r}^{<})-{\bf H}({\bf r}^{>})]=0 \, , \label{eq:ET15}$$ and the use of Maxwell’s equations lead to (cf. Ref. [@jackson75], Section I.5, or Ref. [@bornwolf99], Section 1.1): $$\begin{aligned} \left. E _{in} ({\bf r}) \right| _{{\bf r} \in S_j^{(-)}} & = & \left. E ({\bf r}) \right| _{{\bf r} \in S_j^{(+)}} \, , \label{eq:continuity1} \\ [+5mm] \left. \frac {\partial E _{in}({\bf r})} {\partial {\bf n}} \right| _{{\bf r} \in S_j^{(-)}} & = & \left. \frac {\partial E ({\bf r})} {\partial {\bf n}} \right| _{{\bf r} \in S_j^{(+)}} \, , \label{eq:continuity2}\end{aligned}$$ where $S_j^{(+)}$ and $S_j^{(-)}$ denote the surface profile when approached from outside or inside the volume $V_j$, respectively. Equations (\[eq:continuity1\]) and (\[eq:continuity2\]) make it possible to find both ${\bf E}$ and $\partial {\bf E}/\partial {\bf n}$ from either the pair of Equations (\[eq:ET13\]) and (\[eq:ET14\]) or, equivalently, from the pair of Equations (\[eq:ET11\]) and (\[eq:ET12\]), as both ${\bf r}^{>}$ and ${\bf r}^{<}$ tend to a point in $S_{j}$. Then the scattered field outside the medium is given by the second term of Equation (\[eq:ET14\]). In the next section, we apply this theory to finding the near–field distribution of light scattered from a small particle in front of a corrugated dielectric surface when illumination is done from the dielectric half–space at angles of incidence larger than the critical angle. The non–local boundary conditions that we shall use are Equations (\[eq:ET13\]) and (\[eq:ET14\]). Photonic Force Microscopy of Surfaces with Defects {#sec:PFM} ================================================== The [*Photonic Force Microscope*]{} (PFM) is a technique in which one uses an optically trapped probe particle to image soft surfaces.
The PFM [@ghislain93; @florin96; @wada00] was conceived as a scanning probe device to measure ultrasmall forces, in the range from a few to several hundredths $pN/nm$, with laser powers of some $mW$, between colloidal particles [@crocker94], or in soft matter components such as cell membranes [@stout97] and protein or other macromolecule bonds [@smith96]. In such a system, a dielectric particle of a few hundred nanometers, held in an optical tweezer [@ashkin86; @clapp99; @sugiura93; @dogariu00], scans the object surface. The spring constant of the laser trap is three or four orders of magnitude smaller than that of AFM cantilevers, and the probe position can be measured with a resolution of a few nanometers on time scales of some microseconds [@florin96]. As in AFM, surface topography imaging can be realized with a PFM by transducing the optical force induced by the near field on the probe [@horber01bookproc]. As in near–field scanning optical microscopy (NSOM[^3]) [@pohl93], the resolution is given by the size of the particle and its proximity to the surface. It is well known, however [@nieto-vesperinas91; @greffet97], that multiple scattering effects and artifacts often hinder NSOM images, so that they do not bear resemblance to the actual topography. This has constituted one of the leading basic problems in NSOM [@hecht96]. Numerical simulations [@yo01b; @yo01c; @yo99b; @yo00; @yo01bookproc] based on the theory of Section \[sec:corrugated\] show that detection of the optical force on the particle yields topographic images, and thus they provide a method of prediction and interpretation for monitoring the force signal variation with the topography, particle position and illumination conditions. This underlies the operation of the PFM. An important feature is the signal enhancement effects arising from the excitation of [*Mie resonances*]{} of the particle, which we shall discuss next.
This makes it possible to decrease the probe size down to the nanometric scale, thus increasing the resolution of both force magnitudes and spatial details. Nanoparticle Resonances {#sec:resonances} ----------------------- Electromagnetic eigenmodes of small particles are of importance in several areas of research. On the one hand, experiments on the linewidth of surface plasmons in metallic particles [@klar98] and on the evolution of their near fields, both in isolated particles and in arrays [@krenn99], seek a basic understanding and possible applications of their optical properties. Mie resonances of particles are often called [*morphology–dependent resonances*]{} ([*MDR*]{}). They depend on the particle shape, permittivity, and the [*size parameter*]{}: $x=2\pi a/\lambda $. In dielectric particles, they are known as [*whispering–gallery modes*]{} ([*WGM*]{}) [@owen81; @barber82; @benincasa87; @hill88; @barber88; @barber90]. On the other hand, in metallic particles, they become [*surface plasmon resonances*]{} ([*SPR*]{}), arising from electron plasma oscillations [@raether88]. All these resonances are associated with surface waves which decay exponentially away from the particle boundary. Morphology–dependent resonances in dielectric particles are interpreted as waves propagating around the object, confined by total internal reflection, returning in phase to the starting point. A [*Quality factor*]{} is also defined as $Q=2\pi $ (Stored energy) $/$ (Energy lost per cycle) $=\omega _{0}/\delta \omega $, where $\omega _{0}$ is the resonance frequency and $\delta \omega $ the resonance full width. The first theoretical studies of [*MDR*]{} were performed by Gustav Mie, in his well–known scattering theory for spheres. The scattered field, both outside and inside the particle, is decomposed into a sum of partial waves. Each partial wave is weighted by a coefficient whose poles explain the existence of peaks in the scattering cross section.
These poles correspond to complex frequencies, but true resonances ([*i.e*]{}., the real values of frequency at which the coefficient peaks occur) have a size parameter value close to the real part of the complex poles. The imaginary part of the complex frequency accounts for the width of the resonance peak. [*MDR*]{}’s are classified by three integer numbers: one related to the partial wave ([*order number*]{}), another one which accounts for the several poles that can be present in the same coefficient ([*mode number*]{}), and a third one accounting for the degeneracy of a resonance ([*azimuthal mode number*]{}). In the first experimental check at optical frequencies, the variation of the radiation pressure (due to [*MDR*]{}) on highly transparent, low–vapor–pressure silicone oil drops (index $1.4-1.53$) was measured by Ashkin [@ashkin77]. The drops were levitated by optical techniques and the incident beam was focused at either the edge or the axis of the particles, showing the creeping nature of the surface waves. It is important to note, as regards resonances, the enhanced directional scattering effects such as the [*Glory*]{} [@bryant66; @fahlen68; @khare77]. The Glory theory accounts for the backscattering intensity enhancements found in water droplets. These enhancements are associated with rays grazing the surface of the droplet, involving hundreds of circumvolutions (surface effects). Axial rays (geometrical effects) also contribute. They have been observed for large particle sizes ($x>10^{2}$), and no Glory effects have been found for sizes in the range $x\sim 1$. These backscattering intensity enhancements cannot be associated with a unique partial wave, but with a superposition of several partial waves.
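The partial-wave coefficients mentioned above can be evaluated directly. A minimal sketch (Python with SciPy's spherical Bessel functions; Bohren–Huffman conventions, real refractive index $m$ for simplicity — not necessarily the notation of the original references) computes $a_{n}$, $b_{n}$ and the scattering efficiency, whose peaks as functions of the size parameter $x$ are the MDR:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def mie_coefficients(m, x, n):
    """Partial-wave (Mie) coefficients a_n, b_n of a homogeneous sphere with
    real refractive index m and size parameter x = 2*pi*a/lambda
    (Riccati-Bessel functions psi_n = rho j_n, xi_n = rho h_n^(1))."""
    def psi(rho):
        return rho * spherical_jn(n, rho)
    def dpsi(rho):
        return spherical_jn(n, rho) + rho * spherical_jn(n, rho, derivative=True)
    def xi(rho):
        return rho * (spherical_jn(n, rho) + 1j * spherical_yn(n, rho))
    def dxi(rho):
        return (spherical_jn(n, rho) + 1j * spherical_yn(n, rho)
                + rho * (spherical_jn(n, rho, derivative=True)
                         + 1j * spherical_yn(n, rho, derivative=True)))
    a = ((m * psi(m * x) * dpsi(x) - psi(x) * dpsi(m * x))
         / (m * psi(m * x) * dxi(x) - xi(x) * dpsi(m * x)))
    b = ((psi(m * x) * dpsi(x) - m * psi(x) * dpsi(m * x))
         / (psi(m * x) * dxi(x) - m * xi(x) * dpsi(m * x)))
    return a, b

def q_scattering(m, x, nmax=None):
    """Scattering efficiency Q_sca = (2/x^2) sum_n (2n+1)(|a_n|^2 + |b_n|^2)."""
    if nmax is None:
        nmax = int(x + 4.0 * x ** (1.0 / 3.0) + 2.0)  # Wiscombe-type cutoff
    q = 0.0
    for n in range(1, nmax + 1):
        a, b = mie_coefficients(m, x, n)
        q += (2 * n + 1) * (abs(a) ** 2 + abs(b) ** 2)
    return 2.0 / x ** 2 * q
```

In the Rayleigh limit $x\ll 1$ this reduces to $Q_{sca}\simeq (8/3)x^{4}|(m^{2}-1)/(m^{2}+2)|^{2}$, which provides a quick consistency check of the implementation.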
Distributions of Forces on Dielectric Particles Over Corrugated Surfaces Illuminated Under TIR {#sec:forcedielec} ---------------------------------------------------------------------------------------------- We now model a rough interface separating a dielectric of permittivity $\epsilon _{1}=2.3104$, similar to that of glass, from air. We have addressed (Figure \[fig:ol1\], left) the profile consisting of two protrusions described by $z=h[\exp (-(x-X_{0})^{2}/\sigma ^{2})+\exp (-(x+X_{0})^{2}/\sigma ^{2})]$ on a plane surface $z=0$. (It should be noted that in actual experiments, the particle is immersed in water, which changes the particle’s relative refractive index weakly. But the phenomena shown here will remain, with the interesting features now occurring at slightly different wavelengths.) Illumination, linearly polarized, is done from the dielectric side under TIR (critical angle $\theta _{c}=41.14^{o}$) at $\theta _{0}=60^{o}$ with a Gaussian beam of half–width at half–maximum $W=4000$ $nm$ at wavelength $\lambda $ (in air). For the sake of computing time and memory, the calculation is done in two dimensions (2D). This retains the main physical features of the full 3D configuration, as far as multiple interaction of the field with the surface and the probe is concerned [@lester99]. The particle is then a cylinder of radius $a$, permittivity $\epsilon _{2}$, and axis $OY$, whose center moves at constant height $z=d+a$. Maxwell’s stress tensor is used to calculate the force on the particle resulting from the scattered near–field distribution created by multiple interaction of light between the surface and the particle. Since the configuration is 2D, the incident power and the force are expressed in $mW/nm$ and in $pN/nm$, respectively, namely, as power and force magnitudes per unit length (in $nm$) in the transversal direction, [*i.e*]{}., that of the cylinder axis. 
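The critical angles quoted throughout ($41.8^{o}$ for $\epsilon =2.25$ and $41.14^{o}$ for $\epsilon _{1}=2.3104$) and the decay constant $q$ used above in the force normalization follow from elementary expressions. A small Python check (the function names are ours):

```python
import numpy as np

def critical_angle_deg(eps):
    """TIR critical angle (degrees) for light going from a medium of
    permittivity eps into vacuum: theta_c = arcsin(1/sqrt(eps))."""
    return np.degrees(np.arcsin(1.0 / np.sqrt(eps)))

def evanescent_decay(eps, theta0_deg, wavelength):
    """Decay constant q = k0 sqrt(eps sin^2(theta0) - 1) of the evanescent
    wave transmitted into vacuum above the critical angle (cf. the q_j
    defined with the plane-wave expansion of Section sec:CDM)."""
    k0 = 2.0 * np.pi / wavelength
    s = np.sqrt(eps) * np.sin(np.radians(theta0_deg))
    if s <= 1.0:
        raise ValueError("incidence below the critical angle: no evanescent wave")
    return k0 * np.sqrt(s ** 2 - 1.0)
```

For the present configuration ($\epsilon _{1}=2.3104$, $\theta _{0}=60^{o}$) the transmitted field is indeed evanescent, with a decay length $1/q$ of a fraction of the wavelength.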
We shall further discuss how these magnitudes are consistent with three–dimensional (3D) experiments. ![ Left figure: Scattering geometry. Insets: Force curves on a silicon cylinder with $a=60$ $nm$ scanned at $d=132.6$ $nm$. (a) Horizontal force. (b) Vertical force. Solid line: $\protect\lambda=638$ $nm$ (on resonance). Broken line: $\protect\lambda=538$ $nm$ (off resonance). Thin solid line in (b): $|H/H_ o|^2$ at $z=d+a$ in absence of particle. Peak value: $|H/H_ o|^2=0.07$. Bottom figures: Spatial distribution $|H/H_ o|^2$ in this configuration. The cylinder center is placed at: $(0, 192.6)$ $nm$ (Fig. \[fig:ol1\](a)), and at: $(191.4, 192.6)$ $nm$ (Fig. \[fig:ol1\](b)). The wavelength ($\protect\lambda=638$ $nm$) excites the $(n,l)$ Mie resonance. (From Ref. [@yo01b]). []{data-label="fig:ol1"}](ol1.eps){width="\linewidth"} A silicon cylinder of radius $a=60$ $nm$ in front of a flat dielectric surface with the same value of $\epsilon _{1}$ as considered here, has a Mie resonance excited by the transmitted evanescent wave at $\lambda =638$ $nm$ ($\epsilon _{2}=14.99+i0.14$) [@yo00]. Those eigenmodes are characterized by $n=0$, $l=1$, for $p$ polarization, and $n=1$, $l=1$, for $s$ polarization. We consider the two protrusion interface (Figure \[fig:ol1\], left). Insets of Figures \[fig:ol1\](a) and \[fig:ol1\](b), corresponding to a $p$–polarized incident beam, show the electromagnetic force on the particle as it scans horizontally above the flat surface with two protrusions with parameters $\sigma =63.8$ $nm$, $h=127.6$ $nm$ and $X_{0}=191.4$ $nm$, both at resonant $\lambda $ and out of resonance ($\lambda =538$ $nm$). The particle scans at $d=132.6$ $nm$. Inset (a) shows the force along the $OX$ axis. 
As seen, the force is positive and, at resonance, it has two remarkable maxima corresponding to the two protrusions, even though they appear slightly shifted due to surface propagation of the evanescent waves transmitted under TIR, which produce the Goos–Hänchen shift of the reflected beam. The vertical force on the particle, on the other hand, is negative, namely attractive ([*cf*]{}. Inset (b)), and it has two narrow peaks at $x$ just at the position of the protrusions. The signal is again remarkably stronger under resonant illumination. Similar force signal enhancements are observed for $s$–polarization. In this connection, it was recently found that this attractive force on such small dielectric particles monotonically increases as they approach a flat dielectric interface [@chaumet00a]. ![ Insets: Force on a silicon cylinder with $a=200$ $nm$ scanned at $d=442$ $nm$. (a) $\protect\lambda=919$ $nm$ (on resonance). (b): $\protect \lambda=759$ $nm$ (off resonance). Solid line: Vertical force. Broken line: Horizontal force. Figures: $|E/E_ o|^2$ in this configuration. The cylinder center is placed at: $(0, 642)$ $nm$ (Fig. \[fig:ol2\](a)), and at: $(638, 642)$ $nm$ (Fig. \[fig:ol2\](b)). The wavelength ($\protect\lambda=919$ $nm$) excites the $(n,l)$ Mie resonance. (From Ref. [@yo01b]). []{data-label="fig:ol2"}](ol2.eps){width="\linewidth"} It should be remarked that, by contrast and as expected [@nieto-vesperinas91; @greffet97], the near-field intensity distribution for the magnetic field $H$, normalized to the incident one $H_{0}$, has many more components and interference fringes than the force signal, and thus the resemblance of its image with the interface topography is worse. This is shown in Inset (b) (thin solid line), where we have plotted this distribution in the absence of the particle at $z=d+a$ for the same illumination conditions and parameters as before. This is the one that, ideally, the particle scan should detect in NSOM.
It is also interesting to investigate the near–field intensity distribution map. Figures \[fig:ol1\](a) and \[fig:ol1\](b) show this for the magnetic field $H$ in $p$ polarization for resonant illumination, $\lambda =638$ $nm$, at two different positions of the cylinder, which correspond to $x=0$ and $191.4$ $nm$, respectively. We notice, first, the strong field concentration inside the particle, corresponding to the excitation of the $(n=0,l=1)$–eigenmode. When the particle is over one bump, the variation of the near field intensity is larger in the region closer to it, this being responsible for the stronger force signal on the particle at this position. Similar results are observed for $s$–polarized waves; in this case, the $(n=1,l=1)$–eigenmode of the cylinder is excited and one can appreciate remarkable fringes along the whole interface due to interference of the surface wave, transmitted under TIR, with scattered waves from both the protrusions and the cylinder. An increase of the particle size yields stronger force signals at the expense of losing some resolution. Figures \[fig:ol2\](a) and \[fig:ol2\](b) show the near electric field intensity distribution for $s$ polarization at two different positions of a cylinder with radius $a=200$ $nm$, the parameters of the topography now being $\sigma =212.7$ $nm$, $h=425.3$ $nm$ and $X_{0}=\pm 638$ $nm$. The distance is $d=442$ $nm$. The resonant wavelength is now $\lambda =919$ $nm$ ($\epsilon _{2}=13.90+i0.07$). Insets of Figures \[fig:ol2\](a) and  \[fig:ol2\](b) illustrate the force distribution as the cylinder moves along $OX$. The force peaks, when the resonant wavelength is considered, are positive, because now the scattering force on this particle of larger scattering cross section is greater than the gradient force. They also appear shifted with respect to the protrusion positions, once again due to surface travelling waves under TIR. There are weaker peaks, or an absence of them, at non–resonant $\lambda$.
Similar results occur for a $p$–polarized beam at the resonant wavelength $\lambda =759$ $nm$ ($\epsilon _{2}=13.47+i0.04$). For both polarizations the $(n=3,l=1)$ Mie eigenmode of the cylinder is now excited. The field distribution is well–localized inside the particle and it has the characteristic standing wave structure resulting from interference between counterpropagating whispering–gallery modes circumnavigating the cylinder surface. It is remarkable that this structure appears as produced by the excitation of propagating waves incident on the particle [@yo00], these being due to the coupling of the incident and the TIR surface waves with radiating components of the transmitted field, which are created from scattering with the interface protrusions. Although not shown here, we should remark that illumination at non–resonant wavelengths does not produce such a field concentration within the particle; the field then extends throughout space, with maxima attached to the flat portions of the interface (evanescent wave) and along certain directions departing from the protrusions (radiating waves from scattering at these surface defects). Evanescent components of the electromagnetic field and multiple scattering among several objects are often difficult to handle in an experiment. However, there are many physical situations that involve these phenomena. In this section we have seen that the use of the field inhomogeneity, combined with (and produced by) morphology–dependent resonances and multiple scattering, permits imaging a surface with defects. Whispering–gallery modes in dielectric particles, on the other hand, also produce evanescent fields on the particle surface which enhance the strength of the force signal. The next section studies metallic particles in the same situation as previously discussed, now exciting plasmon resonances on the objects.
Distributions of Forces on Metallic Particles Over Corrugated Surfaces Illuminated Under TIR {#sec:forcemetal} -------------------------------------------------------------------------------------------- Dielectric particles suffer intensity gradient forces under light illumination due to radiation pressure, which permit one to hold and manipulate them by means of optical tweezers [@ashkin77] in a variety of applications such as spectroscopy [@sasaki91; @misawa92; @misawa91], phase transitions in polymers [@hotta98], and light force microscopy of cells [@pralle99; @pralle98] and biomolecules [@smith96]. Metallic particles, however, were initially reported to suffer [*repulsive*]{} electromagnetic scattering forces due to their higher cross sections [@ashkin92], although later [@svoboda94] it was shown that nanometric metallic particles (with diameters smaller than $50$ $nm$) can be held in the focal region of a laser beam. Further, it was demonstrated in an experiment [@sasaki00] that metallic particles illuminated by an evanescent wave created under TIR at a substrate, experience a vertical attractive force towards the plate, while they are pushed horizontally in the direction of propagation of the evanescent wave along the surface. Forces in the $fN$ range were measured. ![$|H/H_ o|^2$, $P$ polarization, from a silver cylinder with $a=60$ $nm$ immersed in water, on a glass surface with defect parameters $X_0=\pm191.4$ $nm$, $h=127.6$ $nm$ and $\protect\sigma=63.8$ $nm$, at distance $d=132.6$ $nm$. Gaussian beam incidence with $W=4000$ $nm$. \[fig:prb3\](a): $\protect\lambda =387$ $nm$ (on resonance), $\protect\theta_ o=0^o$. \[fig:prb3\](b): $\protect\lambda =387$ $nm$ (on resonance), $\protect\theta_o=66^o$. \[fig:prb3\](c): $\protect\lambda =316$ $nm$ (off resonance), $\protect\theta_o=66^o$. \[fig:prb3\](d): $\protect\lambda=387$ $nm$ (on resonance), $\protect\theta_o=66^o$. 
The cylinder center is placed at $(0, 192.6)$ nm in \[fig:prb3\](a), \[fig:prb3\](b) and \[fig:prb3\](c), and at $(191.4, 192.6)$ nm in \[fig:prb3\](d). (From Ref. [@yo01c]). []{data-label="fig:prb3"}](prb3.eps){width="9.5cm"} Plasmon resonances in metallic particles are not so efficiently excited as morphology–dependent resonances in non–absorbing high–refractive–index dielectric particles ([*e.g*]{}., see Refs. [@yo01a; @yo00]) under incident evanescent waves. The distance from the particle to the surface must be very small to avoid the evanescent wave decay normal to the propagation direction along the surface. In this section we address the same configuration as before using water as the immersion medium. The critical angle for the glass–water interface is $\theta _{c}=61.28^{o}$. A silver cylinder of radius $a$ at distance $d+a$ from the flat portion of the surface is now studied. In Figure \[fig:prb3\] we plot the near–field intensity distribution $|H/H_{0}|^{2}$ corresponding to the configuration of inset in Figure \[fig:ol1\]. A silver cylinder of radius $a=60$ $nm$ scans at constant distance $d=162.6$ $nm$ above the interface. The system is illuminated by a $p$–polarized Gaussian beam ($W=4000$ $nm$) at $\theta _{0}=0^{o}$ and $\lambda =387$ $nm$ ($\epsilon _{2}=-3.22+i0.70$). The surface protrusions are positioned at $X_{0}=\pm 191.4$ $nm$ with height $h=127.6$ $nm$ and $\sigma =63.8$ $nm$. Figure \[fig:prb3\](a) shows the aforementioned distribution when the particle is centered between the protrusions. The plasmon resonance is excited as manifested by the field enhancement on the cylinder surface, which is higher in its lower portion. At this resonant wavelength, the main Mie coefficient contributor is $n=2$, which can also be deduced from the interference pattern formed along the particle surface: the number of lobes must be $2n$ along this surface [@owen81]. Figure \[fig:prb3\](b) shows the same situation but with $\theta _{0}=66^{o}$.
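The quoted glass–water critical angle, and the $2n$-lobe counting rule used above, can be reproduced with a short script; the water refractive index ($n_w=1.333$) is an assumption, not stated in the text:

```python
import math

n_glass = math.sqrt(2.3104)  # same dielectric as before, ~1.52
n_water = 1.333              # assumed index of the immersion water

# Critical angle for TIR at the glass-water interface
theta_c = math.degrees(math.asin(n_water / n_glass))
print(f"glass-water critical angle: {theta_c:.2f} deg")  # ~61.28 deg

# For a plasmon mode dominated by the Mie coefficient of order n, the
# standing-wave pattern along the cylinder surface shows 2n lobes
n = 2
print("expected lobes for n = 2:", 2 * n)  # 4
```

With this water index the incidence angle $\theta_0=66^{o}$ used below is indeed just above $\theta_c$, so the transmitted field is evanescent over the flat portions of the interface.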
The field intensity close to the particle is higher in Figure \[fig:prb3\](a) because in Figure \[fig:prb3\](b) the distance $d$ is large enough to obliterate the resonance excitation due to the decay of the evanescent wave created by TIR [@yo01a]. However, the field intensity is markedly different from the one shown in Figure \[fig:prb3\](c), in which the wavelength has been changed to $\lambda =316$ $nm$ ($\epsilon _{2}=0.78+i1.07$) so that there is no particle resonance excitation at all. Figure \[fig:prb3\](d) shows the same situation as in Figure \[fig:prb3\](b) but at a different $X$–position of the particle. In Figure \[fig:prb3\](c), the interference in the scattered near–field due to the presence of the particle is rather weak; the field distribution is now seen to be mainly concentrated at low $z$ as an evanescent wave travelling along the interface, and this distribution does not substantially change as the particle moves over the surface at constant $z$. By contrast, in Figures \[fig:prb3\](b) and \[fig:prb3\](d) the intensity map is strongly perturbed by the presence of the particle. As we shall see, this is the main reason why optical force microscopy is possible under resonant conditions with such small metallic particles used as nanoprobes, and not so efficient at non–resonant wavelengths. In connection with these intensity maps ([*cf*]{}. Figures \[fig:prb3\](b) and \[fig:prb3\](d)), we should point out the interference pattern on the left side of the cylinder between the evanescent wave and the strongly reflected waves from the particle, which under resonant conditions behaves as a strongly radiating antenna [@yo01a; @yo00; @krenn99]. This can also be envisaged as due to the much larger scattering cross section of the particle on resonance, hence reflecting back a higher intensity and thus enhancing the interference with the evanescent incident field. The fringe spacing is $\lambda /2$ ($\lambda $ being the corresponding wavelength in water).
This is explained as follows: The interference pattern formed by the two evanescent waves travelling on the surface opposite to each other, with the same amplitude and no dephasing, is proportional to $\exp (-2\kappa z)\cos ^{2}(n_{1}k_{0}\sin \theta _{0}x)$, with $\kappa =k_{0}(n_{1}^{2}\sin ^{2}\theta _{0}-n_{0}^{2})^{1/2}$. The distance between maxima is $\Delta x=\lambda /(2n_{1}\sin \theta _{0})$. For the angles of incidence used in this work under TIR ($\theta _{0}=66^{o}$ and $72^{o}$), $\sin \theta _{0}\approx 0.9$, and taking into account the refractive indices of water and glass, one can express this distance as $\Delta x\approx \lambda /(2n_{0})$. The quantity $\Delta x$ is similar to the fringe period below the particle in Figure \[fig:prb3\](a), now attributed to the interference between two opposite travelling plane waves, namely, the one transmitted through the interface and the one reflected back from the particle. ![ Force on a silver cylinder with $a=60$ $nm$ immersed in water, scanned at constant distance $d=132.6$ $nm$ on a glass surface with defect parameters $X_0=\pm 191.4$ $nm$, $h=127.6$ $nm$ and $\protect\sigma=63.8$ $nm$ along $OX$. The incident field is a $p$–polarized Gaussian beam with $W=4000$ $nm$ and $\protect\theta_ 0=66^o$. \[fig:prb5\](a): Horizontal force. \[fig:prb5\](b): Vertical force. Solid curves: $\protect\lambda= 387$ $nm$ (on resonance), broken curves: $\protect\lambda= 316$ $nm$ (off resonance). Thin lines in \[fig:prb5\](b) show $|H/H_0|^2$ (in arbitrary units), averaged on the perimeter of the cylinder cross section, while it scans the surface. The actual magnitude of the intensity in the resonant case is almost seven times larger than in the non–resonant one. (From Ref. [@yo01c]).
[]{data-label="fig:prb5"}](prb5.eps){width="\linewidth"} ![$|H/H_ o|^2$ for $P$ polarization for a silver cylinder with $a=200$ $nm$ immersed in water, on a glass surface with parameters $X_0=\pm 638$ $nm$, $h=425.3$ $nm$ and $\protect\sigma=212.7$ $nm$, at distance $d=442$ $nm$. Gaussian beam incidence with $W=4000$ $nm$. \[fig:prb8\](a): $\protect \lambda =441$ $nm$ (on resonance), $\protect\theta_ o=0^o$ and the cylinder center placed at $(-1276, 642)$ $nm$. \[fig:prb8\](b): $\protect\lambda =441$ $nm$ (on resonance), $\protect\theta_o=66^o$ and the cylinder center placed at $(1276, 642)$ $nm$. \[fig:prb8\](c): $\protect\lambda =316$ $nm$ (off resonance), $\protect\theta_o=66^o$ and the cylinder center placed at $(1276, 642)$ $nm$. (From Ref. [@yo01c]). []{data-label="fig:prb8"}](prb8.eps){width="11cm"} Figure \[fig:prb5\] shows the variation of the Cartesian components of the electromagnetic force ($F_x$, Fig. \[fig:prb5\](a) and $F_z$, Fig. \[fig:prb5\](b)) on scanning the particle at constant distance $d$ above the interface, at either plasmon resonance excitation ($\lambda=387$ $nm$, solid lines), or off resonance ($\lambda=316$ $nm$, broken lines). The incident beam power (per unit length) on resonance is $3.9320 \times 10^{-6}$ $mW/nm$, and $3.9327\times 10^{-6}$ $mW/nm$ at $\lambda=316$ $nm$. The incidence is done with a $p$–polarized Gaussian beam of $W=4000$ $nm$ at $\theta_0=66^o$. It is seen from these curves that the force distributions resemble the surface topography under resonant conditions, with a signal which is remarkably larger than off–resonance. This feature is especially manifest in the $Z$ component of the force, in which the two protrusions are clearly distinguished from the rest of the interference ripples, as explained above.
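The fringe-spacing estimate given earlier, $\Delta x=\lambda /(2n_{1}\sin \theta _{0})\approx \lambda /(2n_{0})$, is easy to check numerically; the glass and water indices used below are assumptions consistent with the configuration:

```python
import math

n1 = math.sqrt(2.3104)  # glass, ~1.52
n0 = 1.333              # water (assumed index)

lam = 387.0  # nm, resonant wavelength for the small silver cylinder
for theta0 in (66.0, 72.0):
    # exact fringe spacing of the evanescent-wave interference pattern
    exact = lam / (2 * n1 * math.sin(math.radians(theta0)))
    # approximation quoted in the text, lambda/(2 n0)
    approx = lam / (2 * n0)
    print(f"theta0 = {theta0} deg: exact = {exact:.1f} nm, "
          f"approx = {approx:.1f} nm")
```

For both incidence angles the two values agree to within about 10%, which is why the text can read the fringe period directly as $\lambda /2$ in water.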
Figure \[fig:prb5\](b) also shows (thin lines) the signal that conventional near-field microscopy would measure in this configuration, namely, the normalized magnetic near field intensity, averaged on the cylinder cross section. These intensity curves are shown in arbitrary units, and in fact the curve corresponding to plasmon resonant conditions is almost seven times larger than the one off–resonance. The force curves show, on the one hand, that resonant conditions also enhance the contrast of the surface topography image. Thus, the images obtained from the electromagnetic force follow the topography more faithfully than those from the near-field intensity. This is a fact also observed with other profiles, including surface–relief gratings. When the parameter $h$ is inverted (namely, for the interface profile on the left in Fig. \[fig:ol1\]), the vertical component of the force distribution presents an inverted contrast. On the whole, one observes from these results that both the positions and the sign of the defect height can be distinguished by the optical force scanning. ![ Force on a silver cylinder with $a=200$ $nm$ immersed in water, scanned at constant distance on a glass surface with parameters $X_0=\pm 638$ $nm$, $h=425.3$ $nm$ and $\protect\sigma=212.7$ $nm$ along $OX$. The incident field is a $p$–polarized Gaussian beam with $W=4000$ $nm$ and $\protect\theta_ 0=66^o$. \[fig:prb9\](a): Horizontal force. \[fig:prb9\](b): Vertical force. Solid curves: $\protect\lambda= 441$ $nm$ (on resonance), broken curves: $\protect\lambda= 316$ $nm$ (off resonance). Thin solid curves: $\protect\lambda= 441$ $nm$ (on resonance) at $\protect\theta_ 0=72^o $, thin broken curves: $\protect\lambda= 316$ $nm$ (off resonance) at $\protect\theta_ 0=72^o$. (From Ref. [@yo01c]). []{data-label="fig:prb9"}](prb9.eps){width="\linewidth"} Figure \[fig:prb8\] displays near–field intensity maps for a larger particle ($a=200$ $nm$).
Figure \[fig:prb8\](a) corresponds to $\theta _{0}=0^{o}$ and a resonant wavelength $\lambda =441$ $nm$ ($\epsilon _{2}=-5.65+i0.75$), with the particle being placed on the left of both protrusions. Figure \[fig:prb8\](b) corresponds to $\theta _{0}=66^{o}$ (TIR illumination conditions), at the same resonant wavelength, the particle now being on the right of the protrusions. Figure \[fig:prb8\](c) corresponds to $\theta _{0}=66^{o}$ (TIR incidence), at the non–resonant wavelength $\lambda =316$ $nm$ ($\epsilon _{2}=0.78+i1.07$), the particle being placed at the right of the protrusions. The incident beam is $p$–polarized with $W=4000$ $nm$. The surface protrusions are positioned at $X_{0}=\pm 638$ $nm$ with height $h=425.3$ $nm$ and $\sigma =212.7$ $nm$. All the relevant size parameters are now comparable to the wavelength, and hence to the decay length of the evanescent wave. That is why the plasmon resonance cannot now be strongly excited. When a non–resonant wavelength is used, the intensity interference fringes due to the presence of the particle are weaker. On the other hand, Figure \[fig:prb8\](a) shows the structure of the near–field scattered under $\theta _{0}=0^{o}$. There are three objects that scatter the field: the two protrusions and the particle. They create an interference pattern with period $\lambda /2$ (with $\lambda $ being the wavelength in water). Besides, the particle shows an interference pattern around its surface due to the two counterpropagating plasmon waves which circumnavigate it [@yo01a; @yo00]. The number of lobes along the surface is nine, which reflects that the contribution to the field enhancement at this resonant wavelength comes from Mie’s coefficients $n=5$ and $n=4$. Figure \[fig:prb8\](b) shows weaker excitation of the same plasmon resonance under TIR conditions. Now, the interference pattern at the incident side of the configuration is also evident.
This pattern again has a period $\lambda /2$ ($\lambda $ being the wavelength in water). If non–resonant illumination conditions are used, the particle is too far from the surface to substantially perturb the transmitted evanescent field; the intensity distribution of this field then remains closely attached to the interface and is scattered by the surface protrusions. The field felt by the particle in this situation is not sufficient to yield a well–resolved image of the surface topography, as shown next for this same configuration. Figure \[fig:prb9\] shows the components of the force ($F_{x}$, Figure \[fig:prb9\](a) and $F_{z}$, Figure \[fig:prb9\](b)) for either plasmon excitation conditions ($\lambda =441$ $nm$, solid lines), or off–resonance ($\lambda =316$ $nm$, broken lines), as the cylinder scans at constant distance $d$ above the surface. The incidence is done with a $p$–polarized Gaussian beam of $W=4000$ $nm$ at either $\theta _{0}=66^{o}$ (thick curves) or $\theta _{0}=72^{o}$ (thin curves). The incident beam power (per unit length) is $3.9313\times 10^{-6}$ $mW/nm$ on resonance and $3.9327\times 10^{-6}$ $mW/nm$ at $\lambda =316$ $nm$ when $\theta _{0}=66^{o}$, and $3.9290\times 10^{-6}$ $mW/nm$ on resonance and $3.9315\times 10^{-6}$ $mW/nm$ at $\lambda =316$ $nm$ when $\theta _{0}=72^{o}$. As before, resonant conditions provide a better image of the surface topography, making the two protrusions distinguishable with a contrast higher than the one obtained without plasmon excitation. The surface image corresponding to the force distribution is better when the protrusions (not shown here) are inverted, because then the particle can be kept closer to the interface. Again, the curve contrasts yielded by protrusions and grooves are inverted with respect to each other. The positions of the force distribution peaks corresponding to the protrusions now appear appreciably shifted with respect to the actual protrusions’ position.
This shift is explained as due to the Goos–Hänchen effect of the evanescent wave [@yo01b]. We observe that the distance between these peaks in the $F_{z}$ curve is approximately $2X_{0}$. This shift is more noticeable in the force distribution as the probe size increases[^4]. Again, the $F_{z}$ force distribution has a higher contrast at the (shifted) position of the protrusions. The force signal with these bigger particles is larger, but the probe has to be placed farther from the surface at constant height scanning. This affects the strength of the signal. Finally, it is important to state that the angle of incidence (assumed to be larger than the critical angle $\theta _{c}$) influences both the contrast and the strength of the force: the contrast decreases as the angle of incidence increases. At the same time, the strength of the force signal also diminishes. As seen in the force figures for both sizes of particles, most curves contain tiny ripples. They are due to the field intensity interference pattern as shown in Figures \[fig:prb3\] and \[fig:prb8\], and discussed above. As the particle moves, the force on it is affected by this interference. As a matter of fact, it can be noted in the force curves that these tiny ripples are mainly present at the left side of the particle, which is the region where stronger interference takes place. It is worth remarking, however, that these oscillations are less marked in the force distribution ([*cf*]{}. their tiny ripples) than in the near field intensity distribution, where the interference patterns present much higher contrast. As stated in the previous section, evanescent fields and multiple scattering are fruitful for extracting information from a detection setup. The latter is, at the same time, somewhat troublesome, as it cannot be neglected at will. This inconvenience is well known in NSOM, but it is diminished in PFM, as remarked before.
The smoother signal provided by the force is underlined by two facts: one is the averaging process on the particle surface, quantitatively interpreted from the field surface integration involved in Maxwell’s stress tensor. The other is the local character of the force acting at each point of the particle surface. Metallic particles are better candidates than dielectric particles as PFM probes, since the force signal is not only enhanced under resonance conditions, but is also bigger and presents better resolution. However, dielectric particles are preferred when the distance to the interface is large, since the weak evanescent field present at these distances couples better to the whispering–gallery modes than to the plasmon surface waves of metallic particles. On the Attractive and Repulsive Nature of Vertical Forces and Their Orders of Magnitude {#sec:discussion} --------------------------------------------------------------------------------------- The horizontal forces acting on the particle are scattering forces due to radiation pressure of both the incident evanescent wave and the field scattered by the protrusions, thus the forces are positive in all the cases studied. As for the vertical forces, two effects compete in determining their sign. First is the influence of the polarizability [@chaumet00b; @chaumet00a], which depends on the polarization of the illumination. On the other hand, it is well known that an evanescent wave produces only gradient forces in the vertical direction. For silver cylinders, the force at wavelength $\lambda =387$ $nm$ ($\epsilon _{2}=-3.22+i0.70$) and at $\lambda =441$ $nm$ ($\epsilon _{2}=-5.65+i0.75$) must be attractive, while at $\lambda =316$ $nm$ ($\epsilon _{2}=0.78+i1.07$), the real part of the polarizability changes its sign, and so does the gradient force, thus becoming repulsive (on cylinders of not very large sizes, as here).
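This sign argument can be sketched in the quasi-static dipolar limit for a small cylinder in a host medium, where the polarizability is proportional to $(\epsilon_2-\epsilon_m)/(\epsilon_2+\epsilon_m)$; the host permittivity $\epsilon_m=1.333^2$ (water) is an assumption, and this proportionality is only a stand-in for the full Maxwell-stress calculation:

```python
# Quasi-static (dipolar) sign check: the gradient force is attractive
# (toward high intensity) when Re(alpha) > 0. The host permittivity of
# water, eps_m = 1.333**2, is an assumption consistent with the text.
eps_m = 1.333 ** 2

# Silver permittivities quoted in the text at each wavelength (nm)
silver = {387: -3.22 + 0.70j,   # plasmon excited -> attractive expected
          441: -5.65 + 0.75j,   # plasmon excited -> attractive expected
          316:  0.78 + 1.07j}   # off resonance   -> repulsive expected

for lam, eps2 in silver.items():
    alpha = (eps2 - eps_m) / (eps2 + eps_m)  # 2D cylinder polarizability factor
    kind = "attractive" if alpha.real > 0 else "repulsive"
    print(f"lambda = {lam} nm: Re(alpha) = {alpha.real:+.2f} -> "
          f"gradient force {kind}")
```

With these numbers, $\mathrm{Re}\,\alpha$ is positive at $387$ and $441$ $nm$ and negative at $316$ $nm$, reproducing the sign change of the gradient force stated above.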
However, in the cases studied here, not only the multiple scattering of light between the cylinder and the flat portion of the interface, but also the surface defects, produce scattered waves both propagating (into $z>0$) and evanescent under TIR conditions. Thus, the scattering forces also contribute to the $z$–component of the force. This affects the sign of the forces, but it is more significant as the size of the objects increases. In larger cylinders and defects ([*cf*]{}. Figure \[fig:prb9\]), the gradient force is weaker than the scattering force, thus making $F_{z}$ repulsive on scanning at $\lambda =441$ $nm$ (plasmon excited). On the other hand, for the smaller silver cylinders studied ([*cf*]{}. Figure \[fig:prb5\]), the gradient force is greater than the scattering force at $\lambda =387$ $nm$ (plasmon excited), and thus the force is attractive in this scanning. Also, as the distance between the particle and the surface decreases, the gradient force becomes more attractive [@chaumet00b; @chaumet00a]. This explains the dips and change of contrast in the vertical force distribution on scanning both protrusions and grooves. At $\lambda =316$ $nm$ (no plasmon excited), both scattering and gradient forces act cooperatively in the vertical direction, making the force repulsive, no matter the size of the cylinder. For the silicon cylinder, as shown, the vertical forces acting under TIR conditions are attractive in the absence of surface interaction (for both polarizations and the wavelengths used). However, this interaction is able to turn the vertical force repulsive for $S$ polarization at $\lambda =538$ $nm$, due to the scattering force. This study also reveals the dependence of the attractive or repulsive nature of the forces on the size of the objects (probe and defects of the surface), apart from the polarizability of the probe and the distance to the interface, when illumination under total internal reflection is considered.
The competition between the strength of the scattering and the gradient force determines this nature. The order of magnitude of the forces obtained in the preceding 2D calculations is consistent with that of forces in experiments and 3D calculations of Refs. [@pohl93; @guntherodt95; @depasse92; @sugiura93; @kawata92; @dereux94; @girard94; @almaas95; @novotny97; @hecht96; @okamoto99; @chaumet00b] and [@chaumet00a]. Suppose a truncated cylinder with axial length $L=10$ $\mu m$, and a Gaussian beam with $2W\sim 10$ $\mu m$. Then, a rectangular section of $L\times 2W=10^{2}$ $\mu m^{2}$ is illuminated on the interface. For an incident power $P_{0}\sim 1$ $mW$, spread over this rectangular section, the incident intensity is $I_{0}\sim 10^{-2}$ $mW/\mu m^{2}$, and the force range from our calculations is $F\sim 10^{-2}-10^{-1}$ $pN$. Thus, the forces obtained in Figures \[fig:prb9\](a) and \[fig:prb9\](b) are consistent with those presented, for example, in Ref. [@kawata92]. Concluding Remarks & Future Prospects {#sec:concluding} ===================================== The forces exerted by both propagating and evanescent fields on small particles are the basis for understanding structural characteristics of time–harmonic fields. The simplest evanescent field that can be built is the one that we have illustrated on transmission at a dielectric interface when TIR conditions occur. Both dielectric and metallic particles are pushed along the direction of propagation of the evanescent field, independently of the size (scattering and absorption forces). By contrast, forces behave differently along the decay direction (gradient forces) on either dielectric or metallic particles, as studied for dipolar–sized particles, Section \[sec:dipapprox\]. The analysis done in the presence of a rough interface, with particles able to interact with it, shows that scattering, absorption and gradient forces act in both the amplitude and phase directions when multiple scattering takes place.
Moreover, the excitation of particle resonances enhances this interaction and, at the same time, generates evanescent fields (surface waves) on their surface, which makes this mixing among force components even more complex. Thus, an analysis based on an isolated small particle is not feasible due to the high inhomogeneity of the field. It is, however, this inhomogeneity that provides a way of imaging a surface with structural features, such as topography. The possibility offered by the combination of evanescent fields and Mie resonances is, however, not unique. As inhomogeneous fields (which can be analytically decomposed into propagating and evanescent fields [@nieto-vesperinas91]) play an important role in the mechanical action of the electromagnetic wave on dielectric particles (either on or out of resonance), they can be used to operate at the nanometric scale on such entities, to assist the formation of ordered particle structures, as for example in [@burns89; @burns90; @antonoyiannakis97; @antonoyiannakis99; @malley98; @bayer98; @barnes02], with the help of these resonances. Forces created by evanescent fields on particles and morphology–dependent resonances are the keys to controlling optical binding and the formation of photonic molecules. Also, when a particle is used as a nanodetector, these forces are the signal in a scheme of photonic force microscopy, as modeled in this article. It has been shown that evanescent field forces and plasmon resonance excitations permit the manipulation of metallic particles [@novotny97; @chaumet01; @chaumet02], as well as the realization of such microscopy [@yo01c]. Nevertheless, controlled experiments on force magnitudes, due to both evanescent and propagating waves, are still scarce, and it is thus desirable to foster them. The concepts presented in this article open an ample window for investigating soft-matter components such as cells and molecules in biology.
Most folding processes involve small forces that must be detected and controlled without altering them, so as to be capable of actuating on them and extracting information from them. General Annotated References {#sec:references .unnumbered} ============================ Complementary information and sources for some of the contents treated in this report can be found in the following bibliography: - Electromagnetic optics: there are many books in which to find the basics of electromagnetic theory and optics. We cite here the most common: [@jackson75; @bornwolf99]. Maxwell’s stress tensor is analysed in [@jackson75; @stratton41]. The mathematical level of these textbooks is similar to the one in this report. - Mie theory can be found in [@vandehulst81; @kerker69; @bohren83]. In these textbooks, the optics of particles is developed with little mathematics, and all of them are comparable in content. - Resonances can be understood from the textbooks above, but more detailed information, with applications and implications in many topics, can be found in the following references: [@yophd; @hill88; @barber90]. The last two references are mainly centred on dielectric particles. The first one compiles some of the information in these two references and some more from scientific papers. Surface plasmons, on the other hand, are studied in depth in [@raether88]. They are easy to understand from general physics. - Integral equations in scattering theory and the angular spectrum representation (for the decomposition of time–harmonic fields into propagating and evanescent components) are treated in [@nieto-vesperinas91]. The mathematical level is similar to the one in this report. - The Coupled Dipole Method can be found in the scientific papers cited in Section \[sec:CDM\]. More didactic references are [@chaumetphd; @rahmaniphd]. The information given in this report on the CDM is extended in these references.
- The dipolar approximation, in the context of optical forces and evanescent fields, can be complemented in scientific papers: [@gordon73; @chaumet00c; @yo02b] and in the monograph: [@novotny00]. - A more detailed discussion on the sign of optical forces for dipolar particles, as well as larger elongated particles, can be found in the scientific papers [@yo02a; @yo02b; @chaumet00b]. - Monographs on NSOM and tweezers: [@nieto-vesperinas96; @sheetz97]. We thank P. C. Chaumet and M. Lester for work that we have shared through the years. Grants from DGICYT and European Union, as well as a fellowship of J. R. Arias-González from Comunidad de Madrid, are also acknowledged. [Ba[ñ]{}os 66a]{} G. S. Agarwal. .Phys. Rev. A,  11, pages 230–242, 1975. G. S. Agarwal. .Phys. Rev. A,  12, pages 1475–1497, 1975. M. Allegrini, N. García and O. Marti, editors.Nanometer scale science and technology, Amsterdam, 2001. Società Italiana di Fisica, IOS PRESS.in [*Proc. Int. Sch. E. Fermi, Varenna*]{}. E. Almaas and I. Brevik. . J. Opt. Soc. Am. B,  12, pages 2429–2438, 1995. M. Alonso and E. J. Finn. .Addison–Wesley Series in Physics, Addison–Wesley, Reading, MA, 1968. M. I. Antonoyiannakis and J. B. Pendry. . Europhys. Lett.,  40, pages 613–618, 1997. M. I. Antonoyiannakis and J. B. Pendry. . Phys. Rev. B,  60, pages 2363–2374, 1999. J. R. Arias-González, M. Nieto-Vesperinas and A. Madrazo. .J. Opt. Soc. Am. A,  16, pages 2928–2934, 1999. J. R. Arias-González and M. Nieto-Vesperinas. .Opt. Lett.,  25, pages 782–784, 2000. J. R. Arias-González, P. C. Chaumet and M. Nieto-Vesperinas. .In Nanometer Scale Science and Technology, 2001. [@allegrinigarciamarti01]. J. R. Arias-González and M. Nieto-Vesperinas. .J. Opt. Soc. Am. A,  18, pages 657–665, 2001. J. R. Arias-González, M. Nieto-Vesperinas and M. Lester.. Phys. Rev. B, 65, page 115402, 2002. J. R. Arias-González and M. Nieto-Vesperinas.. Opt. Lett., submitted, 2002. J. R. Arias-González and M. Nieto-Vesperinas.. J. Opt. Soc. Am. 
A, submitted, 2002. J. R. Arias-González . Universidad Complutense de Madrid, Spain, 2002. A. Ashkin and J. M. Dziedzic. . Phys. Rev. Lett.,  38, pages 1351–1354, 1977. A. Ashkin, J. M. Dziedzic, J. E. Bjorkholm and S. Chu. .Opt. Lett.,  11, pages 288–290, 1986. A. Ashkin and J. M. Dziedzic. .Appl. Phys. Lett.,  24, pages 586–589, 1992. A. Ba[ñ]{}os.1966.in [@banios66], chapter 2. A. Ba[ñ]{}os. .Pergamon Press, Oxford, 1966. P. W. Barber, J. F. Owen and R. K. Chang. .IEEE Trans. Antennas Propagat.,  30, pages 168–172, 1982. P. W. Barber and R. K. Chang, editors. .World Scientific, Singapore, 1988. P. W. Barber and S. C. Hill. .World Scientific, Singapore, 1990. M. D. Barnes, S. M. Mahurin, A. Mehta, B. G. Sumpter and D. W. Noid. . Phys. Rev. Lett.,  88, page 015508, 2002. M. Bayer, T. Gutbrod, J. P. Reithmaier, A. Forchel, T. L. Reinecke, P. A. Knipp, A. A. Dremin and V. D. Kulakovskii. . Phys. Rev. Lett.,  81, pages 2582–2585 , 1998. D.S. Benincasa, P.W. Barber, J-Z. Zhang, W-F. Hsieh and R.K. Chang. .Appl. Opt.,  26, pages 1348–1356, 1987. C. F. Bohren and D. R. Huffman. . Wiley–Interscience Publication, New York, 1983. M. Born and E. Wolf.1999. in [@bornwolf99], section 11.4.2. M. Born and E. Wolf.1999. in [@bornwolf99], pp 34. M. Born and E. Wolf. .Cambridge University Press, Cambridge, 7nd edition, 1999. H. C. Bryant and A. J. Cox. .J. Opt. Soc. Am. A,  56, pages 1529–1532, 1966. M. M. Burns, J.-M. Fournier and J. A. Golovchenco. . Science,  249, pages 749–754, 1990. M. M. Burns, J.-M. Fournier and J. A. Golovchenco. . Phys. Rev. Lett.,  63, pages 1233-1236, 1989. S. Chang, J. H. Jo and S. S. Lee. .Opt. Commun.,  108, pages 133–143, 1994. P. C. Chaumet and M. Nieto-Vesperinas. .Phys. Rev. B,  61, pages 14119–14127, 2000. P. C. Chaumet and M. Nieto-Vesperinas. .Phys. Rev. B,  62, pages 11185–11191, 2000. P. C. Chaumet and M. Nieto-Vesperinas. .Opt. Lett.,  25, pages 1065–1067, 2000. P. C. Chaumet and M. Nieto-Vesperinas. . Phys. Rev. B,  64, page 035422, 2001. 
P. C. Chaumet, A. Rahmani and M. Nieto-Vesperinas. . Phys. Rev. Lett.,  88, page 123601, 2002. P. C. Chaumet. . Université de Bourgogne, France, 1998. H. W. Chew, D.-S. Wang and M. Kerker.. Appl. Opt.,  18, 2679, 1979. A. R. Clapp, A. G. Ruta and R. B. Dickinson. .Rev. Sci. Instr.,  70, pages 2627–2636, 1999. L. Collot, V. Lefèvre-Seguin, M. Brune, J.M. Raimond and S. Haroche. .Europhys. Lett.,  23, pages 327–334, 1993. J. C. Crocker and D. G. Grier. .Phys. Rev. Lett.,  73, pages 352–355, 1994. F. Depasse and D. Courjon.Opt. Commun.,  87,  79, 1992. A. Dereux, C. Girard, O. J. F. Martin and M. Devel. .Europhys. Lett.,  26, pages 37–42, 1994. A. C. Dogariu and R. Rajagopalan..Langmuir,  16, pages 2770–2778, 2000. B. T. Draine. .Astrophys. J.,  333, pages 848–872, 1988. C. E. Dungey and C. F. Bohren. .J. Opt. Soc. Am. A,  8, pages 81–87, 1991. T. S. Fahlen and H. C. Bryant. .J. Opt. Soc. Am.,  58, pages 304–310, 1968. E.-L. Florin, J. K. H. Hörber and E. H. K. Stelzer. . Appl. Phys. Lett.,  69, pages 446–448, 1996. L. P. Ghislain and W. W. Webb..Opt. Lett.,  18, pages 1678–1680, 1993. C. Girard, A. Dereux and O. J. F. Martin. .Phys. Rev. B,  49, pages 13872–13881, 1994. J. P. Gordon. .Phys. Rev. A,  8, pages 14–21, 1973. J. J. Greffet and R. Carminati..Prog. Surf. Sci.,  56, pages 133–235, 1997. H.-J. Güntherodt, D. Anselmetti and E. Meyer, editors., Dordrecht, 1995. NATO ASI Series, Kluwer Academic Publishing. B. Hecht, H. Bielefeldt, L. Novotny, Y. Inouye and D. W. Pohl. .Phys. Rev. Lett.,  77, pages 1889–1892, 1996. S. C. Hill and R. E. Benner. Morphology–dependent resonances, chapitre 1.World Scientific, 1988.  [@barber88]. J. K. H. Hörber. .In Nanometer Scale Science and Technology, 2001. [@allegrinigarciamarti01]. J. Hotta, K. Sasaki, H. Masuhara and Y. Morishima. .J. Phys. Chem. B,  102, pages 7687–7690, 1998. J. D. Jackson. .Wiley–Interscience Publication, New York, 2nd edition, 1975. S. Kawata and T. Sugiura. .Opt. Lett.,  17, pages 772–774, 1992. S. 
Kawata and T. Tani. . Opt. Lett.,  21, pages 1768–1770, 1996. S. Kawata, editor. Near-field Optics and Surface Plasmon Polaritons, Topics in Applied Physics, Springer–Verlag, Berlin, 2000. O. Keller, M. Xiao and S. Bozhevolnyi. .Surf. Sci.,  280, pages 217–230, 1993. M. Kerker. . Academic Press, New York, 1969. V. Khare and H. M. Nussenzveig. .Phys. Rev. Lett.,  38, pages 1279–1282, 1968. T. Klar, M. Perner, S. Grosse, G.V. Plessen, W. Spirkl and J. Feldmann. .Phys. Rev. Lett.,  80, pages 4249–4252, 1988. J.C. Knight, N. Dubreuil, V. Sandoghdar, J. Hare, V. Lefèvre-Seguin, J.M. Raimond and S. Haroche. . Opt. Lett.,  20, pages 1515–1517, 1995. J. R. Krenn, A. Dereux, J. C. Weeber, E. Bourillot, Y. Lacroute, J. P. Goudonnet, G. Schider, W. Gotschy, A. Leitner, F. R. Aussenegg and C. Girard. . Phys. Rev. Lett.,  82, pages 2590–2593, 1999. M. Lester and M. Nieto-Vesperinas.. Opt. Lett.,  24, pages 936–938, 1999. M. Lester, J. R. Arias-González and M. Nieto-Vesperinas. .Opt. Lett.,  26, pages 707–709, 2001. L. E. Malley, D. A. Pommet and M. A. Fiddy. . J. Opt. Soc. Am. B,  15, pages 1590–1595, 1998. L. Mandel and E. Wolf. .Cambridge University Press, Cambridge, 1995. H. Misawa, M. Koshioka, K. Sasaki, N. Kitamura and H. Masuhara. .J. Appl. Phys.,  70, pages 3829–3836, 1991. H. Misawa, K. Sasaki, M. Koshioka, N. Kitamura and H. Masuhara. .Appl. Phys. Lett.,  60, pages 310–312, 1992. P. M. Morse and H. Feshbach..McGraw–Hill, New York, 1953. M. Nieto-Vesperinas. .John Wiley & Sons, Inc, New York, 1991. M. Nieto-Vesperinas and N. García, editors., Dordrecht, 1996. NATO ASI Series, Kluwer Academic Publishing. L. Novotny, R. X. Bian and X. S. Xie. .Phys. Rev. Lett.,  79, pages 645–648, 1997. L. Novotny. . In Near-field Optics and Surface Plasmon Polaritons, Topics in Applied Physics,  81, pages 123–141, 2000. [@kawata00]. K. Okamoto and S. Kawata. . Phys. Rev. Lett.,  83, pages 4534–4537, 1999. J. F. Owen, R. K. Chang and P. W. Barber..Opt. Lett.,  6, pages 540–542, 1981. M. 
A. Paesler and P. J. Moyer. .John Wiley & Sons, Inc, New York, 1996. D. N. Pattanayak and E. Wolf. .Phys. Rev. E,  13, pages 2287–2290, 1976. D. N. Pattanayak and E. Wolf. .Phys. Rev. E,  13, pages 913–923, 1976. D. W. Pohl and D. Courjon, editors., Dordrecht, 1993. NATO ASI Series, Kluwer Academic Publishing. A. Pralle, E.-L. Florin, E. H. K. Stelzer and J. K. H. Hörber. .Appl. Phys. A,  66, pages S71–S73, 1998. A. Pralle, M. Prummer, E.-L. Florin, E. H. K. Stelzer and J. K. H. Hörber. .Microsc. Res. Tech.,  44, pages 378–386, 1999. D. C. Prieve and J. Y. Walz. .Appl. Opt.-LP,  32, 1629, 1993. E. M. Purcell and C. R. Pennypacker. .Astrophys. J.,  186, pages 705–714, 1973. H. Raether.. Springer–Verlag, Berlin Heidelberg, 1988. A. Rahmani and F. de Fornel. . Eyrolles and France Télécom-CNET, Paris, 2000. K. Sasaki, M. Koshioka, H. Misawa, N. Kitamura and H. Masuhara. .Opt. Lett.,  16, pages 1463–1465, 1991. K. Sasaki, M. Tsukima and H. Masuhara. .Appl. Phys. Lett.,  71, pages 37–39, 1997. K. Sasaki, J. Hotta, K. Wada and H. Masuhara. .Opt. Lett.,  25, pages 1385–1387, 2000. M. P. Sheetz, editor. .Academic Press, San Diego, CA, 1997. S. B. Smith, Y. Cui and C. Bustamante..Science,  271, pages 795–799, 1996. A. L. Stout and W. W. Webb. .Methods Cell Biol.,  55,  99, 1997. in [@sheetz97]. J. A. Stratton. .McGraw–Hill, New York, 1941. T. Sugiura and S. Kawata. .Bioimaging,  1, pages 1–5, 1993. K. Svoboda and S. M. Block. .Opt. Lett.,  19, pages 13–15, 1994. T. Tamir. .Optik,  36, pages 209–232, 1972. T. Tamir. .Optik,  37, pages 204–228, 1972. H. C. van de Hulst. .Dover, New York, 1981. M. Vilfan, I. Mus[ě]{}vi[č]{} and M. [Č]{} opi[č]{}. .Europhys. Lett.,  43, pages 41–46, 1998. K. Wada, K. Sasaki and H. Masuhara. .Appl. Phys. Lett.,  76, pages 2815–2817, 2000. J. Y. Walz. .Appl. Opt.,  38, pages 5319–5330, 1999. D. S. Weiss, V. Sandoghdar, J. Hare, V. Lefè vre-Seguin, J.M. Raimond and S. Haroche. .Opt. Lett.,  20, pages 1835–1837, 1995. A. D. Yaghjian. .Proc. 
IEEE,  68, pages 248–263, 1980. [^1]: mnieto@icmm.csic.es [^2]: ricardo.arias@imdea.org [^3]: NSOM is also called SNOM, an abbreviation for scanning near–field optical microscopy. [^4]: For a better picture of this shift, see the grating case in Ref. [@yo01b].
--- abstract: 'We study here properties of [*free Generalized Inverse Gaussian distributions*]{} (fGIG) in free probability. We show that in many cases the fGIG shares similar properties with the classical GIG distribution. In particular we prove that the fGIG is freely infinitely divisible, free regular and unimodal, and moreover we determine which distributions in this class are freely selfdecomposable. In the second part of the paper we prove that for free random variables $X,Y$, where $Y$ has a free Poisson distribution, one has $X\stackrel{d}{=}\frac{1}{X+Y}$ if and only if $X$ has the fGIG distribution for a special choice of parameters. We also point out that the free GIG distribution maximizes the same free entropy functional as the classical GIG does for the classical entropy.' author: - | Takahiro Hasebe\ Department of Mathematics,\ Hokkaido University\ thasebe@math.sci.hokudai.ac.jp - | Kamil Szpojankowski\ Faculty of Mathematics and Information Science\ Warsaw University of Technology\ k.szpojankowski@mini.pw.edu.pl title: On free Generalized Inverse Gaussian distributions --- Introduction ============ Free probability was introduced by Voiculescu in [@Voi85] as a non-commutative probability theory in which one defines a new notion of independence, the so-called freeness or free independence. Non-commutative probability is a counterpart of classical probability theory in which random variables are allowed to be non-commutative objects. Instead of defining a probability space as a triplet $(\Omega,\mathcal{F},\mathbb{P})$ we switch to a pair $(\mathcal{A},\varphi)$, where $\mathcal{A}$ is an algebra of random variables and $\varphi\colon\mathcal{A}\to\mathbb{C}$ is a linear functional; in the classical situation $\varphi=\mathbb{E}$. It is natural then to consider algebras $\mathcal{A}$ where random variables do not commute (for example $C^*$ or $W^*$–algebras). For bounded random variables independence can be equivalently understood as a rule for calculating mixed moments.
It turns out that while for commuting random variables only one such rule leads to a meaningful notion of independence, the non-commutative setting is richer and one can consider several notions of independence. Free independence seems to be the most important one. The precise definition of freeness is stated in Section 2 below. Free probability emerged from questions related to operator algebras; however, the development of this theory showed that it is surprisingly closely related to classical probability theory. The first evidence of such relations appeared with Voiculescu’s results about asymptotic freeness of random matrices. Asymptotic freeness, roughly speaking, states that (classically) independent, unitarily invariant random matrices become free as their size goes to infinity.\ Another link between free and classical probability goes via infinite divisibility. With a notion of independence in hand one can consider a convolution of probability measures related to this notion. For free independence this operation is called the free convolution and is denoted by $\boxplus$. More precisely, for free random variables $X,Y$ with respective distributions $\mu,\nu$, the distribution of the sum $X+Y$ is called the free convolution of $\mu$ and $\nu$ and is denoted by $\mu\boxplus\nu$. The next natural step is to ask which probability measures are infinitely divisible with respect to this convolution. We say that $\mu$ is freely infinitely divisible if for any $n \geq 1$ there exists a probability measure $\mu_n$ such that $$\mu=\underbrace{\mu_n\boxplus\ldots\boxplus\mu_n}_{\text{$n$ times}}.$$ Here we come across another striking relation between free and classical probability: there exists a bijection between classically and freely infinitely divisible probability measures; this bijection was found in [@BP99] and is called the Bercovici-Pata (BP) bijection.
This bijection has a number of interesting properties; for example, measures in bijection have the same domains of attraction. In the free probability literature it is a standard approach to look for the free counterpart of a classical distribution via the BP bijection. For example, Wigner’s semicircle law plays the role of the Gaussian law in the free analogue of the Central Limit Theorem, and the Marchenko-Pastur distribution appears in the limit of the free version of the Poisson limit theorem and is often called the free Poisson distribution. While the BP bijection proved to be a powerful tool, it does not preserve all good properties of distributions. Consider for example the Lukacs theorem, which says that for classically independent random variables $X,Y$ the random variables $X+Y$ and $X/(X+Y)$ are independent if and only if $X,Y$ have gamma distributions with the same scale parameter [@Luk55]. One can consider a similar problem in free probability and obtain the following result (see [@Szp15; @Szp16]): for free random variables $X,Y$ the random variables $X+Y$ and $(X+Y)^{-1/2}X(X+Y)^{-1/2}$ are free if and only if $X,Y$ have Marchenko-Pastur (free Poisson) distributions with the same rate. From this example one can see our point: it is not the image under the BP bijection of the Gamma distribution (studied in [@PAS08; @HT14]) that has the Lukacs independence property in free probability; in this context the free Poisson distribution plays the role of the classical Gamma distribution. In [@Szp17] another free independence property was studied, a free version of the so-called Matsumoto-Yor property (see [@MY01; @LW00]). In classical probability this property says that for independent $X,Y$ the random variables $1/(X+Y)$ and $1/X-1/(X+Y)$ are independent if and only if $X$ has a Generalized Inverse Gaussian (GIG) distribution and $Y$ has a Gamma distribution. In the free version of this theorem (i.e.
the theorem where one replaces the classical independence assumptions by free independence) it turns out that the role of the Gamma distribution is again taken by the free Poisson distribution, and the role of the GIG distribution is played by a probability measure which appeared for the first time in [@Fer06]. We will refer to this measure as the free Generalized Inverse Gaussian distribution, or fGIG for short. We give the definition of this distribution in Section 2. The main motivation of this paper is to study further properties of the fGIG distribution. The results from [@Szp17] suggest that in some sense (but not by means of the BP bijection) this distribution is the free probability analogue of the classical GIG distribution. It is natural then to ask if the fGIG distribution shares more properties with its classical counterpart. It is known that the classical GIG distribution is infinitely divisible (see [@BNH77]) and selfdecomposable (see [@Hal79; @SS79]). In [@LS83] the GIG distribution was characterized in terms of an equality in distribution: namely, if we take $X,Y_1,Y_2$ independent and such that $Y_1$ and $Y_2$ have Gamma distributions with suitable parameters, and we assume that $$\begin{aligned} X\stackrel{d}{=}\frac{1}{Y_2+\frac{1}{Y_1+X}}\end{aligned}$$ then $X$ necessarily has a GIG distribution. A simpler version of this theorem characterizes a smaller class of GIG distributions by the equality $$\begin{aligned} \label{eq:char} X\stackrel{d}{=}\frac{1}{Y_1+X}\end{aligned}$$ for $X$ and $Y_1$ as described above. The overall result of this paper is that the two distributions GIG and fGIG indeed have many similarities. We show that the fGIG distribution is freely infinitely divisible, and moreover that it is free regular. Furthermore, the fGIG distribution can be characterized by the equality in distribution above, where one has to replace the independence assumption by freeness and assume that $Y_1$ has a free Poisson distribution.
While there are only a few examples of freely selfdecomposable distributions, it is interesting to ask whether the fGIG distribution has this property. It turns out that selfdecomposability is the point where the symmetry between GIG and fGIG partially breaks down: not all fGIG distributions are freely selfdecomposable. We find conditions on the parameters of the fGIG family under which these distributions are freely selfdecomposable. Apart from the results mentioned above, we prove that the fGIG distribution is unimodal. We also point out that in [@Fer06] it was proved that fGIG maximizes a certain free entropy functional. An easy application of Gibbs’ inequality shows that the classical GIG maximizes the same functional of classical entropy. The paper is organized as follows: In Section 2 we briefly recall the basics of free probability and then study some properties of fGIG distributions. Section 3 is devoted to the study of free infinite divisibility, free regularity, free selfdecomposability and unimodality of the fGIG distribution. In Section 4 we show that the free counterpart of the characterization of the GIG distribution by the equality in distribution holds true, and we discuss entropy analogies between GIG and fGIG. Free GIG distributions ====================== In this section we recall the definition of the free GIG distribution and study basic properties of this distribution. In particular we study in detail the $R$-transform of the fGIG distribution. Some of the properties established in this section will be crucial in the subsequent sections, where we study free infinite divisibility of the free GIG distribution and a characterization of the free GIG distribution. The free GIG distribution appeared for the first time (not under the name free GIG) as the almost sure weak limit of the empirical spectral distribution of GIG matrices (see [@Fer06]).
Basics of free probability -------------------------- This paper deals mainly with properties of the free GIG distribution related to free probability, and in particular to free convolution. Therefore in this section we introduce the notions and tools that we need in this paper. The introduction is far from being detailed; a reader not familiar with free probability may find a very good introduction to the theory in [@VDN92; @NS06; @MS]. 1. A $C^*$–probability space is a pair $(\mathcal{A},\varphi)$, where $\mathcal{A}$ is a unital $C^*$-algebra and $\varphi$ is a linear functional $\varphi\colon\mathcal{A}\to\mathbb{C}$, such that $\varphi(\mathit{1}_\mathcal{A})=1$ and $\varphi(aa^*)\geq 0$ for all $a\in\mathcal{A}$. Here by $\mathit{1}_\mathcal{A}$ we understand the unit of $\mathcal{A}$. 2. Let $I$ be an index set. A family of subalgebras $\left(\mathcal{A}_i\right)_{i\in I}$ is called free if $\varphi(X_1\cdots X_n)=0$ whenever $X_i\in \mathcal{A}_{j_i}$, $j_1\neq j_2\neq \ldots \neq j_n$ and $\varphi(X_i)=0$ for all $i=1,\ldots,n$ and $n=1,2,\ldots$. Similarly, self-adjoint random variables $X,\,Y\in\mathcal{A}$ are free (freely independent) when the subalgebras generated by $(X,\,\mathit{1}_\mathcal{A})$ and $(Y,\,\mathit{1}_\mathcal{A})$ are freely independent. 3. The distribution of a self-adjoint random variable is identified via moments, that is, for a random variable $X$ we say that a probability measure $\mu$ is the distribution of $X$ if $$\varphi(X^n)=\int t^n\,{{\rm d}}\mu(t),\,\mbox{for all } n=1,2,\ldots$$ Note that since we assume that our algebra $\mathcal{A}$ is a $C^*$–algebra, all random variables are bounded, thus the sequence of moments indeed determines a unique probability measure. 4. The distribution of the sum $X+Y$ for free random variables $X,Y$ with respective distributions $\mu$ and $\nu$ is called the free convolution of $\mu$ and $\nu$, and is denoted by $\mu\boxplus\nu$.
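As a small numerical illustration of item 3 above (a compactly supported distribution is determined by its moment sequence), the following sketch, which is ours and not taken from the paper, checks a standard fact: the even moments of Wigner's semicircle law, with density $\sqrt{4-x^2}/(2\pi)$ on $[-2,2]$, are the Catalan numbers $\binom{2k}{k}/(k+1)$, while the odd moments vanish.

```python
import math
import numpy as np

# Standard semicircle law: density sqrt(4 - x^2) / (2*pi) on [-2, 2].
# Its even moments are the Catalan numbers C_k = binom(2k, k)/(k+1);
# the odd moments vanish by symmetry. We check this with a midpoint rule.
n = 400_000
x = np.linspace(-2.0, 2.0, n, endpoint=False) + 2.0 / n  # cell midpoints
w = 4.0 / n                                              # cell width
dens = np.sqrt(np.clip(4.0 - x**2, 0.0, None)) / (2.0 * np.pi)

moments = [float(np.sum(x**k * dens) * w) for k in range(9)]
catalan = [math.comb(2 * k, k) // (k + 1) for k in range(5)]  # 1, 1, 2, 5, 14

for k in range(5):
    assert abs(moments[2 * k] - catalan[k]) < 1e-3
```

Since the semicircle law is compactly supported, this moment sequence pins it down uniquely, which is exactly the content of item 3.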
Free GIG distribution --------------------- In this paper we are concerned with a specific family of probability measures which we will refer to as free GIG (fGIG) distributions. The free Generalized Inverse Gaussian (fGIG) distribution is a measure $\mu=\mu(\alpha,\beta,\lambda)$, where $\lambda\in\mathbb{R}$ and $\alpha,\beta>0$, which is compactly supported on the interval $[a,b]$ with the density $$\begin{aligned} \mu({{\rm d}}x)=\frac{1}{2\pi}\sqrt{(x-a)(b-x)} \left(\frac{\alpha}{x}+\frac{\beta}{\sqrt{ab}x^2}\right){{\rm d}}x, \end{aligned}$$ where $0<a<b$ is the unique solution of $$\begin{aligned} \label{eq1} 1-\lambda+\alpha\sqrt{ab}-\beta\frac{a+b}{2ab}=&0\\ \label{eq2} 1+\lambda+\frac{\beta}{\sqrt{ab}}-\alpha\frac{a+b}{2}=&0. \end{aligned}$$ Observe that, for fixed $\lambda\in \mathbb{R}$ and $\alpha,\beta>0$, this system of equations for the coefficients has a unique solution $0<a<b$. We can easily get the following. \[prop:ab\] Let $\lambda\in{\mathbb{R}}$. Given $\alpha,\beta>0$, the system of equations , has a unique solution $(a,b)$ such that $$\label{unique} 0<a<b, \qquad |\lambda| \left(\frac{\sqrt{a}-\sqrt{b}}{\sqrt{a}+\sqrt{b}}\right)^2<1.$$ Conversely, given $(a,b)$ satisfying , the set of equations – has a unique solution $(\alpha,\beta)$, which is given by $$\begin{aligned} &\alpha = \frac{2}{(\sqrt{a}-\sqrt{b})^2}\left( 1 + \lambda \left(\frac{\sqrt{a}-\sqrt{b}}{\sqrt{a}+\sqrt{b}}\right)^2 \right) >0, \label{eq3}\\ &\beta = \frac{2 a b}{ (\sqrt{a} - \sqrt{b})^2}\left( 1 - \lambda \left(\frac{\sqrt{a}-\sqrt{b}}{\sqrt{a}+\sqrt{b}}\right)^2 \right)>0. \label{eq4}\end{aligned}$$ Thus we may parametrize the fGIG distribution using parameters $(a,b,\lambda)$ satisfying instead of $(\alpha,\beta,\lambda)$. We will make it clear whenever we use a parametrization different from $(\alpha,\beta,\lambda)$. It is useful to introduce another parameterization to describe the distribution $\mu(\alpha,\beta,\lambda)$.
Define $$\label{eq:AB} A=(\sqrt{b}-\sqrt{a})^2, \qquad B= (\sqrt{a}+\sqrt{b})^2,$$ and observe that then $$\begin{aligned} \alpha =& \frac{2}{A}\left( 1 + \lambda \frac{A}{B} \right) >0, \qquad \beta = \frac{(B-A)^2}{8 A}\left( 1 - \lambda \frac{A}{B} \right)>0,\\ a =& \left(\frac{\sqrt{B}-\sqrt{A}}{2}\right)^2,\qquad b = \left(\frac{\sqrt{A}+\sqrt{B}}{2}\right)^2. \end{aligned}$$ The condition is equivalent to $$\label{eq:ABineq} 0<\max\{1,|\lambda|\}A<B.$$ Thus one can describe any measure $\mu(\alpha,\beta,\lambda)$ in terms of $\lambda,A,B$. $R$-transform of fGIG distribution {#sec:form} ---------------------------------- The $R$-transform of the measure $\mu(\alpha,\beta,\lambda)$ was calculated in [@Szp17]. Since the $R$-transform will play a crucial role in the paper, we devote this section to a detailed study of its properties. We also point out some properties of the fGIG distribution which are derived from properties of the $R$-transform. Before we present the $R$-transform of the fGIG distribution, let us briefly recall how the $R$-transform is defined and stress its importance for free probability. \[rem:Cauchy\] 1. For a probability measure $\mu$ one defines its Cauchy transform via $$G_\mu(z)=\int \frac{1}{z-x}{{\rm d}}\mu(x).$$ It is an analytic function on the upper half-plane with values in the lower half-plane. The Cauchy transform determines the measure uniquely, and there is an inversion formula, called the Stieltjes inversion formula: namely, for $h_{\varepsilon}(t)=-\tfrac{1}{\pi}{\text{\normalfont Im}}\, G_\mu(t+i{\varepsilon})$ one has $${{\rm d}}\mu(t)=\lim_{{\varepsilon}\to 0^+} h_{\varepsilon}(t)\,{{\rm d}}t,$$ where the limit is taken in the weak topology. 2.
For a compactly supported measure $\mu$ one can define, in a neighbourhood of the origin, the so-called $R$-transform by $$R_\mu(z)=G_\mu^{\langle -1 \rangle}(z)-\frac{1}{z},$$ where by $G_\mu^{\langle -1 \rangle}$ we denote the inverse under composition of the Cauchy transform of $\mu$.\ The relevance of the $R$-transform for free probability comes from the fact that it linearizes free convolution, that is, $R_{\mu\boxplus\nu}=R_\mu+R_\nu$ in a neighbourhood of zero. The $R$-transform of the fGIG distribution is given by $$\label{eq:R_F} \begin{split} r_{\alpha,\beta,\lambda}(z) &= \frac{-\alpha + (\lambda+1)z + \sqrt{f_{\alpha,\beta,\lambda}(z)}}{2z(\alpha-z)} \end{split}$$ in a neighbourhood of $0$, where the square root is the principal value, $$\label{eq:pol_fpar} f_{\alpha,\beta,\lambda}(z)=(\alpha+(\lambda-1)z)^2-4\beta z (z-\alpha)(z-\gamma),$$ and $$\begin{aligned} \gamma=\frac{\alpha^2 a b+\frac{\beta^2}{ab}-2\alpha\beta\left(\frac{a+b}{\sqrt{ab}}-1\right)-(\lambda-1)^2}{4\beta}.\end{aligned}$$ Note that $z=0$ is a removable singular point of $r_{\alpha,\beta,\lambda}$. Observe that in terms of $A,B$ defined by we have $$\begin{aligned} \gamma &= 2\frac{\lambda A^2 + A B - 2B^2}{B(B-A)^2}. \end{aligned}$$ It is straightforward to observe that implies $A (\lambda A+B)<2 A B<2B^2$, thus we have $\gamma<0$. The following remark was used in [@Szp17 Remark 2.1] without a proof. We give a proof here. \[rem:F\_lambda\_sym\] We have $f_{\alpha,\beta,\lambda}(z)=f_{\alpha,\beta,-\lambda}(z)$, where $\alpha,\beta>0,\lambda\in\mathbb{R}$. To see this, one has to insert the definition of $\gamma$ into the formula for $f_{\alpha,\beta,\lambda}$ to obtain $$f_{\alpha,\beta,\lambda}(z)=\alpha z \lambda^2+\left(\left(ab\alpha^2-2\alpha\beta\frac{a+b}{\sqrt{ab}}+\frac{\beta^2}{ab}+2\alpha\beta\right)z-4 \beta z^2-\alpha\right) (z-\alpha),$$ where $a=a(\alpha,\beta,\lambda)$ and $b=b(\alpha,\beta,\lambda)$.
Thus it suffices to show that the quantity $g(\alpha,\beta,\lambda):=ab\alpha^2-2\alpha\beta\frac{a+b}{\sqrt{ab}}+\frac{\beta^2}{ab}$ does not depend on the sign of $\lambda$. To see this, observe from the system of equations and that $a(\alpha,\beta,-\lambda)=\frac{\beta}{\alpha b(\alpha,\beta,\lambda)}$ and $b(\alpha,\beta,-\lambda)=\frac{\beta}{\alpha a(\alpha,\beta,\lambda)}$. It is then straightforward to check that $ g(\alpha,\beta,-\lambda) = g(\alpha,\beta,\lambda). $ The $R$-transform of the measure $\mu(\alpha,\beta,\lambda)$ can be extended to a function (still denoted by $r_{\alpha,\beta,\lambda}$) which is analytic on $\mathbb{C}^{-}$ and continuous on $({\mathbb{C}}^- \cup{\mathbb{R}})\setminus\{\alpha\}$. A direct calculation shows that using parameters $A,B$ defined by the polynomial $f_{\alpha,\beta,\lambda}$ under the square root factors as $$f_{\alpha,\beta,\lambda}(z) = \frac{(B-A)^2(B-\lambda A)}{2 A B}\left[z +\frac{2(B+\lambda A)}{B(B-A)}\right]^2 \left[ \frac{2B}{A(B-\lambda A)} - z \right].$$ Thus we can write $$\label{eq:pol_par} f_{\alpha,\beta,\lambda}(z) = 4\beta (z-\delta)^2(\eta-z),$$ where $$\begin{aligned} &\delta = - \frac{2(B+\lambda A)}{B(B-A)}<0, \label{eq5}\\ &\eta = \frac{2B}{A(B-\lambda A)} >0. \label{eq6}\end{aligned}$$ It is straightforward to verify that implies $\eta \geq \alpha$ with equality valid only when $\lambda=0$.\ Calculating $f_{\alpha,\beta,\lambda}(0)$ using first and then we get $4 \beta \eta \delta^2 = \alpha^2$, since $\eta \geq \alpha$ we see that $\delta \geq -\sqrt{\alpha/(4\beta)}$ with equality only when $\lambda=0$. Since all roots of $f_{\alpha,\beta,\lambda}$ are real, the square root $\sqrt{f_{\alpha,\beta,\lambda}(z)}$ may be defined continuously on ${\mathbb{C}}^-\cup{\mathbb{R}}$ so that $\sqrt{f_{\alpha,\beta,\lambda}(0)}=\alpha$. 
As noted above $\delta<0$, and continuity of $f_{\alpha,\beta,\lambda}$ implies that we have $$\label{RRR} \sqrt{f_{\alpha,\beta,\lambda}(z)} = 2(z-\delta)\sqrt{\beta(\eta-z)},$$ where we take the principal value of the square root in the expression $\sqrt{4\beta(\eta-z)}$. Thus finally we arrive at the following form of the $R$-transform $$\label{R} \begin{split} r_{\alpha,\beta,\lambda}(z) &= \frac{-\alpha + (\lambda+1)z + 2(z-\delta)\sqrt{\beta(\eta-z)}}{2z(\alpha-z)} \end{split}$$ which is analytic in ${\mathbb{C}}^-$ and continuous in $({\mathbb{C}}^- \cup{\mathbb{R}})\setminus\{\alpha\}$ as required. Next we describe the behaviour of the $R$-transform around the singular point $z=\alpha$. If $\lambda>0$ then $$\label{alpha} \begin{split} r_{\alpha,\beta,\lambda}(z) = \frac{\lambda}{\alpha-z} -\frac{1}{2\alpha} \left(1+\lambda+\frac{\sqrt{\beta}(2\eta-3\alpha+\delta)}{\sqrt{\eta-\alpha}}\right) + o(1),\qquad\mbox{as } z\to\alpha. \end{split}$$ If $\lambda<0$ then $$\label{alpha2} r_{\alpha,\beta,\lambda}(z) = - \frac{1}{2\alpha}\left(1+\lambda+\frac{\sqrt{\beta}(2\eta-3\alpha+\delta)}{\sqrt{\eta-\alpha}}\right)+ o(1), \qquad \mbox{as } z\to\alpha.$$ In the remaining case $\lambda=0$ one has $$\label{alpha3} \begin{split} r_{\alpha,\beta,0}(z) &= \frac{-\alpha + z + 2(z-\delta)\sqrt{\beta(\alpha-z)}}{2z(\alpha-z)} = -\frac{1}{2z} + \frac{\sqrt{\beta}(z-\delta)}{z\sqrt{\alpha-z}}. \end{split}$$ By the definition we have $f_{\alpha,\beta,\lambda}(\alpha) = (\lambda\alpha)^2$, substituting this in the expression we obtain that $ \alpha |\lambda| = 2(\alpha-\delta)\sqrt{\beta (\eta-\alpha)}. 
$ Taking the Taylor expansion around $z=\alpha$ for $\lambda \neq0$ we obtain $$\label{Taylor} \sqrt{f_{\alpha,\beta,\lambda}(z)}=\alpha|\lambda| + \frac{\sqrt{\beta}(2\eta-3\alpha+\delta)}{\sqrt{\eta-\alpha}}(z-\alpha) + o(|z-\alpha|),\qquad \mbox{as }z\to \alpha.$$ This implies the two expansions above, and so $r_{\alpha,\beta,\lambda}$ may be extended to a continuous function on ${\mathbb{C}}^-\cup {\mathbb{R}}$. The case $\lambda=0$ follows from the fact that in this case we have $\eta=\alpha$. In the case $\lambda<0$ one can extend $r_{\alpha,\beta,\lambda}$ to a function which is analytic in ${\mathbb{C}}^-$ and continuous in ${\mathbb{C}}^- \cup{\mathbb{R}}$. Some properties of fGIG distribution ------------------------------------ We study here further properties of the free GIG distribution. Some of them motivate Section 4, where we will characterize the fGIG distribution in a way analogous to the classical GIG distribution. The next remark recalls the definition and some basic facts about the free Poisson distribution, which will play an important role in this paper. \[rem:freePoisson\] 1. The Marchenko–Pastur (or free Poisson) distribution $\nu=\nu(\gamma, \lambda)$ is defined by the formula $$\begin{aligned} \nu=\max\{0,\,1-\lambda\}\,\delta_0+\tilde{\nu}, \end{aligned}$$ where $\gamma,\lambda> 0$ and the measure $\tilde{\nu}$, supported on the interval $(\gamma(1-\sqrt{\lambda})^2,\,\gamma(1+\sqrt{\lambda})^2)$, has the density (with respect to the Lebesgue measure) $$\tilde{\nu}({{\rm d}}x)=\frac{1}{2\pi\gamma x}\,\sqrt{4\lambda\gamma^2-(x-\gamma(1+\lambda))^2}\,{{\rm d}}x.$$ 2. The $R$-transform of the free Poisson distribution $\nu(\gamma,\lambda)$ is of the form $$r_{\nu(\gamma, \lambda)}(z)=\frac{\gamma\lambda}{1-\gamma z}.$$ The next proposition was proved in [@Szp17 Remark 2.1]; it is the free counterpart of a convolution property of the classical Gamma and GIG distributions. The proof is a straightforward calculation of the $R$-transform with the help of Remark \[rem:F\_lambda\_sym\].
\[GIGPoissConv\] Let $X$ and $Y$ be free, $X$ free GIG distributed $\mu(\alpha,\beta,-\lambda)$ and $Y$ free Poisson distributed $\nu(1/\alpha,\lambda)$ respectively, for $\alpha,\beta,\lambda>0$. Then $X+Y$ is free GIG distributed $\mu(\alpha,\beta,\lambda)$. We also quote another result from [@Szp17 Remark 2.2], which is again the free analogue of a property of the classical GIG distribution. The proof is a simple calculation of the density. \[GIGInv\] If $X$ has the free GIG distribution $\mu(\alpha,\beta,\lambda)$ then $X^{-1}$ has the free GIG distribution $\mu(\beta,\alpha,-\lambda)$. The two propositions above imply some distributional properties of the fGIG distribution. In Section 4 we will study characterizations of the fGIG distribution related to these properties. \[rem:prop\] 1. Fix $\lambda,\alpha>0$. If $X$ has the fGIG distribution $\mu(\alpha,\alpha,-\lambda)$, $Y$ has the free Poisson distribution $\nu(1/\alpha,\lambda)$, and $X,Y$ are free, then $X\stackrel{d}{=}(X+Y)^{-1}$. Indeed, by Proposition \[GIGPoissConv\] we get that $X+Y$ has the fGIG distribution $\mu(\alpha,\alpha,\lambda)$, and now Proposition \[GIGInv\] implies that $(X+Y)^{-1}$ has the distribution $\mu(\alpha,\alpha,-\lambda)$. 2. One can easily generalize the above observation. Take $\alpha,\beta,\lambda>0$ and $X,Y_1,Y_2$ free, such that $X$ has the fGIG distribution $\mu(\alpha,\beta,-\lambda)$, $Y_1$ is free Poisson distributed $\nu(1/\beta,\lambda)$ and $Y_2$ is distributed $\nu(1/\alpha,\lambda)$; then $X\stackrel{d}{=}(Y_1+(Y_2+X)^{-1})^{-1}$. As before, we have that $X+Y_2$ has distribution $\mu(\alpha,\beta,\lambda)$, and then by Proposition \[GIGInv\] we get that $(X+Y_2)^{-1}$ has distribution $\mu(\beta,\alpha,-\lambda)$. Then $Y_1+(Y_2+X)^{-1}$ has the distribution $\mu(\beta,\alpha,\lambda)$, and finally we get that $(Y_1+(Y_2+X)^{-1})^{-1}$ has the desired distribution $\mu(\alpha,\beta,-\lambda)$. 3.
Both identities above can be iterated finitely many times, so that one obtains that $X\stackrel{d}{=}\left(Y_1+\left(Y_2+\cdots\right)^{-1}\right)^{-1}$, where $Y_1,Y_2,\ldots$ are free, for $k$ odd $Y_k$ has the free Poisson distribution $\nu(1/\beta,\lambda)$ and for $k$ even $Y_k$ has the distribution $\nu(1/\alpha,\lambda)$. For the case described in $1^o$ one simply has to take $\alpha=\beta$. We are not sure whether the corresponding infinite continued fraction can be defined. Next we study limits of the fGIG measure $\mu(\alpha,\beta,\lambda)$ when $\alpha\to 0$ and $\beta\to 0$. This was stated with a mistake in [@Szp17 Remark 2.3]. As $\beta\downarrow 0$ we have the following weak limits of the fGIG distribution $$\begin{aligned} \label{eq:limits} \lim_{\beta\downarrow 0}\mu(\alpha,\beta,\lambda) = \begin{cases} \nu(1/\alpha, \lambda), & \lambda \geq1, \\ \frac{1-\lambda}{2}\delta_0 + \frac{1+\lambda}{2}\nu(\frac{1+\lambda}{2\alpha},1), & |\lambda|<1,\\ \delta_0, & \lambda\leq -1. \end{cases} \end{aligned}$$ Taking into account Proposition \[GIGInv\] one can also describe the limits as $\alpha\downarrow 0$ for $\lambda \geq1$. This result reflects the fact that the GIG matrix generalizes the Wishart matrix for $\lambda \geq1$, but not for $\lambda <1$ (see [@Fer06] for the GIG matrix and [@HP00] for the Wishart matrix). We will find the limit by calculating limits of the $R$-transform, since convergence of the $R$-transform implies weak convergence. Observe that by Remark \[rem:F\_lambda\_sym\] we could consider only $\lambda\geq0$; however, we decided to present all cases, as these considerations also give the asymptotic behaviour of the support of the fGIG measure. In view of , the only non-trivial part is the limit of $\beta\gamma$ as $\beta\to0$.
Define $F(a,b,\alpha,\beta,\lambda)$ by $$\begin{aligned} \left(1-\lambda+\alpha\sqrt{ab}-\beta\frac{a+b}{2ab},1+\lambda+\frac{\beta}{\sqrt{ab}}-\alpha\frac{a+b}{2}\right)^T &=\left(f(a,b,\alpha,\beta,\lambda),g(a,b,\alpha,\beta,\lambda)\right)^T\\ &=F(a,b,\alpha,\beta,\lambda). \end{aligned}$$ Then the solutions of the system , are the functions $(a(\alpha,\beta,\lambda),b(\alpha,\beta,\lambda))$ such that $F(a(\alpha,\beta,\lambda),b(\alpha,\beta,\lambda),\alpha,\beta,\lambda)=(0,0)$. Calculating the Jacobian with respect to $(a,b)$ and applying the Implicit Function Theorem, we observe that $a(\alpha,\beta,\lambda)$ and $b(\alpha,\beta,\lambda)$ are continuous (even differentiable) functions of $\alpha,\beta>0$ and $\lambda\in \mathbb{R}$. **Case 1.** $\lambda>1$\ Observe that if we take $\beta=0$ then a real solution $0<a<b$ of the system , $$\begin{aligned} \label{Case1} 1-\lambda+\alpha\sqrt{ab}&=0\\ 1+\lambda-\alpha\frac{a+b}{2}&=0 \end{aligned}$$ still exists. Moreover, because the Jacobian at $\beta=0$ is non-zero, the Implicit Function Theorem implies that the solutions are continuous at $\beta=0$. Thus using we get $$\begin{aligned} \beta \gamma=\frac{\alpha^2 ab+\tfrac{\beta^2}{ab}-2\alpha\beta (\tfrac{a+b}{\sqrt{ab}}-1)-(\lambda-1)^2}{4}=\frac{\tfrac{\beta^2}{ab}-2\alpha\beta (\tfrac{a+b}{\sqrt{ab}}-1)}{4}. \end{aligned}$$ The above implies that $\beta\gamma\to 0$ when $\beta\to 0$, since $a,b$ have finite non-zero limits as $\beta\to 0$, as explained above. **Case 2.** $\lambda<-1$\ In this case we see that setting $\beta=0$ in leads to an equation with no real solution for $(a,b)$. Here the term $\beta\tfrac{a+b}{2ab}$ has a non-zero limit as $\beta\to 0$. To be precise, substitute $a=\beta a^\prime$ and $b=\beta b^\prime$ in , ; then we get $$\begin{aligned} 1-\lambda+\alpha\beta\sqrt{a^\prime b^\prime}-\frac{a^\prime+b^\prime}{2a^\prime b^\prime}=&0\\ 1+\lambda+\frac{1}{\sqrt{a^\prime b^\prime}}-\alpha\beta\frac{a^\prime+b^\prime}{2}=&0.
\end{aligned}$$ The above system is equivalent to the system , with $\alpha:=\alpha\beta$ and $\beta:=1$. If we set $\beta=0$ as in Case 1 we get $$\begin{aligned} 1-\lambda-\frac{a^\prime+b^\prime}{2a^\prime b^\prime}=&0\\ \label{Case2} 1+\lambda+\frac{1}{\sqrt{a^\prime b^\prime}}=&0. \end{aligned}$$ The above system has a solution $0<a^\prime<b^\prime$ for $\lambda<-1$. Calculating the Jacobian we see that it is non-zero at $\beta=0$, so the Implicit Function Theorem implies that $a^\prime$ and $b^\prime$ are continuous functions at $\beta=0$ in the case $\lambda<-1$.\ This implies that in the case $\lambda<-1$ the solutions of , are $a(\beta)= \beta a^\prime+o(\beta)$ and $b(\beta)= \beta b^\prime +o(\beta)$. Thus we have $$\begin{aligned} \lim_{\beta\to 0}\beta \gamma&=\frac{\alpha^2 ab+\tfrac{\beta^2}{ab}-2\alpha\beta (\tfrac{a+b}{\sqrt{ab}}-1)-(\lambda-1)^2}{4}\\&=\frac{\tfrac{1}{a^\prime b^\prime}-(\lambda-1)^2 }{4}=\frac{(\lambda+1)^2-(\lambda-1)^2 }{4}=\lambda, \end{aligned}$$ where in the penultimate equality we used . **Case 3.** $|\lambda|< 1$\ Observe that neither nor has a real solution in the case $|\lambda|< 1$. This is because in this case asymptotically $a(\beta)=a^\prime \beta +o(\beta)$, while $b$ has a finite positive limit as $\beta\to0$. As in Case 2, let us substitute $a=\beta a^\prime $ in , , which gives $$\begin{aligned} 1-\lambda+\alpha\sqrt{\beta a^\prime b}-\frac{\beta a^\prime+b}{2a^\prime b}&=0\\ 1+\lambda+\frac{\sqrt{\beta}}{\sqrt{a^\prime b}}-\alpha\frac{\beta a^\prime+b}{2}&=0. \end{aligned}$$ If we set $\beta=0$ we get $$\begin{aligned} 1-\lambda-\frac{1}{2a^\prime}&=0,\\ 1+\lambda-\alpha\frac{b}{2}&=0, \end{aligned}$$ which clearly has a positive solution $(a^\prime,b)$ when $|\lambda|<1$. As before the Jacobian is non-zero at $\beta=0$, so $a^\prime$ and $b$ are continuous at $\beta=0$.\ Now we go back to the limit $\lim_{\beta\to 0}\beta\gamma$.
We have $a(\beta)=\beta a^\prime+o(\beta)$, thus $$\begin{aligned} \lim_{\beta\to 0}\beta \gamma=\lim_{\beta\to 0}\frac{\alpha^2 ab+\tfrac{\beta^2}{ab}-2\alpha\beta (\tfrac{a+b}{\sqrt{ab}}-1)-(\lambda-1)^2}{4}=-\frac{(\lambda-1)^2}{4}. \end{aligned}$$ **Case 4.** $|\lambda|= 1$\ An analysis similar to the above cases shows that in the case $\lambda=1$ we have $a(\beta)=a^\prime \beta^{2/3}+o(\beta^{2/3})$ and $b$ has a positive limit as $\beta\to 0$. In the case $\lambda=-1$ one gets $a(\beta)=a^\prime \beta+o(\beta)$ and $b(\beta)=b^\prime\beta^{1/3}+o(\beta^{1/3})$ as $\beta\to 0$. Thus we can calculate the limit of $f_{\alpha,\beta,\lambda}$ as $\beta\to 0$: $$\lim_{\beta\downarrow0} f_{\alpha,\beta,\lambda}(z) = \begin{cases} (\alpha+(\lambda-1)z)^2, & \lambda>1, \\ \alpha^2+(\lambda^2-1)\alpha z, & |\lambda| \leq 1, \\ (\alpha-(\lambda+1)z)^2, & \lambda<-1. \end{cases}$$ The above allows us to calculate the limiting $R$-transform, and hence the Cauchy transform, which implies . The continuous dependence of the roots on the parameters shows the following asymptotic behaviour of the double root $\delta<0$ and the simple root $\eta \geq \alpha$. 1. If $|\lambda|>1$ then $\delta\to\alpha/(1-|\lambda|)$ and $\eta\to +\infty$ as $\beta \downarrow0$. 2. If $|\lambda|<1$ then $\delta\to-\infty$ and $\eta \to \alpha/(1-\lambda^2)$ as $\beta \downarrow0$. 3. If $\lambda=\pm1$ then $\delta\to-\infty$ and $\eta \to +\infty$ as $\beta \downarrow0$. Regularity of fGIG distribution under free convolution ====================================================== In this section we study in detail regularity properties of the fGIG distribution related to the operation of free additive convolution. In the next theorem we collect all the results proved in this section; each subsection of the present section proves a part of the theorem.
\[thm:sec3\] The following holds for the free GIG measure $\mu(\alpha,\beta,\lambda)$: 1. It is freely infinitely divisible for any $\alpha,\beta>0$ and $\lambda\in\mathbb{R}.$ 2. The free Lévy measure is of the form $$\label{FLM} \tau_{\alpha,\beta,\lambda}({{\rm d}}x)=\max\{\lambda,0\} \delta_{1/\alpha}({{\rm d}}x) + \frac{(1-\delta x) \sqrt{\beta (1-\eta x)}}{\pi x^{3/2} (1-\alpha x)} 1_{(0,1/\eta)}(x)\, {{\rm d}}x.$$ 3. It is free regular with zero drift for all $\alpha,\beta>0$ and $\lambda\in\mathbb{R}$. 4. It is freely selfdecomposable for $\lambda \leq -\frac{B^{\frac{3}{2}}}{A\sqrt{9B-8A}}.$ 5. It is unimodal. Free infinite divisibility and free Lévy measure ------------------------------------------------ As we mentioned before, once the operation of free convolution is defined, it is natural to study infinite divisibility with respect to $\boxplus$. We say that $\mu$ is freely infinitely divisible if for any $n\geq 1$ there exists a probability measure $\mu_n$ such that $$\mu=\underbrace{\mu_n\boxplus\ldots\boxplus\mu_n}_{\text{$n$ times}}.$$ It turns out that free infinite divisibility of compactly supported measures can be described in terms of analytic properties of the $R$-transform. In particular, it was proved in [@Voi86 Theorem 4.3] that free infinite divisibility is equivalent to the inequality ${\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(z)) \leq0$ for all $z\in{\mathbb{C}}^-$. As in the classical case, the free cumulant transform of a freely infinitely divisible probability measure can be represented by a Lévy–Khintchine type formula.
For a probability measure $\mu$ on ${\mathbb{R}}$, the *free cumulant transform* is defined by $${\mathcal{C}^\boxplus}_\mu(z) = z r_\mu(z).$$ Then $\mu$ is FID if and only if ${\mathcal{C}^\boxplus}_\mu$ can be analytically extended to ${\mathbb{C}}^-$ via the formula $$\label{FLK2} {\mathcal{C}^\boxplus}_{\mu}(z)=\xi z+\zeta z^2+\int_{{\mathbb{R}}}\left( \frac{1}{1-z x}-1-z x \,1_{[-1,1] }(x) \right) \tau({{\rm d}}x) ,\qquad z\in {\mathbb{C}}^-,$$ where $\xi \in {\mathbb{R}},$ $\zeta\geq 0$ and $\tau$ is a measure on ${\mathbb{R}}$ such that $$\tau (\{0\})=0,\qquad \int_{{\mathbb{R}}}\min \{1,x^2\}\tau ({{\rm d}}x) <\infty.$$ The triplet $(\xi,\zeta,\tau)$ is called the *free characteristic triplet* of $\mu$, and $\tau$ is called the *free Lévy measure* of $\mu$. The formula is called the *free Lévy–Khintchine formula*. The above form of the free Lévy–Khintchine formula was obtained by Barndorff-Nielsen and Thorbj[ø]{}rnsen [@BNT02b] and it has a probabilistic interpretation (see [@Sat13]). Another form, more suitable for limit theorems, was obtained by Bercovici and Voiculescu [@BV93]. In order to prove that all fGIG distributions are freely infinitely divisible we will use the following lemma. \[lem:32\] Let $f\colon (\mathbb C^{-} \cup \mathbb R) \setminus \{x_0\} \to \mathbb C$ be a continuous function, where $x_0\in\mathbb{R}$. Suppose that $f$ is analytic in $\mathbb{C}^{-}$, $f(z)\to 0$ uniformly as $z\to \infty$, and ${\text{\normalfont Im}}(f(x))\leq 0$ for $x \in \mathbb R \setminus \{x_0\}$. Suppose moreover that ${\text{\normalfont Im}}(f(z))\leq 0$ for $z$ with ${\text{\normalfont Im}}(z)\leq 0$ in a neighbourhood of $x_0$. Then ${\text{\normalfont Im}}(f(z))\leq 0$ for all $z\in \mathbb{C}^{-}$. Since $f$ is analytic, the function ${\text{\normalfont Im}}f$ is harmonic and thus satisfies the maximum principle. Fix ${\varepsilon}>0$. Since $f(z)\to 0$ uniformly as $z\to \infty$, we may choose $R>0$ such that ${\text{\normalfont Im}}f(z)<{\varepsilon}$ for $|z|\geq R$.
Consider a domain $D_{\varepsilon}$ with the boundary $$\partial D_{\varepsilon}=[-R,x_0-{\varepsilon}] \cup \{x_0+{\varepsilon}e^{{{\rm i}}\theta}: \theta \in[-\pi,0]\} \cup[x_0+{\varepsilon},R]\cup\{R e^{{{\rm i}}\theta}: \theta \in[-\pi,0]\}.$$ Observe that ${\text{\normalfont Im}}f(z)<{\varepsilon}$ on $\partial D_{\varepsilon}$ by the assumptions, and hence by the maximum principle we have ${\text{\normalfont Im}}f(z)<{\varepsilon}$ on the whole of $D_{\varepsilon}$. Letting ${\varepsilon}\to 0$ we get that ${\text{\normalfont Im}}f(z) \leq 0$ on $\mathbb{C}^{-}$. Next we proceed with the proof of free infinite divisibility of fGIG distributions. [**Case 1.**]{} $\lambda>0$. Observe that we have $$\label{hypo} {\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(x)) \leq 0, \qquad x\in{\mathbb{R}}\setminus\{\alpha\}.$$ From we see that ${\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(x))=0$ for $x \in(-\infty,\alpha) \cup (\alpha,\eta]$, and $$\label{r} {\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(x)) = \frac{(x-\delta)\sqrt{\beta (x-\eta)}}{x (\alpha-x)}<0,\qquad x>\eta$$ since $\eta>\alpha>0>\delta$.\ Moreover, observe that by for ${\varepsilon}>0$ small enough we have ${\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(\alpha+{\varepsilon}e^{i \theta}))<0$ for $\theta\in[-\pi,0]$. Now Lemma \[lem:32\] implies that the free GIG distribution is freely ID in the case $\lambda>0$. [**Case 2.**]{} $\lambda<0$. In this case a similar argument shows that $\mu(\alpha,\beta,\lambda)$ is FID. Moreover, by $\eqref{alpha2}$ the point $z=\alpha$ is a removable singularity and $r_{\alpha,\beta,\lambda}$ extends to a continuous function on ${\mathbb{C}}^-\cup{\mathbb{R}}$. Thus one does not need to take care of the behaviour around $z=\alpha$. [**Case 3.**]{} $\lambda=0$. For $\lambda=0$ one can adopt a similar argumentation using . It also follows from the fact that the free GIG family $\mu(\alpha,\beta,\lambda)$ is weakly continuous with respect to $\lambda$.
Since free infinite divisibility is preserved by weak limits, the case $\lambda=0$ may be deduced from the previous two cases. Next we will determine the free Lévy measure of the free GIG distribution $\mu(\alpha,\beta,\lambda)$. Let $(\xi_{\alpha,\beta,\lambda}, \zeta_{\alpha,\beta,\lambda},\tau_{\alpha,\beta,\lambda})$ be the free characteristic triplet of the free GIG distribution $\mu(\alpha,\beta,\lambda)$. By the Stieltjes inversion formula mentioned in Remark \[rem:Cauchy\], the absolutely continuous part of the free Lévy measure has the density $$-\lim_{{\varepsilon}\to0}\frac{1}{\pi x^2}{\text{\normalfont Im}}(r_{\alpha,\beta,\lambda}(x^{-1}+{{\rm i}}{\varepsilon})), \qquad x \neq 0,$$ and the atoms are located at those points $1/p~(p\neq0)$ for which the weight $$\tau_{\alpha,\beta,\lambda}(\{1/p\})=\lim_{z\to p} (p - z) r_{\alpha,\beta,\lambda}(z)$$ is non-zero, where $z$ tends to $p$ non-tangentially from ${\mathbb{C}}^-$. In our case the free Lévy measure does not have a singular continuous part since $r_{\alpha,\beta,\lambda}$ is continuous on ${\mathbb{C}}^-\cup{\mathbb{R}}\setminus\{\alpha\}$. Considering – and we obtain the free Lévy measure $$\tau_{\alpha,\beta,\lambda}({{\rm d}}x)=\max\{\lambda,0\} \delta_{1/\alpha}({{\rm d}}x) + \frac{(1-\delta x) \sqrt{\beta (1-\eta x)}}{\pi x^{3/2} (1-\alpha x)} 1_{(0,1/\eta)}(x)\, {{\rm d}}x.$$ Recall that $\eta \geq \alpha >0>\delta$ holds, and $\eta=\alpha$ if and only if $\lambda=0$. The other two parameters $\xi_{\alpha,\beta,\lambda}$ and $\zeta_{\alpha,\beta,\lambda}$ in the free characteristic triplet will be determined in Section \[sec:FR\]. Free regularity {#sec:FR} --------------- In this subsection we will deal with a property stronger than free infinite divisibility, the so-called free regularity.\ Let $\mu$ be a FID distribution with the free characteristic triplet $(\xi,\zeta,\tau)$.
When the semicircular part $\zeta$ is zero and the free Lévy measure $\tau$ satisfies a stronger integrability property $\int_{{\mathbb{R}}}\min\{1,|x|\}\tau({{\rm d}}x) < \infty$, then the free Lévy-Khintchine representation reduces to $$\label{FR} {\mathcal{C}^\boxplus}_\mu(z)=\xi' z+\int_{{\mathbb{R}}}\left( \frac{1}{1-z x}-1\right) \tau({{\rm d}}x) ,\qquad z\in {\mathbb{C}}^-,$$ where $\xi' =\xi -\int_{[-1,1]}x \,\tau({{\rm d}}x) \in {\mathbb{R}}$ is called a *drift*. The distribution $\mu$ is said to be *free regular* [@PAS12] if $\xi'\geq0$ and $\tau$ is supported on $(0,\infty)$. A probability measure $\mu$ on ${\mathbb{R}}$ is free regular if and only if the free convolution power $\mu^{\boxplus t}$ is supported on $[0,\infty)$ for every $t>0$, see [@AHS13]. Examples of free regular distributions include positive free stable distributions, free Poisson distributions and powers of free Poisson distributions [@Has16]. A general criterion in [@AHS13 Theorem 4.6] shows that some boolean stable distributions [@AH14] and many probability distributions [@AHS13; @Has14; @AH16] are free regular. A recent result of Ejsmont and Lehner [@EL Proposition 4.13] provides a wide class of examples: given a nonnegative definite complex matrix $\{a_{ij}\}_{i,j=1}^n$ and free selfadjoint elements $X_1,\dots, X_n$ which have symmetric FID distributions, the polynomial $\sum_{i,j=1}^n a_{ij} X_i X_j$ has a free regular distribution with zero drift. For the free GIG distributions, the semicircular part can be found by $\displaystyle\zeta_{\alpha,\beta,\lambda}= \lim_{z\to \infty} z^{-1} r_{\alpha,\beta,\lambda}(z)=0$. The free Lévy measure satisfies $${{\rm supp}}(\tau_{\alpha,\beta,\lambda}) \subset (0,\infty), \qquad \int_0^\infty \min\{1,x\} \tau_{\alpha,\beta,\lambda}({{\rm d}}x)<\infty$$ and so we have the reduced formula . The drift is given by $\displaystyle \xi_{\alpha,\beta,\lambda}'=\lim_{u \to -\infty} r_{\alpha,\beta,\lambda}(u)=0$. 
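The two limits just used can be checked numerically from the closed form of the $R$-transform. Below is a minimal sketch; the values of $\delta$, $\eta$ and $\lambda$ are hypothetical (chosen only so that $\eta\geq\alpha>0>\delta$, not derived from $(\alpha,\beta,\lambda)$), which is enough here because the asymptotics along the negative half-line do not depend on that relation:

```python
import math

# hypothetical parameters with eta >= alpha > 0 > delta
# (not derived from (alpha, beta, lambda))
alpha, beta, delta, eta, lam = 1.0, 1.0, -1.0, 2.0, 4.0

def r(z):
    # the R-transform of the free GIG distribution, formula (R); real for z < alpha
    return (-alpha + (lam + 1.0) * z
            + 2.0 * (z - delta) * math.sqrt(beta * (eta - z))) / (2.0 * z * (alpha - z))

# zeta = lim z^{-1} r(z) = 0 and drift xi' = lim_{u -> -infty} r(u) = 0
for u in (-1e2, -1e4, -1e6, -1e8):
    print(u, r(u), r(u) / u)
```

Both $r(u)$ and $r(u)/u$ visibly decay to $0$ as $u\to-\infty$, in accordance with $\zeta_{\alpha,\beta,\lambda}=0$ and $\xi_{\alpha,\beta,\lambda}'=0$.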
Free selfdecomposability ------------------------ The classical GIG distribution is selfdecomposable [@Hal79; @SS79] (more strongly, hyperbolically completely monotone [@Bon92 p. 74]), and hence it is natural to ask whether the free GIG distribution is freely selfdecomposable.\ A distribution $\mu$ is said to be *freely selfdecomposable* (FSD) [@BNT02a] if for any $c\in(0,1)$ there exists a probability measure $\mu_c$ such that $\mu= (D_c\mu) \boxplus \mu_c $, where $D_c\mu$ is the dilation of $\mu$, namely $(D_c\mu)(B)=\mu(c^{-1}B)$ for Borel sets $B \subset {\mathbb{R}}$. A distribution is FSD if and only if it is FID and its free Lévy measure is of the form $$\label{SD Levy} \frac{k(x)}{|x|}\, {{\rm d}}x,$$ where $k\colon {\mathbb{R}}\to [0,\infty)$ is non-decreasing on $(-\infty,0)$ and non-increasing on $(0,\infty)$. Unlike the free regular distributions, there are only a few known examples of FSD distributions: the free stable distributions, some free Meixner distributions, the classical normal distributions and a few other distributions (see [@HST Example 1.2, Corollary 3.4]). The free Poisson distribution is not FSD. In view of , the free GIG distribution $\mu(\alpha,\beta,\lambda)$ is not FSD if $\lambda > 0$. Suppose $\lambda\leq0$; then $\mu(\alpha,\beta,\lambda)$ is FSD if and only if the function $$k_{\alpha,\beta,\lambda}(x)=\frac{(1-\delta x) \sqrt{\beta (1-\eta x)}}{\pi \sqrt{x} (1-\alpha x)}$$ is non-increasing on $(0,1/\eta)$. The derivative is $$k_{\alpha,\beta,\lambda}'(x) = - \frac{\sqrt{\beta} [1+(\delta-3\alpha)x +(2 \alpha \eta -2\eta\delta + \alpha \delta )x^2]}{2\pi x^{3/2} (1-\alpha x)^2 \sqrt{1-\eta x}}.$$ Hence FSD is equivalent to $$g(x):=1+(\delta-3\alpha)x +(2 \alpha \eta -2\eta\delta + \alpha \delta )x^2 \geq 0,\qquad 0\leq x \leq 1/\eta.$$ Using $\eta \geq\alpha >0>\delta$ one can show that $2 \alpha \eta -2\eta\delta + \alpha \delta >0$, and a straightforward calculation shows that the function $g$ attains its minimum at a point in $(0,1/\eta)$.
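The closed form of $k_{\alpha,\beta,\lambda}'$ above can be compared against a central finite difference of $k_{\alpha,\beta,\lambda}$ at a few points of $(0,1/\eta)$. A sketch with hypothetical parameters satisfying $\eta\geq\alpha>0>\delta$ (again not derived from $(\alpha,\beta,\lambda)$):

```python
import math

# hypothetical parameters with eta >= alpha > 0 > delta
alpha, beta, delta, eta = 1.0, 1.0, -1.0, 2.0

def k(x):
    # the function k_{alpha,beta,lambda} from the FSD criterion
    return ((1.0 - delta * x) * math.sqrt(beta * (1.0 - eta * x))
            / (math.pi * math.sqrt(x) * (1.0 - alpha * x)))

def k_prime(x):
    # the closed-form derivative displayed above
    num = (1.0 + (delta - 3.0 * alpha) * x
           + (2.0 * alpha * eta - 2.0 * eta * delta + alpha * delta) * x ** 2)
    return (-math.sqrt(beta) * num
            / (2.0 * math.pi * x ** 1.5 * (1.0 - alpha * x) ** 2 * math.sqrt(1.0 - eta * x)))

h = 1e-6
for x in (0.05, 0.2, 0.4):  # points inside (0, 1/eta) = (0, 0.5)
    fd = (k(x + h) - k(x - h)) / (2.0 * h)
    print(x, fd, k_prime(x))
```

The finite-difference values and the closed form agree to several significant digits at each sample point.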
Thus FSD is equivalent to $$D:=(\delta-3\alpha)^2 - 4 (2 \alpha \eta -2\eta\delta + \alpha \delta ) \leq 0.$$ In order to determine when the above inequality holds, it is convenient to switch to the parameters $A,B$ defined by . Using the formulas derived in Section \[sec:form\] we obtain $$D= \frac{4(B+\lambda A)(8\lambda^2 A^3 -9\lambda^2 A^2 B +B^3)}{A^2 B (A-B)^2 (B-\lambda A)}.$$ Calculating the $\lambda$ for which $D$ is non-positive we obtain that $$\lambda\leq-\frac{B^{\frac{3}{2}}}{A\sqrt{9B-8A}}.$$ One can easily find that the maximum of the function $-\frac{B^{\frac{3}{2}}}{A\sqrt{9B-8A}}$ over $A,B\geq 0$ equals $-\frac{4}{9}\sqrt{3}$. Thus the set of parameters $(A,B)$ that give FSD distributions is nonempty if and only if $\lambda \leq -\frac{4}{9}\sqrt{3}$. In the critical case $\lambda = -\frac{4}{9}\sqrt{3}$ only the pairs $(A, \frac{4}{3}A), A>0$ give FSD distributions. If one puts $A=12 t, B= 16 t$ then $a=(2-\sqrt{3})^2t,b=(2+\sqrt{3})^2t$, $\alpha = \frac{3-\sqrt{3}}{18 t}$, $\beta = \frac{3+\sqrt{3}}{18}t, \delta= -\frac{3-\sqrt{3}}{6t}=- 2\eta$. One can easily show that $\mu(\alpha,\beta,-1)$ is FSD if and only if $(0<A<)~B\leq \frac{-1+\sqrt{33}}{2} A$. Finally, note that the above result is in contrast to the fact that classical GIG distributions are all selfdecomposable. Unimodality ----------- Since the relations between free infinite divisibility and free selfdecomposability have been studied in the literature, we decided to determine whether measures from the free GIG family are unimodal. A measure $\mu$ is said to be *unimodal* if for some $c\in{\mathbb{R}}$ $$\label{UM} \mu({{\rm d}}x)=\mu(\{c\})\delta_c({{\rm d}}x)+f(x)\, {{\rm d}}x,$$ where $f\colon{\mathbb{R}}\to[0,\infty)$ is non-decreasing on $(-\infty,c)$ and non-increasing on $(c,\infty)$. In this case $c$ is called the *mode*. Hasebe and Thorbj[ø]{}rnsen [@HT16] proved that FSD distributions are unimodal. Since some free GIG distributions are not FSD, the result from [@HT16] does not apply.
However, it turns out that free GIG measures are unimodal. Calculating the derivative of the density of $\mu(\alpha,\beta,\lambda)$ one obtains $$\frac{ x (a + b - 2 x)(x \alpha + \tfrac{\beta}{\sqrt{a b}}) - 2 (b - x) (x-a) (x \alpha + \tfrac{2 \beta}{\sqrt{ a b}})}{2 x^3 \sqrt{(b - x) (x-a)}}.$$ Denoting by $f(x)$ the polynomial in the numerator (the cubic terms cancel, so $f$ is in fact quadratic), one can easily see from the shape of the density that $f(a)>0>f(b)$, and hence the derivative vanishes at a unique point in $(a,b)$. Characterizations of the free GIG distribution =========================================== In this section we show that the fGIG distribution can be characterized similarly to the classical GIG distribution. In [@Szp17] the fGIG distribution was characterized in terms of a free independence property; the classical probability analogue of this result characterizes the classical GIG distribution. In this section we find two more instances where such an analogy holds true: one is a characterization by distributional properties related to continued fractions, the other is maximization of free entropy. Continued fraction characterization ----------------------------------- In this section we study a characterization of the fGIG distribution which is analogous to the characterization of the GIG distribution proved in [@LS83]. Our strategy is different from the one used in [@LS83]. We will not deal with continued fractions, but we will take advantage of subordination for free convolutions, which allows us to prove a simpler version of the “continued fraction” characterization of the fGIG distribution. \[thm:char\] Let $Y$ have the free Poisson distribution $\nu(1/\alpha,\lambda)$ and let $X$ be free from $Y$, where $\alpha,\lambda>0$ and $X>0$. Then we have $$\begin{aligned} \label{eq:distr_char} X\stackrel{d}{=}\left(X+Y\right)^{-1}\end{aligned}$$ if and only if $X$ has the free GIG distribution $\mu(\alpha,\alpha,-\lambda)$.
Observe that the “if” part of the above theorem is contained in Remark \[rem:prop\]. We only have to show that if $\eqref{eq:distr_char}$ holds, where $Y$ has the free Poisson distribution $\nu(1/\alpha,\lambda)$, then $X$ has the free GIG distribution $\mu(\alpha,\alpha,-\lambda)$. As mentioned above, our proof uses subordination of free convolution. This property of free convolution was first observed by Voiculescu [@Voi93] and then generalized by Biane [@Bia98]. Let us briefly recall what we mean by subordination of free additive convolution. Subordination of free convolution states that for probability measures $\mu,\nu$ there exists an analytic function $\omega$ defined on $\mathbb{C}\setminus\mathbb{R}$ with the property $\omega(\overline{z})=\overline{\omega(z)}$, such that for $z\in\mathbb{C}^+$ we have ${\text{\normalfont Im}}\,\omega(z)>{\text{\normalfont Im}}\,z$ and $$G_{\mu\boxplus\nu}(z)=G_\mu(\omega(z)).$$ Now if we denote by $\omega_1$ and $\omega_2$ the subordination functions such that $G_{\mu\boxplus\nu}=G_\mu(\omega_1)$ and $G_{\mu\boxplus\nu}=G_\nu(\omega_2)$, then $\omega_1(z)+\omega_2(z)=1/G_{\mu\boxplus\nu}(z)+z$. Next we proceed with the proof of Theorem \[thm:char\], which is the main result of this section. First note that is equivalent to $$\frac{1}{X}\stackrel{d}{=}X+Y,$$ which may be equivalently stated in terms of Cauchy transforms of both sides as $$\begin{aligned} \label{eqn:CharCauch} G_{X^{-1}}(z)=G_{X+Y}(z). \end{aligned}$$ Subordination allows us to write the Cauchy transform of $X+Y$ in two ways $$\begin{aligned} \label{Sub1} G_{X+Y}(z)&=G_X(\omega_X(z)),\\ \label{Sub2} G_{X+Y}(z)&=G_Y(\omega_Y(z)). \end{aligned}$$ Moreover $\omega_X$ and $\omega_Y$ satisfy $$\begin{aligned} \omega_X(z)+\omega_Y(z)=1/G_{X+Y}(z)+z. \end{aligned}$$ From the above we get $$\begin{aligned} \label{subrel} \omega_X(z)=1/G_{X+Y}(z)+z-\omega_Y(z), \end{aligned}$$ and this together with and gives $$\begin{aligned} G_{X^{-1}}(z)&=G_X\left(\frac{1}{G_{X^{-1}}(z)}+z-\omega_Y(z)\right).
\end{aligned}$$ Since we know that $Y$ has the free Poisson distribution $\nu(1/\alpha,\lambda)$, we can calculate $\omega_Y$ in terms of $G_{X^{-1}}$ using . To do this one has to use the identity $G_Z^{\langle -1\rangle}(z)=r_Z(z)+1/z$, valid for any self-adjoint random variable $Z$, and the form of the $R$-transform of the free Poisson distribution recalled in Remark \[rem:freePoisson\]: $$\begin{aligned} \label{omegax} \omega_Y(z)=\frac{\lambda }{\alpha-G_{X^{-1}}(z) }+\frac{1}{G_{X^{-1}}(z)}. \end{aligned}$$ Now we can use , where we substitute $G_{X+Y}(z)=G_{X^{-1}}(z)$, to obtain $$\begin{aligned} \label{FE} G_{X^{-1}}(z)=G_{X}\left(\frac{\lambda }{G_{X^{-1}}(z)-\alpha }+z\right). \end{aligned}$$ Next we observe that we have $$\begin{aligned} \label{CauchInv} G_{X^{-1}}(z)=\frac{1}{z}\left(-\frac{1}{z}G_X\left(\frac{1}{z}\right)+1\right), \end{aligned}$$ which allows us to transform into an equation for $G_X$. It is enough to show that this equation has a unique solution. Indeed, from Remark \[rem:prop\] we know that the free GIG distribution $\mu(\alpha,\alpha,-\lambda)$ has the desired property, which in particular means that for $X$ distributed $\mu(\alpha,\alpha,-\lambda)$ equation is satisfied. Thus if there is a unique solution, it has to be the Cauchy transform of the free GIG distribution. To prove uniqueness, we will show that the coefficients of the expansion of $G_X$ at a special “good” point are uniquely determined by $\alpha$ and $\lambda$. First we will determine the point at which we will expand the function. Observe that under our assumptions $G_{X^{-1}}$ is well defined on the negative half-line; moreover $G_{X^{-1}}(x)<0$ for any $x<0$, and $G_{X^{-1}}(x)\to0$ as $x\to-\infty$. On the other hand, the function $f(x)=1/x-x$ is decreasing on the negative half-line, and negative for $x\in(-1,0)$.
Thus there exists a unique point $c\in(-1,0)$ such that $$\label{key eq} \frac{1}{c} = \frac{\lambda}{G_{X^{-1}}(c)-\alpha}+c.$$ Let us denote $$M(z):= G_X\left(\frac{1}{z}\right)$$ and $$\label{eqn:funcN} N(z):=\left(\frac{\lambda}{G_{X^{-1}}(z)-\alpha}+z\right)^{-1} = \frac{-z+ \alpha z^2 +M(z)}{-(1+\lambda)z^2 +\alpha z^3 + z M(z)},$$ where the last equality follows from .\ One has $N(c)=c$, and our functional equation may be rewritten (with the help of ) as $$\label{FE2} -M(z) +z = z^2 M(N(z)).$$ The functions $M$ and $N$ are analytic around any $x<0$. Consider the expansions $$\begin{aligned} M(z) &= \sum_{n=0}^\infty \alpha_n (z-c)^n, \\ N(z) &= \sum_{n=0}^\infty \beta_n (z-c)^n. \end{aligned}$$ Observe that $\beta_0=c$ since $N(c)=c$. Differentiating, we observe that each $\beta_n$, $n\geq1$, is a rational function of $\alpha, \lambda, c, \alpha_0,\alpha_1,\dots, \alpha_n$. Moreover, each $\beta_n$, $n\geq 1$, is a polynomial of degree one in $\alpha_n$. We have $$\begin{aligned} \beta_n = \frac{-\lambda}{[\alpha_0 -(1+\lambda)c+\alpha c^2]^2} \alpha_n + R_n, \end{aligned}$$ where $R_n$ is a rational function of $n+3$ variables evaluated at $(\alpha,\lambda,c,\alpha_0,\alpha_1,\dots, \alpha_{n-1})$, which does not depend on the distribution of $X$. For example, $\beta_1$ is given by $$\label{eq:beta1} \begin{split} \beta_1 &=N'(c) =\left. \left(\frac{-z+ \alpha z^2 +M(z)}{-(1+\lambda)z^2 +\alpha z^3 + z M(z)}\right)' \right|_{z=c} \\ &=\frac{-\lambda c^2 \alpha_1 + c^2(-1-\lambda +2\alpha c -\alpha^2 c^2)+2c(1+\lambda-\alpha c)\alpha_0- \alpha_0^2 }{c^2[\alpha_0-(1+\lambda)c+\alpha c^2]^2}. \end{split}$$ Next we investigate some properties of $c, \alpha_0$ and $\alpha_1$.
Evaluating both sides of at $z=c$ yields $$-M(c)+c = c^2 M(N(c)) = c^2M(c),$$ and since $M(c)=\alpha_0$ we get $$\label{eq1c} \alpha_0=\frac{c}{1+c^2}.$$ Observe that $\alpha_0= M(c) = G_X(1/c)$ and $\alpha_1=M'(c)=-c^{-2} G_X'(1/c)$, hence we have $$\frac{1}{1+c^2}= \int_{0}^\infty \frac{1}{1-c x} {{\rm d}}\mu_X( x), \qquad \alpha_1 = \int_{0}^\infty \frac{1}{(1-c x)^2} {{\rm d}}\mu_X( x),$$ where $\mu_X$ is the distribution of $X$. Using the Schwarz inequality for the first estimate, and the simple observation that $0\leq1/(1-cx) \leq1$ for $x>0$ for the latter, we obtain $$\label{eq:alpha1} \frac{1}{(1+c^2)^{2}}=\left(\int_{0}^\infty \frac{1}{1-c x} \mu_X({{\rm d}}x)\right)^2 \leq \int_{0}^\infty \frac{1}{(1-c x)^2} \mu_X({{\rm d}}x)= \alpha_1 \leq \frac{1}{1+c^2}.$$ The equation together with gives $$\label{eq2c} \frac{1}{c} = \frac{\lambda c^2}{-\alpha_0 + c - \alpha c^2} +c.$$ Substituting into , after simple calculations we get $$\label{eq:c} \alpha c^4 - (1+\lambda)c^3+(1-\lambda)c -\alpha = 0.$$ We start by showing that $\alpha_0$ is determined only by $\alpha$ and $\lambda$; for this it suffices to show that the point $c$, which we already know is unique, depends only on $\alpha$ and $\lambda$. Since the polynomial $\alpha c^4 - (1+\lambda)c^3$ is non-negative for $c<0$ and has a root at $c=0$, while the polynomial $(\lambda-1)c +\alpha$ equals $\alpha>0$ at $c=0$, it follows that there is only one negative $c$ such that the two polynomials are equal, and thus the number $c$ is uniquely determined by $(\alpha,\lambda)$. From we see that $\alpha_0$ is also uniquely determined by $(\alpha,\lambda)$. Next we will prove that $\alpha_1$ only depends on $\alpha$ and $\lambda$.
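The uniqueness of the negative root $c$ of the quartic above, and the identity relating $c$, $\alpha_0$, $\alpha$ and $\lambda$, can be illustrated numerically; a sketch with the hypothetical choice $\alpha=1$, $\lambda=2$:

```python
alpha, lam = 1.0, 2.0  # hypothetical choice of the fixed constants

def p(c):
    # the quartic alpha*c^4 - (1+lam)*c^3 + (1-lam)*c - alpha
    return alpha * c ** 4 - (1.0 + lam) * c ** 3 + (1.0 - lam) * c - alpha

# exactly one sign change on a fine grid of the negative half-line
grid = [-50.0 + k * 1e-3 for k in range(50_000)]
changes = sum(1 for u, v in zip(grid, grid[1:]) if p(u) * p(v) < 0)

# bisection for the root in (-1, 0)
lo, hi = -1.0, 0.0
for _ in range(200):
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if p(lo) * p(mid) > 0 else (lo, mid)
c = (lo + hi) / 2.0

alpha0 = c / (1.0 + c * c)
# the identity 1/c = lam*c^2 / (-alpha0 + c - alpha*c^2) + c
lhs = 1.0 / c
rhs = lam * c ** 2 / (-alpha0 + c - alpha * c ** 2) + c
print(changes, c, lhs - rhs)
```

With this choice the root lies in $(-1,0)$, is the only negative root found on the grid, and the identity holds up to numerical precision.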
Differentiating \eqref{FE2} and evaluating at $z=c$ we obtain $$\label{eq3c} 1-\alpha_1 = 2 c \alpha_0 + c^2 \alpha_1 \beta_1.$$ Substituting $\alpha_0$ from \eqref{eq1c} and $\lambda$ from \eqref{eq:c} into \eqref{eq:beta1} and simplifying, we get $$\beta_1 = \frac{(1-c^4)\alpha_1 -1+2c^2-\alpha c^3 -\alpha c^5}{c(\alpha-c+\alpha c^2)},$$ and then equation \eqref{eq3c} may be expressed in the form $$\label{eq:alpha2} c(1+c^2)^2 \alpha_1^2 + (\alpha(1+c^2)^2-2c )(1+c^2)\alpha_1 -(\alpha-c + \alpha c^2) =0.$$ The left-hand side of \eqref{eq:alpha2} is a polynomial of degree two in $\alpha_1$; denoting this polynomial by $f$, we have $$f(0) <0,\qquad f\left(\frac{1}{1+c^2}\right) = \alpha c^2 (1+c^2)>0,$$ where the first inequality follows from the fact that $c<0$. Since the leading coefficient $c(1+c^2)^2$ is negative, we conclude that $f$ has one root in the interval $(0,1/(1+c^2))$ and the other in $(1/(1+c^2),\infty)$. The inequality \eqref{eq:alpha1} implies that $\alpha_1$ is the smaller root of $f$, which is a function of $\alpha$ and $c$ and hence of $\alpha$ and $\lambda$. In order to prove that $\alpha_n$ depends only on $(\alpha,\lambda)$ for $n\geq2$, we first estimate $\beta_1$. Note that \eqref{eq1c} and \eqref{eq3c} imply that $$\label{eq4c} \beta_1 = \frac{1-c^2}{\alpha_1 c^2(1+c^2)} -\frac{1}{c^2}.$$ Combining this with the inequality \eqref{eq:alpha1} we easily get that $$\label{eq:beta} -1 \leq \beta_1 \leq -c^2.$$ Now we prove by induction on $n$ that $\alpha_n$ depends only on $\alpha$ and $\lambda$. For $n\geq2$, differentiating \eqref{FE2} $n$ times and evaluating at $z=c$, we arrive at $$\label{eq5c} -\alpha_n = c^2(\alpha_n \beta_1^n + \alpha_1 \beta_n) + Q_n,$$ where $Q_n$ is a universal polynomial (that is, a polynomial which does not depend on the distribution of $X$) in $2n+1$ variables evaluated at $(\alpha,\lambda,c,\alpha_1,\dots, \alpha_{n-1}, \beta_1,\dots, \beta_{n-1})$. By the inductive hypothesis, the values of $R_n$ and $Q_n$ depend only on $\alpha$ and $\lambda$.
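The sign pattern of $f$ invoked above can also be verified numerically for arbitrary pairs $(\alpha, c)$ with $\alpha>0$ and $c\in(-1,0)$; the sketch below (illustrative values only, independent of the argument) evaluates $f$ at $0$ and at $1/(1+c^2)$ and computes both roots:

```python
import math

def quad_coeffs(alpha, c):
    # coefficients of f(a) = A a^2 + B a + C, as in eq. (eq:alpha2)
    A = c * (1 + c * c) ** 2
    B = (alpha * (1 + c * c) ** 2 - 2 * c) * (1 + c * c)
    C = -(alpha - c + alpha * c * c)
    return A, B, C

checks = []
for alpha, c in [(1.0, -0.5), (0.3, -0.9), (2.0, -0.1)]:  # illustrative values
    A, B, C = quad_coeffs(alpha, c)
    f = lambda t: (A * t + B) * t + C
    t_star = 1.0 / (1 + c * c)
    disc = B * B - 4 * A * C
    r1, r2 = sorted([(-B - math.sqrt(disc)) / (2 * A),
                     (-B + math.sqrt(disc)) / (2 * A)])
    checks.append((f(0.0), f(t_star), alpha * c * c * (1 + c * c),
                   r1, r2, t_star))
```

Each case confirms $f(0)<0$, $f(1/(1+c^2))=\alpha c^2(1+c^2)>0$, and that the two roots straddle $1/(1+c^2)$, with the smaller one lying in $(0,1/(1+c^2))$.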
We also have $\beta_n=p \alpha_n + R_n$, where $$p := \frac{-\lambda}{[\alpha_0 -(1+\lambda)c+\alpha c^2]^2} = \frac{1-c^4}{c(\alpha-c+\alpha c^2)}.$$ The last formula is obtained by substituting $\alpha_0$ and $\lambda$ from \eqref{eq1c} and \eqref{eq:c}. Equation \eqref{eq5c} then becomes $$(1+c^2\beta_1^n + c^2 p \alpha_1)\alpha_n + c^2 \alpha_1 R_n + Q_n=0.$$ The inequalities \eqref{eq:alpha1} and \eqref{eq:beta} show that $$\begin{split} 1+c^2\beta_1^n + c^2 p \alpha_1 &\geq 1-c^2 +\frac{c^2(1-c^4)}{c(\alpha-c+\alpha c^2) (1+c^2)} =\frac{\alpha(1-c^4)}{\alpha-c+\alpha c^2}>0, \end{split}$$ so $1+c^2\beta_1^n + c^2 p \alpha_1$ is non-zero. Therefore the number $\alpha_n$ is uniquely determined by $\alpha$ and $\lambda$. Thus we have shown that if a random variable $X>0$ satisfies the functional equation for fixed $\alpha>0$ and $\lambda>0$, then the point $c$ and all the coefficients $\alpha_0,\alpha_1,\alpha_2,\dots$ of the series expansion of $M(z)$ at $z=c$ are determined by $\alpha$ and $\lambda$ alone. By analytic continuation, the Cauchy transform $G_X$ is determined uniquely by $\alpha$ and $\lambda$, so there is only one distribution of $X$ for which this equation is satisfied.

Remarks on free entropy characterization
----------------------------------------

Féral [@Fer06] proved that the fGIG distribution $\mu(\alpha,\beta,\lambda)$ is the unique probability measure which maximizes the following free entropy functional with potential $$\begin{aligned} I_{\alpha,\beta,\lambda}(\mu)=\int\!\!\!\int \log|x-y|\, {{\rm d}}\mu(x) {{\rm d}}\mu(y)-\int V_{\alpha,\beta,\lambda}(x)\, {{\rm d}}\mu(x), \end{aligned}$$ among all the compactly supported probability measures $\mu$ on $(0,\infty)$, where $\alpha, \beta>0$ and $\lambda \in {\mathbb{R}}$ are fixed constants, and $$V_{\alpha,\beta,\lambda}(x)=(1-\lambda) \log x+\alpha x+\frac{\beta}{x}.$$ Here we point out the classical analogue.
The (classical) GIG distribution is the probability measure on $(0,\infty)$ with the density $$\label{C-GIG} \frac{(\alpha/\beta)^{\lambda/2}}{2K_\lambda(2\sqrt{\alpha\beta})} x^{\lambda-1} e^{-(\alpha x + \beta /x)}, \qquad \alpha,\beta>0,\ \lambda\in{\mathbb{R}},$$ where $K_\lambda$ is the modified Bessel function of the second kind. Note that this density is proportional to $\exp(-V_{\alpha,\beta,\lambda}(x))$. Kawamura and Iwase [@KI03] proved that the GIG distribution is the unique probability measure which maximizes the classical entropy with the same potential, $$H_{\alpha,\beta,\lambda}(p) = - \int p(x) \log p(x)\, {{\rm d}}x -\int V_{\alpha,\beta,\lambda}(x)p(x)\, {{\rm d}}x,$$ among all probability density functions $p$ on $(0,\infty)$. This statement is slightly different from the original one [@KI03 Theorem 2], and for the reader’s convenience a short proof is given below. The proof is a straightforward application of Gibbs’ inequality $$\label{Gibbs} -\int p(x)\log p(x)\,{{\rm d}}x \leq -\int p(x)\log q(x)\,{{\rm d}}x,$$ valid for all probability density functions $p$ and $q$, say on $(0,\infty)$. Taking $q$ to be the density of the classical GIG distribution and computing $\log q(x)$, we obtain the inequality $$\label{C-entropy} H_{\alpha,\beta,\lambda}(p) \leq -\log \frac{(\alpha/\beta)^{\lambda/2}}{2K_\lambda(2\sqrt{\alpha\beta})}.$$ Since Gibbs’ inequality becomes an equality if and only if $p=q$, the equality in \eqref{C-entropy} holds if and only if $p=q$ as well. In view of the above observation, it is tempting to investigate the map $$C e^{-V(x)}\, {{\rm d}}x \mapsto \text{~the maximizer $\mu_V$ of the free entropy functional $I_V$ with potential $V$},$$ where $C>0$ is a normalizing constant. Under suitable assumptions on $V$, the free entropy functional $I_V$ is known to have a unique maximizer (see [@ST97]), and so the above map is well defined.
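For a concrete numerical illustration of \eqref{Gibbs} and of the normalization in \eqref{C-GIG}, one can take $\lambda=1/2$, for which the Bessel factor has the elementary closed form $K_{1/2}(z)=\sqrt{\pi/(2z)}\,e^{-z}$. The sketch below (the parameter choices are illustrative only) checks that the density integrates to one and that the entropy of a competitor density $p$ is dominated by the cross term $-\int p\log q$:

```python
import math

def log_gig_half(alpha, beta):
    # log-density of the classical GIG law with lambda = 1/2, using
    # K_{1/2}(z) = sqrt(pi / (2 z)) * exp(-z)
    z = 2.0 * math.sqrt(alpha * beta)
    k_half = math.sqrt(math.pi / (2.0 * z)) * math.exp(-z)
    log_c0 = 0.25 * math.log(alpha / beta) - math.log(2.0 * k_half)
    return lambda x: log_c0 - 0.5 * math.log(x) - alpha * x - beta / x

def simpson(f, a, b, n=20000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

log_q = log_gig_half(1.0, 1.0)   # the entropy maximizer for V_{1,1,1/2}
log_p = log_gig_half(2.0, 1.0)   # a competitor density with p != q

mass_q = simpson(lambda x: math.exp(log_q(x)), 1e-4, 60.0)
h_p    = -simpson(lambda x: math.exp(log_p(x)) * log_p(x), 1e-4, 60.0)
cross  = -simpson(lambda x: math.exp(log_p(x)) * log_q(x), 1e-4, 60.0)
```

Working with log-densities avoids evaluating $\log$ of an underflowed value near $x=0$; the truncation of the integrals to $[10^{-4}, 60]$ is harmless because both tails of the density are negligible there. The test then confirms $\text{mass}_q\approx1$ and $-\int p\log p < -\int p\log q$, as \eqref{Gibbs} requires for $p\neq q$.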
Note that the density function $C e^{-V(x)}$ is the maximizer of the classical entropy functional with potential $V$, which follows from the same arguments as above. This map sends Gaussian to semicircle, gamma to free Poisson (when $\lambda \geq1$), and GIG to free GIG. More examples can be found in [@ST97].

Acknowledgement {#acknowledgement .unnumbered}
===============

The authors would like to thank BIRS, Banff, Canada for hospitality during the workshop “Analytic versus Combinatorial in Free Probability” where we started to work on this project. TH was supported by JSPS Grant-in-Aid for Young Scientists (B) 15K17549. KSz was partially supported by the NCN (National Science Center) grant 2016/21/B/ST1/00005. O. Arizmendi and T. Hasebe, Classical and free infinite divisibility for Boolean stable laws, Proc. Amer. Math. Soc. 142 (2014), no. 5, 1621–1632. O. Arizmendi and T. Hasebe, Classical scale mixtures of Boolean stable laws, Trans. Amer. Math. Soc. 368 (2016), 4873–4905. O. Arizmendi, T. Hasebe and N. Sakuma, On the law of free subordinators, ALEA Lat. Am. J. Probab. Math. Stat. 10 (2013), no. 1, 271–291. O.E. Barndorff-Nielsen and Ch. Halgreen, Infinite divisibility of the hyperbolic and generalized inverse Gaussian distributions, Z. Wahrsch. Verw. Gebiete 38 (1977), no. 4, 309–311. O.E. Barndorff-Nielsen and S. Thorbjørnsen, Self-decomposability and Lévy processes in free probability, Bernoulli 8(3) (2002), 323–366. O.E. Barndorff-Nielsen and S. Thorbjørnsen, Lévy laws in free probability, Proc. Nat. Acad. Sci. 99 (2002), 16568–16575. H. Bercovici and V. Pata, Stable laws and domains of attraction in free probability theory. With an appendix by Philippe Biane, Ann. of Math. (2) 149 (1999), no. 3, 1023–1060. H. Bercovici and D. Voiculescu, Free convolution of measures with unbounded support, Indiana Univ. Math. J. 42, no. 3 (1993), 733–773. P. Biane, Processes with free increments, Math. Z. 227 (1998), no. 1, 143–174. L.
Bondesson, Generalized gamma convolutions and related classes of distributions and densities, Lecture Notes in Stat. 76, Springer, New York, 1992. W. Ejsmont and F. Lehner, Sample variance in free probability, J. Funct. Anal. 273, Issue 7 (2017), 2488–2520. D. Féral, The limiting spectral measure of the generalised inverse Gaussian random matrix model, C. R. Math. Acad. Sci. Paris 342 (2006), no. 7, 519–522. U. Haagerup and S. Thorbjørnsen, On the free gamma distributions, Indiana Univ. Math. J. 63 (2014), no. 4, 1159–1194. C. Halgreen, Self-decomposability of the generalized inverse Gaussian and hyperbolic distributions, Z. Wahrsch. verw. Gebiete 47 (1979), 13–17. T. Hasebe, Free infinite divisibility for beta distributions and related ones, Electron. J. Probab. 19, no. 81 (2014), 1–33. T. Hasebe, Free infinite divisibility for powers of random variables, ALEA Lat. Am. J. Probab. Math. Stat. 13 (2016), no. 1, 309–336. T. Hasebe, N. Sakuma and S. Thorbjørnsen, The normal distribution is freely self-decomposable, Int. Math. Res. Notices, available online. arXiv:1701.00409 T. Hasebe and S. Thorbjørnsen, Unimodality of the freely selfdecomposable probability laws, J. Theoret. Probab. 29 (2016), Issue 3, 922–940. F. Hiai and D. Petz, *The semicircle law, free random variables and entropy*, Mathematical Surveys and Monographs 77, American Mathematical Society, Providence, RI, 2000. T. Kawamura and K. Iwase, Characterizations of the distributions of power inverse Gaussian and others based on the entropy maximization principle, J. Japan Statist. Soc. 33, no. 1 (2003), 95–104. G. Letac and V. Seshadri, A characterization of the generalized inverse Gaussian distribution by continued fractions, Z. Wahrsch. Verw. Gebiete 62 (1983), 485–489. G. Letac and J. Wesołowski, An independence property for the product of GIG and gamma laws, Ann. Probab. 28 (2000), 1371–1383. E. Lukacs, A characterization of the gamma distribution, Ann. Math. Statist.
26 (1955), 319–324. H. Matsumoto and M. Yor, An analogue of Pitman’s $2M-X$ theorem for exponential Wiener functionals. II. The role of the generalized inverse Gaussian laws, Nagoya Math. J. 162 (2001), 65–86. J. A. Mingo and R. Speicher, *Free Probability and Random Matrices*. Springer, 2017. A. Nica and R. Speicher, *Lectures on the Combinatorics of Free Probability*. London Mathematical Society Lecture Note Series, 335. Cambridge University Press, Cambridge, 2006. V. Pérez-Abreu and N. Sakuma, Free generalized gamma convolutions, Electron. Commun. Probab. 13 (2008), 526–539. V. Pérez-Abreu and N. Sakuma, Free infinite divisibility of free multiplicative mixtures of the Wigner distribution, J. Theoret. Probab. 25, No. 1 (2012), 100–121. E.B. Saff and V. Totik, *Logarithmic Potentials with External Fields*, Springer-Verlag, Berlin, Heidelberg, 1997. K. Sato, *Lévy Processes and Infinitely Divisible Distributions*, corrected paperback edition, Cambridge Studies in Advanced Math. 68, Cambridge University Press, Cambridge, 2013. D.N. Shanbhag and M. Sreehari, An extension of Goldie’s result and further results in infinite divisibility, Z. Wahrsch. verw. Gebiete 47 (1979), 19–25. K. Szpojankowski, On the Lukacs property for free random variables, Studia Math. 228 (2015), no. 1, 55–72. K. Szpojankowski, A constant regression characterization of the Marchenko-Pastur law, Probab. Math. Statist. 36 (2016), no. 1, 137–145. K. Szpojankowski, On the Matsumoto-Yor property in free probability, J. Math. Anal. Appl. 445(1) (2017), 374–393. D. Voiculescu, Symmetries of some reduced free product $C^\ast$-algebras, in: Operator Algebras and their Connections with Topology and Ergodic Theory, 556–588, Lecture Notes in Mathematics, Vol. 1132, Springer-Verlag, Berlin/New York, 1985. D. Voiculescu, Addition of certain noncommuting random variables, J. Funct. Anal. 66 (1986), no. 3, 323–346. D. Voiculescu,
The analogues of entropy and of Fisher’s information measure in free probability theory, I, Comm. Math. Phys. 155 (1993), no. 1, 71–92. D. Voiculescu, K. Dykema and A. Nica, *Free random variables*. A noncommutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups. CRM Monograph Series, 1. American Mathematical Society, Providence, RI, 1992.
--- abstract: 'We describe the first-principles design and subsequent synthesis of a new material with the specific functionalities required for a solid-state-based search for the permanent electric dipole moment of the electron. We show computationally that perovskite-structure europium barium titanate should exhibit the required large and pressure-dependent ferroelectric polarization, local magnetic moments, and absence of magnetic ordering at liquid helium temperature. Subsequent synthesis and characterization of Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ ceramics confirm the predicted desirable properties.' author: - 'K. Z. Rushchanskii' - 'S. Kamba' - 'V. Goian' - 'P. Vaněk' - 'M. Savinov' - 'J. Prokleška' - 'D. Nuzhnyy' - 'K. Knížek' - 'F. Laufek' - 'S. Eckel' - 'S. K. Lamoreaux' - 'A. O. Sushkov' - 'M. Ležaić' - 'N. A. Spaldin' title: 'First-principles design and subsequent synthesis of a material to search for the permanent electric dipole moment of the electron' --- The Standard Model of particle physics incorporates the breaking of the discrete symmetries of parity ($P$) and the combined charge conjugation and parity ($CP$). It is thought, however, that the $CP$-violation within the framework of the Standard Model is insufficient to explain the observed matter-antimatter asymmetry of the Universe [@Trodden1999]; therefore a so far unknown source of $CP$-violation likely exists in nature. The existence of a non-zero permanent electric dipole moment (EDM) of a particle, such as an electron, neutron, or atom, would violate time reversal ($T$) symmetry (Fig. \[PT\_cartoon\]) and therefore imply $CP$-violation through the $CPT$ theorem [@Khriplovich1997]. In the Standard Model these EDMs are strongly suppressed, with the theoretical predictions lying many orders of magnitude below the current experimental limits.
However, many theories beyond the Standard Model, such as supersymmetry, contain a number of $CP$-violating phases that lead to EDM predictions within experimental reach [@Bernreuther1991]. Searching for EDMs therefore constitutes a background-free method of probing the $CP$-violating physics beyond the Standard Model. A number of experimental EDM searches are currently under way or are being developed – systems studied in these experiments include diatomic molecules [@Hudson2002; @Kawall2004], diamagnetic atoms [@Griffith2009; @Guest2007; @Tardiff2007], molecular ions [@Stutz2004], cold atoms [@Weiss2003], neutrons [@Baker2006], liquids [@Ledbetter2005], and solids [@Heidenreich2005; @Bouchard2008] – with one of the most promising novel techniques being electric-field-correlated magnetization measurements in solids [@Shapiro1968; @Lamoreaux2002; @Budker2006]. This technique rests on the fact that, since spin is the only intrinsic vector associated with the electron, a non-vanishing electron EDM is either parallel or antiparallel to its spin and hence its magnetic moment. As a result, when an electric field, which lifts the degeneracy between electrons with EDMs parallel and antiparallel to it, is applied to a sample, the associated imbalance of electron populations generates a magnetization (Fig. \[Zeeman\]). The orientation of the magnetization is reversed when the electric field direction is switched; in our proposed experiment we will monitor this change in sample magnetization using a SQUID magnetometer [@Sushkov2009; @Sushkov2010]. Such *magnetoelectric responses* in materials with permanent *macroscopic* magnetizations and polarizations are of great current interest in the materials science community because of their potential for enabling novel devices that tune and control magnetism using electric fields[@Spaldin/Ramesh:2008].
Since the experiment aims to detect the intrinsic magnetoelectric response associated with the tiny electric dipole moment of the electron, the design constraints on the material are stringent. First, the solid must contain magnetic ions with unpaired spins, since the equal and opposite spins of paired electrons have corresponding equal and opposite EDMs and contribute no effect. Second, it must be engineered such that the *conventional* linear magnetoelectric tensor is zero; our approach to achieving this is to use a paramagnet in which the conventional effect is forbidden by time-reversal symmetry[@Fiebig:2005]. To reach the required sensitivity, a high atomic density of magnetic ions ($n\approx 10^{22}$ cm$^{-3}$) is needed, and these magnetic ions must reside at sites with broken inversion symmetry. The energy splitting $\Delta$ shown in Fig. \[Zeeman\] is proportional to the product of the effective electric field experienced by the electron, $E^*$, and its electric dipole moment, $d_e$. The effective electric field, which is equal to the electric field one would have to apply to a free electron to obtain the same energy splitting, is in turn determined by the displacement of the magnetic ion from the center of its coordination polyhedron; for a detailed derivation see Ref. [@Mukhamedjanov2003]. For example, in Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ ceramics (see below) with $\sim$1 $\mu$C/cm$^2$ remanent polarization, the mean displacement of the Eu$^{2+}$ ion with respect to its oxygen cage is 0.01 Å, and this results in an effective electric field of $\sim$10 MV/cm, even when no external electric field is applied. We choose a ferroelectric so that it is possible to reverse the direction of the ionic displacements, and hence of the effective electric field, with a moderate applied electric field. Finally, the experiment will be performed inside liquid helium, so the material properties described above must persist at low temperature.
A detailed derivation of the dependence of the sensitivity on the material parameters is given in Ref. [@Sushkov2010]. Note that conventional impurities such as defects or domain walls are not detrimental to the experiment since they do not violate time-reversal symmetry. In summary, the following material specifications will allow a sensitive EDM search to be mounted: (i) The material should be ferroelectric, with a large electric polarization, and switchable at liquid He temperature. (ii) There should be a high concentration of ions with local magnetic moments that remain paramagnetic at liquid He temperature; both long-range order and freezing into a glassy state must be avoided. (iii) The local environment at each magnetic ion should be strongly modified by the ferroelectric switching, and (iv) the sample should be macroscopic. With these material properties, and optimal SQUID noise levels, the projected experimental sensitivity is 10$^{-28}$ e.cm after ten days of averaging[@Sushkov2010]. No known materials meet all the requirements. Indeed, the contra-indication between ferroelectricity and magnetism has been studied extensively over the last decade in the context of multiferroics [@Hill:2000], where the goal has been to achieve simultaneous ferroelectric and ferromagnetic ordering at high temperature. In spite of extensive efforts, a multiferroic with large and robust ferroelectricity and magnetization at room temperature remains elusive. While the low temperature constraints imposed here seem at first sight more straightforward, avoiding any magnetic ordering at low temperature while retaining a high concentration of magnetic ions poses a similarly demanding challenge. In addition, the problem of ferroelectric switchability at low temperature is challenging, since coercivities tend to increase as temperature is lowered [@Merz:1951].
We proceed by proposing a trial compound and calculating its properties using density functional theory to determine whether an experimental synthesis should be motivated. We choose an alloy of europium titanate, EuTiO$_3$ and barium titanate, BaTiO$_3$, with motivation as follows: To incorporate magnetism we require unfilled orbital manifolds of localized electrons; to avoid magnetic ordering the exchange interactions should be small. Therefore the tightly bound $4f$ electrons are likely to be the best choice. For conventional ferroelectricity we require transition metal ions with empty $d$ orbitals to allow for good hybridization with coordinating anions on off-centering [@Rondinelli/Eidelson/Spaldin:2009]. (Note that while here we use a conventional ferroelectric mechanism, many alternative routes to ferroelectricity that are compatible with magnetism – and which could form a basis for future explorations – have been recently identified; for a review see Ref. ). Both EuTiO$_3$ and BaTiO$_3$ form in the ABO$_3$ perovskite structure, with divalent Eu$^{2+}$ or Ba$^{2+}$ on the A site, and formally $d^0$ Ti$^{4+}$ on the B site. BaTiO$_3$ is a prototypical ferroelectric with a large room temperature polarization of 25 $\mu$C/cm$^2$.[@Wemple:1968] In the cubic paraelectric phase its lattice constant is 3.996 Å [@Miyake/Ueda:1947]. The Ba$^{2+}$ ion has an inert gas electron configuration and hence zero magnetic moment. The lattice parameter of EuTiO$_3$ is 3.905 Å [@Katsufuji/Takagi:2001], notably smaller than that of BaTiO$_3$. It is not ferroelectric, but has a large dielectric constant ($\epsilon \approx 400$) at low temperature, indicative of proximity to a ferroelectric phase transition; indeed it has recently been reported to be a quantum paraelectric[@Katsufuji/Takagi:2001; @kamba:2007]. 
First-principles electronic structure calculations have shown that ferroelectricity should be induced along the elongation direction by either compressive or tensile strain [@Fennie/Rabe:2006]. The Eu$^{2+}$ ion has seven unpaired localized $4f$ electrons resulting in a large spin magnetization of 7 $\mu_B$, and EuTiO$_3$ is an antiferromagnet with $G$-type ordering at a low Néel temperature of $\sim$5.3K [@McGuire_et_al:1966; @Chien/DeBenedetti/Barros:1974]. (Independently of the study presented here, EuTiO$_3$ is of considerable current interest because its dielectric response is strongly affected by the magnetic ordering [@Katsufuji/Takagi:2001; @kamba:2007] and because of its unusual third order magnetoelectric response [@Shvartsman_et_al:2010]. These behaviors indicate coupling between the magnetic and dielectric orders caused by sensitivity of the polar soft mode to the magnetic ordering [@Fennie/Rabe:2006; @Goian:2009].) Our hypothesis is that by alloying Ba on the A-site of EuTiO$_3$, the magnetic ordering temperature will be suppressed through dilution, and the tendency to ferroelectricity will be increased through the expansion of the lattice constant. Our hope is to identify an alloying range in which the magnetic ordering temperature is sufficiently low while the ferroelectric polarization and the concentration of magnetic ions remain sufficiently large. In addition, we expect that the polarization will be sensitive to the lattice constant, allowing its magnitude and consequently the coercivity, to be reduced with pressure. First-Principles Calculations ============================= Taking the 50/50 (Eu,Ba)TiO$_3$ ordered alloy as our starting point (Fig. \[th\_phonons\] inset), we next calculate its properties using first-principles. For details of the computations see the Methods section. 
We began by calculating the phonon dispersion for the high symmetry, cubic perovskite reference structure at a lattice constant of 3.95 Å (chosen, somewhat arbitrarily, for this first step because it is the average of the experimental BaTiO$_3$ and EuTiO$_3$ lattice constants), with the magnetic spins aligned ferromagnetically; our results are shown in Fig. \[th\_phonons\], plotted along the high symmetry lines of the Brillouin zone. Importantly we find a polar $\Gamma$-point instability with an imaginary frequency of 103$i$ cm$^{-1}$ which is dominated by relative oxygen – Ti/Eu displacements (the eigenmode displacements for Eu, Ba, Ti, O$_{\parallel}$ and O$_{\perp}$ are 0.234, -0.059, 0.394, -0.360 and -0.303 respectively); such polar instabilities are indicative of a tendency to ferroelectricity. The zone boundary rotational instabilities that often occur in perovskite oxides and lead to non-polar, antiferrodistortive ground states are notably absent (in fact the flat bands at $\sim$60 cm$^{-1}$ are stable rotational vibrations). Interestingly we find that the Eu ions have a significant amplitude in the soft-mode eigenvector, in contrast to the Ba ions both here and in the parent BaTiO$_3$. Next we performed a structural optimization of both the unit cell shape and the ionic positions of our Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ alloy with the total volume constrained to that of the ideal cubic structure studied above (3.95$^3$ Å$^3$ per formula unit). Our main finding is that the Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ alloy is polar with large relative displacements of oxygen and both Ti and Eu relative to the high symmetry reference structure. Using the Berry phase method we obtain a ferroelectric polarization value of $P = 23$ $\mu$C/cm$^2$. Our calculated ground state is orthorhombic with the polarization oriented along a \[011\] direction and lattice parameters $a=3.94$ Å, $b=5.60$ Å and $c=5.59$ Å. 
As expected from our analysis of the soft mode, the calculated ground state is characterized by large oxygen – Ti/Eu displacements, and the absence of rotations or tilts of the oxygen octahedra. Importantly, the large Eu amplitude in the soft mode manifests as a large off-centering of the Eu from the center of its oxygen coordination polyhedron in the ground state structure. The origin of the large Eu displacement lies in its small ionic radius compared with that of divalent Ba$^{2+}$: The large coordination cage around the Eu ion which is imposed by the large lattice constant of the alloy results in under-bonding of the Eu that can be relieved by off-centering. Indeed, we find that in calculations for fully relaxed single phase EuTiO$_3$, the oxygen octahedra tilt to reduce the volume of the A site in a similar manner to those known to occur in SrTiO$_3$, in which the A cation size is almost identical. This Eu off-centering is desirable for the EDM experiment because the change in local environment at the magnetic ions on ferroelectric switching determines the sensitivity of the EDM measurement.

  Volume (Å$^3$)         P ($\mu$C/cm$^2$)
  ---------------------- -------------------
  61.63 (constrained)    23
  62.30 (experimental)   28
  64.63 (relaxed)        44

  : Calculated ferroelectric polarizations, P, of Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ at three different volumes.[]{data-label="PversusV"}

We note that the magnitude of the polarization is strongly dependent on the volume used in the calculation (Table \[PversusV\]). At the experimental volume (reported in the next section), which is only slightly larger than our constrained volume of $3.95^3$ Å$^3$, we obtain a polarization of 28 $\mu$C/cm$^2$. At full relaxation, where we find a larger volume close to that of BaTiO$_3$, we obtain a polarization of 44 $\mu$C/cm$^2$, almost certainly a substantial over-estimate.
This volume dependence suggests that the use of pressure to reduce the lattice parameters and suppress the ferroelectric polarization could be a viable tool for reducing the coercivity at low temperatures. Indeed our computations show that, at a pressure corresponding to 2.8 GPa applied to the experimental volume the theoretical structure is cubic, with both the polarization and coercive field reduced to zero. Finally, to investigate the likelihood of magnetic ordering, we calculated the relative energies of the ferromagnetic state discussed above and of two antiferromagnetic arrangements: planes of ferromagnetically ordered spins coupled antiferromagnetically along either the pseudo-cubic $z$ axis or the $x$ or $y$ axes. (Note that these are degenerate in the high-symmetry cubic structure). For each magnetic arrangement we re-relaxed the lattice parameters and atomic positions. As expected for the highly localized Eu $4f$ electrons on their diluted sublattice, the energy differences between the different configurations are small – around 1 meV per 40 atom supercell – suggesting an absence of magnetic ordering down to low temperatures. While our calculations find the ferromagnetic state to be the lowest energy, this is likely a consequence of our A-site ordering and should not lead us to anticipate ferromagnetism at low temperature (Note that, after completing our study, we found a report of an early effort to synthesize (Eu,Ba)TiO$_3$[@Janes/Bodnar/Taylor:1978] in which a large magnetization, attributed to A-site ordering and ferromagnetism, was reported. A-site ordering is now known to be difficult to achieve in perovskite-structure oxides, however, and we find no evidence of it in our samples. Moreover the earlier work determined a tetragonal crystal structure in contrast to our refined orthorhombic structure.) 
In summary, our predicted properties of the (Eu,Ba)TiO$_3$ alloy – large ferroelectric polarization, reducible with pressure, with large Eu displacements, and strongly suppressed magnetic ordering – meet the criteria for the electron electric dipole moment search and motivate the synthesis and characterization of the compound, described next.

Synthesis
=========

Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ was synthesized by solid-state reaction using mechanochemical activation before calcination. For details see the Methods section. The density of the sintered pellets was 86–88% of the theoretical density. X-ray diffraction at room temperature revealed the cubic perovskite $Pm\bar{3}m$ structure with a=3.9642(1)Å. At 100 K we obtain an orthorhombic ground state with space group $Amm2$, in agreement with the GGA$+U$ prediction, and lattice parameters 3.9563(1), 5.6069(2) and 5.5998(2) Å.

Characterization
================

The final step in our study is the characterization of the samples, to check that the measured properties are indeed the same as those that we predicted and desired. Figure \[Fig3\] shows the temperature dependence of the complex permittivity between 1 Hz and 1 MHz, measured using an impedance analyzer ALPHA-AN (Novocontrol). The low-frequency data below 100 kHz are affected above 150 K by a small defect-induced conductivity and related Maxwell-Wagner polarization; the high-frequency data clearly show a maximum in the permittivity near $T_c$=213 K indicating the ferroelectric phase transition. Two regions of dielectric dispersion – near 100 K and below 75 K – are seen in tan$\delta(T)$; these could originate from oxygen defects or from ferroelectric domain wall motion. Measurement of the polarization was adversely affected by the sample conductivity above 150 K, but at lower temperatures good quality ferroelectric hysteresis loops were obtained (Fig. \[Fig3\], inset).
At 135 K we obtain a saturation polarization of $\sim$8 $\mu$C/cm$^2$. The deviation from the predicted value could be the result of incomplete saturation, as well as of the strong volume dependence of the polarization combined with the well-known inaccuracies in GGA$+U$ volumes. As expected, at lower temperatures the coercive field strongly increases, and only partial polarization switching was possible even with an applied electric field of 18 kV/cm (at higher electric fields dielectric breakdown was imminent). The partial switching is responsible for the apparent decrease in saturation polarization below 40 K. Time-domain THz transmission and infrared reflectivity spectra (not shown here) reveal a softening of the polar phonon from $\sim$40 cm$^{-1}$ at 300 K to $\sim$15 cm$^{-1}$ at $T_c$, and then its splitting into two components in the ferroelectric phase. Both components harden on cooling below $T_c$, with the lower frequency component remaining below 20 cm$^{-1}$ down to 10 K, and the higher-frequency branch saturating near 70 cm$^{-1}$ at 10 K. This behavior is reminiscent of the soft-mode behavior in BaTiO$_{3}$[@Hlinka:2008]. However, when we extract the contribution to the static permittivity that comes from the polar phonon, we find that it is considerably smaller than our measured value (Fig. \[Fig3\]), indicating an additional contribution to the dielectric relaxation. Our observations suggest that the phase transition is primarily soft-mode driven, but also exhibits some order-disorder character. Finally, we measured the magnetic susceptibility $\chi$ at various static magnetic fields as a function of temperature down to 0.4 K. (For details see the Methods section.) Our results are shown in Fig. \[Fig4\]. $\chi(T)$ peaks at $T\sim$1.9 K, indicating an absence of magnetic ordering above this temperature.
The $\chi(T)$ data up to 300K show Curie-Weiss behavior $\chi(T)=\frac{C}{T+\theta}$ with $\theta$=-1.63K and $C = 0.017$ emuK/(gOe). The peak in susceptibility at 1.9K is frequency independent and not influenced by zero field heating measurements after field cooling, confirming antiferromagnetic order below $T_N = 1.9$K. As in pure EuTiO$_3$, the $\chi(T)$ peak is suppressed by a static external magnetic field, indicating stabilization of the paramagnetic phase [@Katsufuji/Takagi:2001]. Magnetization curves (Fig. \[Fig4\] inset) show saturation above $2\times10^4$ Oe at temperatures below $T_{N}$ and slower saturation at 5K. No open magnetic hysteresis loops were observed. In summary, we have designed a new material – Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ – with the properties required to enable a measurement of the EDM to a higher accuracy than can currently be realized. Subsequent synthesis of Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ ceramics confirmed their desirable ferroelectric polarization and absence of magnetic ordering above 1.9K. The search for the permanent dipole moment of the electron using Eu$_{0.5}$Ba$_{0.5}$TiO$_3$ is now underway. Initial measurements have already achieved an EDM upper limit of 5 $\times 10^{-23}$ e.cm, which is within a factor of 10 of the current record with a solid-state-based EDM search [@Heidenreich2005]. We are currently studying a number of systematic effects that may mask the EDM signal. The primary error originates from ferroelectric hysteresis-induced heating of the samples during polarization reversal. This heating gives rise to a change in magnetic susceptibility, which, in a non-zero external magnetic field, leads to an undesirable sample magnetization response. We are working to control the absolute magnetic field at the location of the samples to the 0.1 $\mu$G level. Our projected sensitivity of 10$^{-28}$ e.cm should then be achievable. 
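The Curie-Weiss analysis above amounts to a linear fit of the inverse susceptibility. As an illustrative sketch (Python with synthetic data generated from the quoted parameters $\theta=-1.63$ K and $C=0.017$ emu K/(g Oe), not the measured $\chi(T)$), the fit can be done as follows:

```python
import numpy as np

# Curie-Weiss law: chi(T) = C / (T + theta)
theta_true = -1.63   # K (value quoted in the text)
C_true = 0.017       # emu K / (g Oe)

T = np.linspace(5.0, 300.0, 200)      # temperatures above the ordering peak
chi = C_true / (T + theta_true)       # synthetic susceptibility data

# 1/chi = T/C + theta/C is linear in T, so a degree-1 polynomial fit
# gives slope = 1/C and intercept = theta/C.
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = intercept * C_fit

print(C_fit, theta_fit)
```

Since $1/\chi = (T+\theta)/C$ is linear in $T$, the slope and intercept of the fit return $C$ and $\theta$ directly; on real data one would restrict the fit to the paramagnetic regime well above the peak.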
Acknowledgments =============== This work was supported by the US National Science Foundation under award number DMR-0940420 (NAS), by Yale University, by the Czech Science Foundation (Project Nos. 202/09/0682 and AVOZ10100520) and the Young Investigators Group Programme of Helmholtz Association, Germany, contract VH-NG-409. We thank O. Pacherova, R. Krupkova and G. Urbanova for technical assistance and Oleg Sushkov for invaluable discussions. Author contributions ===================== SKL supervised the EDM measurement effort at Yale. AOS and SE performed the analysis and made preliminary measurements, showing that these materials could be useful in an EDM experiment. ML and NAS selected (Eu,Ba)TiO$_3$ as the candidate material according to the experimental requirements and supervised the ab-initio calculations. KZR performed the ab-initio calculations. ML, NAS and KZR analysed the ab-initio results and wrote the theoretical component of the paper. Ceramics were prepared by PV. Crystal structure was determined by KK and FL. Dielectric measurements were performed by MS. JP investigated magnetic properties of ceramics. VG performed infrared reflectivity studies. DN investigated THz spectra. SK coordinated all experimental studies and wrote the synthesis and characterization part of manuscript. NAS coordinated the preparation of the manuscript. Methods ======= Computational details --------------------- We performed first-principles density-functional calculations within the spin-polarized generalized gradient approximation (GGA) [@PBE:1996]. The strong on-site correlations of the Eu $4f$ electrons were treated using the GGA+$U$ method [@Anisimov/Aryasetiawan/Liechtenstein:1997] with the double counting treated within the Dudarev approach [@Dudarev_et_al:1998] and parameters $U=5.7$ eV and $J=1.0$ eV. 
For structural relaxation and lattice dynamics we used the Vienna *Ab Initio* Simulation Package (VASP) [@VASP_Kresse:1996] with the default projector augmented-wave (PAW) potentials [@Bloechl:1994] (valence-electron configurations Eu: $5s^2 5p^6 4f^{7}6s^{2}$, Ba: $5s^{2}5p^{6}6s^{2}$, Ti: $3s^{2}3p^{6}3d^{2}4s^{2}$ and O: $2s^{2}2p^{4}$.) Spin-orbit interaction was not included. The 50/50 (Eu,Ba)TiO$_3$ alloy was represented by an ordered A-site structure with the Eu and Ba ions alternating in a checkerboard pattern (Fig. \[th\_phonons\], inset). Structural relaxations and total energy calculations were performed for a 40-atom supercell (consisting of two 5-atom perovskite unit cells in each cartesian direction) using a $4\times4\times4$ $\Gamma$-centered $k$-point mesh and a plane-wave cutoff of 500 eV. Ferroelectric polarizations and Born effective charges were calculated using the Berry phase method [@King-Smith:1993]. Lattice instabilities were investigated in the frozen-phonon scheme [@Kunc:1982; @Alfe:2009] for an 80 atom supercell using a $\Gamma$-centered $2\times2\times2$ $k$-point mesh and 0.0056 Å atomic displacements to extract the Hellman-Feynman forces. Synthesis --------- Eu$_2$O$_3$, TiO$_2$ (anatase) and BaTiO$_3$ powders (all from Sigma-Aldrich) were mixed in stoichiometric ratio then milled intensively in a planetary ball micro mill Fritsch Pulverisette 7 for 120min. in a dry environment followed by 20 min. in suspension with n-heptane. ZrO$_2$ grinding bowls (25ml) and balls (12mm diameter, acceleration 14g) were used. The suspension was dried under an IR lamp and the dried powder was pressed in a uniaxial press (330MPa, 3min.) into 13mm diameter pellets. The pellets were calcined in pure H$_2$ atmosphere at 1200[$\,{}^\circ$C]{} for 24hr (to reduce Eu$^{3+}$ to Eu$^{2+}$), then milled and pressed by the same procedure as above and sintered at 1300[$\,{}^\circ$C]{} for 24hr in Ar+10%H$_2$ atmosphere. 
Note that pure H$_2$ can not be used for sintering without adversely increasing the conductivity of the sample. Characterization ---------------- Magnetic susceptibility was measured using a Quantum Design PPMS9 and a He$^3$ insert equipped with a home-made induction coil that allows measurement of ac magnetic susceptibility, $\chi$ from 0.1 to 214Hz. 
--- abstract: 'In three-dimensional turbulent flows, the flux of energy from large to small scales breaks time symmetry. We show here that this irreversibility can be quantified by following the relative motion of several Lagrangian tracers. We find by analytical calculation, numerical analysis and experimental observation that the existence of the energy flux implies that, at short times, two particles separate more slowly forwards in time than backwards, and the difference between forward and backward dispersion grows as $t^3$. We also find that the geometric deformation of material volumes, surrogated by four points spanning an initially regular tetrahedron, is sensitive to time reversal, with an effect growing linearly in $t$. We associate this with the structure of the strain rate in the flow.' author: - Jennifer Jucha - Haitao Xu - Alain Pumir - Eberhard Bodenschatz title: 'Time-symmetry breaking in turbulence' --- In turbulent flows, far from boundaries, energy flows from the scale at which it is injected, $l_I$, to the scale where it is dissipated, $l_D$. For intense three-dimensional turbulence, $l_D \ll l_I$, and the energy flux, ${\epsilon}$, is from large to small scales [@frisch95]. As a consequence, time symmetry is broken, since the time reversal $t \rightarrow -t$ would also reverse the direction of the energy flux. Exploring the implications of this time asymmetry on the relative motion between fluid particles is the aim of this Letter. The simplest problem in this context concerns the dispersion of two particles whose positions, $\mathbf{r}_1(t)$ and $\mathbf{r}_2(t)$, are separated by $\mathbf{R}(t) = \mathbf{r}_2(t) - \mathbf{r}_1(t)$. The growth of the mean squared separation, $\langle \mathbf{R}^2 (t) \rangle$, forwards ($t>0$) and backwards in time ($t<0$) is a fundamental question in turbulence research [@R26] and is also related to important problems such as turbulent diffusion and mixing [@S01; @SC09]. 
At long times, both for $t > 0$ and $t < 0$, it is expected that the distance between particles increases according to the Richardson prediction as $ \langle \mathbf{R}^2 (t) \rangle \approx g_{f,b} {\epsilon}|t|^3$ [@SC09], with two constants, $g_f$ and $g_b$, for forward and backward dispersion, respectively. The lack of direct evidence for the Richardson $t^3$ regime in well-controlled laboratory flows [@B06a] or in Direct Numerical Simulations (DNS) [@SC09; @BIC14] makes the determination of the constants $g_f$ and $g_b$ elusive, although it is expected that $g_b > g_f$ [@SC09; @SYB05; @B06]. In this Letter we show that [for short times]{} the flow irreversibility imposes a quantitative relation between forward and backward particle dispersion. For particle pairs, the energy flux through scales is captured by $$\left\langle \frac{d}{dt} \left[\mathbf{v}_2(t) - \mathbf{v}_1(t) \right]^2 \Big|_0 \right\rangle = - 4 {\epsilon}, \label{eq:flux_lag}$$ where $\mathbf{v}_{1}(t) $ and $\mathbf{v}_{2}(t)$ are the Lagrangian velocities of the particles and the average is taken over all particle pairs with the same initial separation, $ | \mathbf{R}(0) | =R_0$, in the inertial subrange ($l_D \ll R_0 \ll l_I$). Equation (\[eq:flux\_lag\]) is exact in the limit of very large Reynolds number [@MOA99; @FGV01; @PSC01] and can be seen as the Lagrangian version of the Kolmogorov 4/5-law [@frisch95]. For short times, Eq. (\[eq:flux\_lag\]) implies that backward particle dispersion is faster than the forward case, with $$\langle \mathbf{R}^2(-t) \rangle - \langle \mathbf{R}^2(t) \rangle = 4 {\epsilon}t^3 + {\mathcal{O}}(t^5). \label{eq:diff_bac_for}$$ The $t^3$ power in Eq. (\[eq:diff\_bac\_for\]) is strongly reminiscent of the Richardson prediction, with the expectation that $g_b > g_f$ at longer times. The relation between the irreversibility predicted by Eq. (\[eq:diff\_bac\_for\]) and the one expected at longer times ($g_b > g_f$), however, remains to be established. 
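Equation (\[eq:diff\_bac\_for\]) is a statement about the parity of the Taylor coefficients of $\langle \mathbf{R}^2(t)\rangle$. A small symbolic check (a sympy sketch; the symbols $c_2$, $c_4$, $c_5$ stand for the unspecified higher Taylor coefficients) fixes the cubic coefficient to $\langle \mathbf{u}(0)\cdot\mathbf{a}(0)\rangle = -2{\epsilon}$, which is Eq. (\[eq:flux\_lag\]) since $2\langle \mathbf{u}\cdot\mathbf{a}\rangle = \frac{d}{dt}\langle \mathbf{u}^2\rangle$, and confirms that the even powers cancel in the backward-forward difference:

```python
import sympy as sp

t, eps = sp.symbols('t epsilon', positive=True)
c2, c4, c5 = sp.symbols('c2 c4 c5')   # unspecified Taylor coefficients

# Short-time expansion of <R^2(t)> - R0^2.  The linear term vanishes by
# isotropy; the cubic coefficient <u(0).a(0)> equals -2*eps by the flux
# identity, since 2<u.a> = d/dt <u^2> = -4*eps at t = 0.
R2 = c2 * t**2 - 2 * eps * t**3 + c4 * t**4 + c5 * t**5

diff = sp.expand(R2.subs(t, -t) - R2)   # backward minus forward
print(diff)
```

The even coefficients $c_2$ and $c_4$ drop out, leaving $4{\epsilon}t^3$ at leading order, in agreement with Eq. (\[eq:diff\_bac\_for\]).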
Whereas the difference between backward and forward pair dispersion at short times is weak ($\propto t^3$), we found a strong manifestation of the time asymmetry when investigating multi-particle dispersion. The analysis of the deformation of an initially regular tetrahedron consisting of four tracer particles [@PSC00; @XOB08] reveals a stronger flattening of the shape forwards in time, but a stronger elongation backwards in time. We relate the observed time asymmetry in the shape deformation to a fundamental property of the flow [@Betchov56; @Siggia81; @Ashurst87; @Pumir13] by investigating the structure of the perceived rate of strain tensor based on the velocities of the four Lagrangian particles [@XPB11]. Our finding relies on analytical calculation, DNS, and data from 3D Lagrangian particle tracking in a laboratory flow. The experiments were conducted with a von Kármán swirling water flow. The setup consisted of a cylindrical tank with a diameter of $\unit{48.3}{\centi\meter}$ and a height of $\unit{60.5}{\centi\meter}$, with counterrotating impellers installed at the top and bottom. Its geometry is very similar to the one described in Ref. [@O06], but with a slightly different design of the impellers to weaken the global structure of the flow. At the center of the tank, where the measurements were performed, the flow is nearly homogeneous and isotropic. As tracers for the fluid motion, we used polystyrene microspheres of density $\rho=1.06\, \rho_{\text{water}} $ and a diameter close to the Kolmogorov length scale, $\eta$. We measured the trajectories of these tracers using Lagrangian particle tracking with sampling rates exceeding 20 frames per Kolmogorov time scale, $\tau_\eta$ [@O06a; @X08]. 
We obtained three data sets at $R_\lambda=270$, $350$ and $690$, with corresponding Kolmogorov scales $\eta=\unit{105}{\micro\meter}$, $\unit{66}{\micro\meter}$, and $\unit{30}{\micro\meter}$ and $\tau_\eta=\unit{11.1}{\milli\second}$, $\unit{4.3}{\milli\second}$, and $\unit{0.90}{\milli\second}$, respectively. The integral length scales of $L\approx\unit{5.5}{\centi\meter}$ for the first two and $L\approx\unit{7.0}{\centi\meter}$ for the last data set are both smaller than the size of the measurement volume, which is approximately $(\unit{8}{\centi\meter})^3$. Many independent, one-second recordings of $\sim 100$ particles were combined to generate sufficient statistics. For example, the $R_\lambda = 690$ dataset contains 555,479 particle trajectories lasting at least $20 \tau_\eta$. Our experimental results are compared to DNS data obtained from pseudo-spectral codes [@vosskuhle:2013; @li2008; @Y12]. To study the dispersion between two particles, it is more convenient to analyze the change in separation, $\delta \mathbf{R}(t) = \mathbf{R}(t) - \mathbf{R}(0)$, than the separation $\mathbf{R}(t)$ itself [@B50; @O06; @SC09]. We expand $\delta \mathbf{R}(t)$ in a Taylor series and average over many particle pairs with a fixed initial separation $ | \mathbf{R}(0)|=R_0$ to obtain $$\frac{\langle \delta \mathbf{R}(t)^2\rangle}{R_0^2} = \frac{\langle \mathbf{u}(0)^2\rangle}{R_0^2} t^2 + \frac{\left\langle \mathbf{u}(0) \cdot \mathbf{a}(0) \right\rangle}{R_0^2} t^3 + {\mathcal{O}}(t^4) , \label{eq:evol_dR2}$$ where $\mathbf{u}(0)$ and $\mathbf{a}(0)$ are the relative velocity and acceleration between the two particles at time $t=0$. Using Eq.  reduces the $t^3$ term in Eq.  to $-2 (t/t_0)^3$, where $t_0 = (R_0^2/{\epsilon})^{1/3}$ is the (Kolmogorov) time scale characteristic of the motion of eddies of size $R_0$ [@frisch95]. Eq.  
can thus be expressed as $$\frac{\langle \delta \mathbf{R}(t)^2\rangle}{R_0^2} = \frac{\langle \mathbf{u}(0)^2\rangle}{({\epsilon}R_0)^{2/3}} \Bigl( \frac{t}{t_0} \Bigr)^2 - 2 \Bigl(\frac{t}{t_0} \Bigr)^3 + {\mathcal{O}}(t^4). \label{eq:evol_dR2_nodim}$$ For short times, the dominant behavior is given by the $t^2$ term in Eq.  [@B50], which is even in $t$, and thus reveals no asymmetry in time. The odd $t^3$ term is the first to break the $ t \rightarrow -t$ symmetry. This is better seen from the difference between the forward and backward dispersion, $$\begin{aligned} \frac{\langle \delta \mathbf{R}(-t)^2- \delta \mathbf{R}(t)^2\rangle}{R_0^2} & = -2 \frac{\left\langle \mathbf{u}(0) \cdot \mathbf{a}(0) \right\rangle}{R_0^2} t^3 + {\mathcal{O}}(t^5) \nonumber \\ &= 4 ({t}/{t_0})^3 + {\mathcal{O}}(t^5), \label{eq:Rb_Rf}\end{aligned}$$ which is equivalent to Eq. . We note that the simple form of Eq. , which suggests that the evolution of $\langle \delta \mathbf{R}^2(t) \rangle$ depends on $(t/t_0)$ alone, is accurate only up to ${\mathcal{O}}(t/t_0)^3$. Not all higher-order terms in the Taylor expansion can be reduced to functions of $(t/t_0)$ [@F13]. To test Eq. , we identified particle pairs from our large set of experimental and numerical trajectories with a given initial separation $R_0$ and studied the evolution of $\delta \mathbf{R}(t)^2$, both forwards and backwards in time. One of the difficulties of reliably measuring $\langle \delta \mathbf{R}(t)^2 \rangle$ in experiments comes from the finite size of the measurement volume in which particles are tracked. The residence time of particle pairs in the measurement volume decreases with the separation velocity, inducing a bias. We analyze how this affects the results in the Appendix and show that the effect is weak. The very good agreement between experiments and DNS convinces us that the finite-volume bias does not alter our results. ![(color online). 
The difference between the backward and forward mean squared relative separation, $\langle \delta \mathbf{R}(-t)^2 - \delta \mathbf{R}(t)^2 \rangle$, compensated using Eq. . The symbols correspond to experiments: circles for $R_\lambda = 690$ ($R_0/\eta = 267,\,333,\,400$), stars for $R_\lambda = 350$ ($R_0/\eta = 152,\,182,\,212$), and squares for $R_\lambda = 270$ ($R_0/\eta = 95,\,114,\,133$). The lines correspond to DNS at $R_\lambda=300$ ($R_0/\eta=19,\,38,\,58,\,77,\,92,\,123$).[]{data-label="figThirdOrder"}](./figure1.eps){height="\picheight"} Fig. \[figThirdOrder\] shows the difference, $\langle \delta \mathbf{R}^2(-t) - \delta \mathbf{R}^2(t) \rangle$, compensated by $- \frac{\left\langle \mathbf{u}(0) \cdot \mathbf{a}(0) \right\rangle}{2 R_0^2} t^3 $, using Eq. , obtained from both experiments and DNS at 4 different Reynolds numbers. The DNS, $R_\lambda = 300$ data consisted of $32,768$ particle trajectories in a statistically stationary turbulent flow [@vosskuhle:2013] over $\sim 4.5$ large-eddy turnover times, allowing particle pairs with a prescribed size to be followed for a long period of time. The data all show a clear plateau up to $t\approx t_0/10$, in complete agreement with Eq. . At longer times, both experimental and DNS data decrease rapidly towards zero without any sign of the plateau expected from the Richardson prediction, $$\frac{\langle \delta \mathbf{R}(-t)^2- \delta \mathbf{R}(t)^2\rangle }{R_0^2} =(g_b - g_f) \Bigl(\frac{t}{t_0} \Bigr)^3 . \label{eqRichardson}$$ While the slightly faster decay of the experimental data for $t \gtrsim t_0$ could be due to a residual finite-volume bias, this should not affect the DNS data. Previous experiments at $R_\lambda = 172$ with initial separations in the range $4 \le R_0/\eta \le 28$ suggested a value of the difference of $(g_b - g_f) = 0.6 \pm 0.1$ [@B06]. Fig. 
\[figThirdOrder\] does not provide evidence for this value, although it does not rule out the existence of a plateau at a lower value of $(g_b - g_f)$. Note that Eq.  predicts the time irreversibility caused by the energy flux to persist into the inertial range and remarkably to grow as $t^3$ as well. It is therefore tempting to draw an analogy between Eq. , which is exact and valid at short times, and the expected Richardson regime at longer times [@B12]. The fact that a plateau corresponding to $(g_b - g_f)$ would be substantially lower than the value of $4$ given by Eq.  indicates that the connection between the short-time behavior, Eq. , and the longer-time behavior, Eq. , requires a deeper understanding. The time irreversibility predicted by Eq.  for particle pair separations grows slowly at small times, $\propto t^3$. We discuss below a stronger ($\propto t$) manifestation of the time irreversibility by analyzing the evolution of four particles initially forming a regular tetrahedron. Additionally, the motion of tetrahedra provides insight into the structure of a flow [@CPS99; @PSC00; @XOB08; @XPB11; @Pumir13] and in fact into the origin of the irreversibility observed in particle pair separation. The geometry of a set of four points $({\mathbf}{x}_1, ... {\mathbf}{x}_4)$, i.e., a tetrahedron, can be effectively described by three vectors. The position of the tetrahedron is immaterial in a homogeneous flow. The shape tensor, $G_{ij} = \sum_a (x_{a,i} - x_{C,i})(x_{a,j} - x_{C,j})$, where $x_{a,i}$ is the $i^{th}$ component of ${\mathbf}{x}_a$, provides an effective description of the tetrahedron geometry. The radius of gyration of the tetrahedron, $R^2(t) = \text{tr}(\mathbf{G})=\frac14 \sum_{a<b} |{\mathbf}{x}_a(t) - {\mathbf}{x}_b(t)|^2$, is simply given by the trace of $\mathbf{G}$. The shape is described by the three eigenvalues $g_i$ of $G$, with $g_1\geq g_2 \geq g_3$. 
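These definitions are easy to evaluate numerically. The following sketch (numpy; an illustration, not the analysis code used for the data) builds $\mathbf{G}$ for four points, together with the radius of gyration and the sorted eigenvalues, and verifies that $\text{tr}(\mathbf{G})$ equals one quarter of the sum of the squared edge lengths:

```python
import numpy as np

def shape_tensor(x):
    """G_ij = sum_a (x_a,i - x_C,i)(x_a,j - x_C,j) for points x of shape (4, 3)."""
    d = x - x.mean(axis=0)        # displacements from the center of mass x_C
    return d.T @ d

# A regular tetrahedron (edge length 2*sqrt(2)), centered at the origin.
x = np.array([[1.0, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])

G = shape_tensor(x)
R2 = np.trace(G)                           # squared radius of gyration
g = np.sort(np.linalg.eigvalsh(G))[::-1]   # eigenvalues g1 >= g2 >= g3

# tr(G) = (1/4) * sum over particle pairs of squared edge lengths
pair_sum = sum(np.sum((x[a] - x[b])**2) for a in range(4) for b in range(a + 1, 4))
print(g, R2, pair_sum / 4)
```

For this regular tetrahedron all three eigenvalues coincide, so the shape descriptors $g_i$ only differentiate once the flow deforms the tetrahedron.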
For a regular tetrahedron, where all edges have the same length, all three eigenvalues are equal. For $g_1 \gg g_2\approx g_3$, the tetrahedron is needle-like, while $g_1\approx g_2 \gg g_3$ represents a pancake-like shape. ![(color online). Eigenvalues of the perceived rate-of-strain tensor, $\lambda_{0,i} t_0$, $(i=1,\,2,\,3)$, defined on tetrahedra with different sizes $R_0 /\eta$. Open symbols are from experiments at $R_\lambda= 690$ and $350$ and filled symbols from DNS at $R_\lambda=300$. The solid lines are the corresponding averages for $i=1$ (top), $2$ (middle), and $3$ (bottom).[]{data-label="figStrain"}](./figure2.eps){height="\picheight"} The evolution of $\mathbf{G}$ can be conveniently written in the compact form [@Pumir13] $$\frac{{\mathrm{d}}}{{\mathrm{d}}t}\mathbf{G}(t) = \mathbf{M}(t) \mathbf{G}(t) + \mathbf{G}(t) \mathbf{M}^T(t) , \label{eq:dG_dt}$$ where $\mathbf{M}(t)$ is the perceived velocity gradient tensor that describes the turbulent flow field seen by the 4 points [@CPS99; @XPB11]. The perceived velocity gradient reduces to the usual velocity gradient when the tetrahedron becomes smaller than the Kolmogorov scale, $\eta$ [@Pumir13]. We solve Eq.  for short times using a Taylor expansion around $t=0$ and taking $G_{ij}(0) = (R_0^2/2) \delta_{ij}$ as the initial condition, i.e., the tetrahedra are initially regular with edge lengths $R_0$. 
The solutions for the average size and shape are $$\begin{aligned} \langle R^2(t) \rangle & = \frac{R_0^2}{2} \bigg[3 + 2 \text{tr} \langle \mathbf{S}_0^2\rangle t^2 \nonumber\\ & \quad + 2 \text{tr}\left( \frac23 \langle \mathbf{S}_0^3 \rangle +\langle\mathbf{S}_0 \mathbf{\dot{S}}_0 \rangle \right) t^3 + {\mathcal{O}}(t^4) \bigg] \label{eqRadius}\end{aligned}$$ and $$\begin{aligned} \langle g_i \rangle &= \frac{R_0^2}{2} \bigg[1 + 2 \langle \lambda_{0,i} \rangle t \nonumber\\ & \quad + \left( 2 \langle \lambda_{0,i}^2 \rangle + \langle \mathbf{\dot{S}}_{0,ii} \rangle \right) t^2 + {\mathcal{O}}(t^3) \bigg]. \label{eqEigen}\end{aligned}$$ At the orders considered, the evolution of the tetrahedron geometry depends only on the perceived rate-of-strain tensor, $\mathbf{S}_0 = \mathbf{S}(0) = \frac12 [ \mathbf{M}(0) +\mathbf{M}(0)^T]$, whose eigenvalues, $\lambda_{0,i}$, are sorted in decreasing order ($\lambda_{0,1} \ge \lambda_{0,2} \ge \lambda_{0,3}$), and on its time-derivative, $\mathbf{\dot{S}}_0 = \frac{{\mathrm{d}}}{{\mathrm{d}}t} \mathbf{S}(t)\big|_0$. In Eq. , all terms are in fact expressed in the eigenbasis of $\mathbf{S}_0$. ![image](./figure3a.eps){height="\picheight"} ![image](./figure3b.eps){height="\picheight"} We first note that the radius of gyration, $R^2(t)$, can also be expressed as an average over the squares of the edge lengths of the tetrahedron. Thus, Eq.  must be consistent with Eq. . This implies that $\text{tr} \langle \mathbf{S}_0^2\rangle = \frac{3}{2 R_0^2} \langle \mathbf{u}(0)^2\rangle$ and $\text{tr}\big( \frac23 \langle \mathbf{S}_0^3 \rangle +\langle\mathbf{S}_0 \mathbf{\dot{S}}_0 \rangle \big) = \frac32 \left\langle \mathbf{u}(0)\cdot\mathbf{a}(0) \right\rangle$, which we explicitly confirmed with our data. Furthermore, the incompressibility of the flow imposes that $\mathbf{M}$ (and hence $\mathbf{S}$) is traceless [on average]{}, which means that $\langle \lambda_{0,1} \rangle \geq 0$ and $\langle \lambda_{0,3} \rangle \leq 0$. 
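A quick consistency check of the expansion for $\langle g_i \rangle$ is the special case of a constant diagonal strain, $\mathbf{M}(t) = \mathbf{S}_0 = \mathrm{diag}(\lambda_{0,i})$ with $\mathbf{\dot{S}}_0 = 0$: Eq. (\[eq:dG\_dt\]) then decouples into $dG_{ii}/dt = 2\lambda_{0,i} G_{ii}$, so $g_i(t) = (R_0^2/2)\,e^{2\lambda_{0,i} t}$, whose Taylor series reproduces the $t$ and $t^2$ terms above. A sympy sketch of this limiting case:

```python
import sympy as sp

t, R0, lam = sp.symbols('t R_0 lambda_i', positive=True)

# With constant diagonal strain M = diag(lambda_i) and Sdot = 0, Eq. (dG/dt)
# gives dG_ii/dt = 2*lambda_i*G_ii with G_ii(0) = R0**2/2, hence:
g_i = (R0**2 / 2) * sp.exp(2 * lam * t)

series = sp.series(g_i, t, 0, 3).removeO()
expected = (R0**2 / 2) * (1 + 2 * lam * t + 2 * lam**2 * t**2)
print(sp.expand(series - expected))   # 0
```

This is only the frozen-strain limit; in the turbulent flow $\mathbf{S}$ fluctuates in time, which is where the $\langle \mathbf{\dot{S}}_{0,ii} \rangle$ term in the $t^2$ coefficient comes from.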
The generation of small scales by turbulent flows, which plays a key role in the energy cascade, implies that the intermediate eigenvalue of the rate of strain tensor is positive [@Betchov56]. This property also applies to the [perceived]{} velocity gradient tensor in the inertial range [@Pumir13] (Fig. \[figStrain\]). Remarkably, our data suggest that $\langle \lambda_{0,i} \rangle t_0 \approx \text{const}$ over the range of Reynolds numbers and inertial scales covered here. For initially regular tetrahedra of edge length $R_0$, Eq.  predicts that $\langle g_i (t) \rangle = \frac12 R_0^2$ at $t=0$ and that $\langle g_i (t) \rangle$ grows [linearly]{} as $R_0^2 \langle \lambda_{0,i} \rangle t$ for small $t$. The tetrahedra obtained experimentally and numerically at $R_\lambda=300$, however, are not strictly regular, but correspond to a set of 4 points whose relative distances are equal to within a fixed relative tolerance in the range $2.5 - 10 \%$. Fig. \[figShape\](a) shows that the linear behavior predicted by Eq.  is observed when the tetrahedra are regular, as obtained using the Johns Hopkins University database [@li2008; @Y12] ($R_\lambda = 430$), or when the tolerance is reduced. The time asymmetry in this shape evolution, seen from the eigenvalues of $\mathbf{G}$ in Fig. \[figShape\], originates from the positive value of $\langle \lambda_{0,2} \rangle $. For regular tetrahedra, Eq. shows that in the eigenbasis of $\mathbf{S}_0$, the largest eigenvalue of $\mathbf{G}$ is $g_1$ for $t > 0$, and $g_3$ for $t < 0$. The difference between the largest eigenvalues at $t > 0$ (forwards in time) and at $t<0$ (backwards in time) is thus $R_0^2 \langle (\lambda_{0,1} + \lambda_{0,3}) t \rangle = - R_0^2 \langle \lambda_{0,2} t \rangle$. 
In fact, the difference between the backward and forward growth rates of the intermediate eigenvalue, $\langle g_2 \rangle$, shows an even stronger asymmetry: $$\langle g_{2}(t) - g_{2}(-t) \rangle /[R_0^2 (t/t_0)] = 2 \langle \lambda_{0,2} \rangle t_0 + {\mathcal{O}}(t^2). \label{eq:g2diff}$$ The expected plateau of $2 \langle \lambda_{0,2} \rangle t_0$ is seen in Fig. \[figShape\](b) when the tetrads are regular, or when the tolerance on the initial edge lengths is reduced. In summary, we have shown that the relative motion between several Lagrangian particles reveals the fundamental irreversibility of turbulent flows. At short times, the time asymmetry of two-particle dispersion grows as $t^3$, which is deduced from an identity derived from the Navier-Stokes equations in the large $R_\lambda$ limit that expresses the existence of a downscale energy cascade. Our study, however, leaves open the question of the existence of two different constants governing the dispersion forwards and backwards in time in the Richardson regime [@SYB05; @B06]. A stronger manifestation of the time asymmetry, $\propto t$, was observed by studying the shape deformation of sets of four points. This asymmetry can be understood from another fundamental property of turbulence, namely the existence of a positive intermediate eigenvalue of the rate-of-strain tensor [@Betchov56; @Pumir13]. Thus, remarkably, the manifestations of irreversibility are related to fundamental properties of the turbulent flow field. The time-symmetry breaking revealed by multi-particle statistics is a direct consequence of the energy flux through spatial scales (see also [@FF13]). The very recently observed manifestation of irreversibility [@XPFB14] when following only a single fluid particle, where an intrinsic length scale is lacking, thus presents an interesting challenge to extend the analysis presented here. 
We expect that further insights into the physics of turbulence can be gained by analyzing the motion of tracer particles. Appendix ======== The measurement volume in our experiment is finite and particles are thus only tracked for a finite time. The larger the relative velocity between two particles, $| \mathbf{u}(0)|$, the shorter they reside in the measurement volume [@B06; @LBOM07]. The experimentally measured mean squared displacement, $\langle \delta \mathbf{R}^2(t) \rangle_m$, determined by particle pairs which could be tracked up to time $t$, is smaller than the true value $\langle \delta \mathbf{R}^2(t) \rangle$ (see Fig. \[figBiasSketch\]). To quantitatively analyze this effect, we parametrize the bias in $\langle \delta \mathbf{R}^2(t) \rangle_m$ due to the loss of particles with large relative motions by generalizing Eq.  to $$\frac{\langle \delta \mathbf{R}(t)^2\rangle_m}{R_0^2} = \frac{\langle \mathbf{u}(0)^2\rangle}{({\epsilon}R_0)^{2/3}} f_1(t) \Bigl( \frac{t}{t_0} \Bigr)^2 - 2 f_2(t) \Bigl(\frac{t}{t_0} \Bigr)^3 + {\mathcal{O}}(t^4). \label{eqBias1}$$ In Eq. 
, the functions $f_1(t)$ and $f_2(t)$ express that the relative velocities of particles staying in the measurement volume for a time $t$ are [*smaller*]{} than those of all particle pairs (see Fig. \[figBiasSketch\]). From our experimental data, we find that $f_i(t)>0.9$ for $t/t_0 <0.2$. Additionally, we restrict ourselves to particle pairs that can be tracked in the interval $[-t , t]$, ensuring that $f_i(t) = f_i(-t)$. We thus find that the time asymmetry between backward and forward dispersion is $$\frac{\langle \delta \mathbf{R}(-t)^2- \delta \mathbf{R}(t)^2\rangle_m}{R_0^2} =4 f_2(t) \Bigl( \frac{t}{t_0} \Bigr)^3 +{\mathcal{O}}(t^5). \label{eqBias2}$$ The bias in Eq.  is due only to the $f_2(t)$ term, and not to the leading term in Eq. . Over the short time interval where Fig. \[figThirdOrder\] shows a plateau, the error due to $f_2(t)$ is smaller than $\sim 10\%$. ![(color online). The blue curve shows the ensemble average for an infinite volume, the red curve the average over a time dependent ensemble for a finite volume. Black curves show examples of single events from these ensembles, with the dashed part not accessible in the case of a finite measurement volume.[]{data-label="figBiasSketch"}](./figure4.eps){width="46.00000%"}
--- abstract: 'The discovery of dynamic memory effects in the magnetization decays of spin glasses in 1983 marked a turning point in the study of the highly disordered spin glass state. Detailed studies of the memory effects have led to much progress in understanding the qualitative features of the phase space. Even so, the exact nature of the magnetization decay functions has remained elusive, causing confusion. In this letter, we report strong evidence that the Thermoremanent Magnetization (TRM) decays scale with the waiting time, $t_{w}$. By employing a series of cooling protocols, we demonstrate that the rate at which the sample is cooled to the measuring temperature plays a major role in the determination of scaling. As the effective cooling time, $t_{c}^{eff}$, decreases, $\frac {t}{t_{w}}$ scaling improves and for $t_{c}^{eff}<20s$ we find almost perfect $\frac{t}{t_{w}}$ scaling, i.e. full aging.' author: - 'G. F. Rodriguez' - 'G. G. Kenning' - 'R. Orbach' title: Full Aging in Spin Glasses --- Since the discovery of aging effects in spin glasses approximately twenty years ago[@Cham83][@Lund83], much effort has gone into determining the exact time dependence of the memory decay functions. In particular, memory effects show up in the Thermoremanent Magnetization (TRM) (or complementary Zero-Field Cooled (ZFC) magnetization), where the sample is cooled through its spin glass transition temperature in a small magnetic field (zero field) and held in that particular field and temperature configuration for a waiting time, $t_{w}$. At time $t_{w}$, a change in the magnetic field produces a very long time decay in the magnetization. The decay is dependent on the waiting time. Hence, the system has a memory of the time it spent in the magnetic field. A rather persuasive argument[@Bou92] suggests that for systems with infinite equilibration times, the decays must scale with the only relevant time scale in the experiment, $t_{w}$. 
This would imply that plotting the magnetization on a t/$t_{w}$ axis would collapse the different waiting time curves onto each other. This effect has not been observed. What has been observed[@Alba86] is that the experimentally determined magnetization decays will scale with a modified waiting time, $(t_{w})^\mu$, where $\mu$ is a fitting parameter. For $\mu<1$ the system is said to have subaged. A $\mu>1$ is called superaging and $\mu=1$ corresponds to full aging. For TRM experiments a $\mu$ of approximately 0.9 is found for different types of spin glasses[@Alba86; @Ocio85], over a wide range of reduced temperatures, indicating subaging. At very low temperatures and at temperatures approaching the transition temperature, $\mu$ is observed to decrease from the usual 0.9 value. Superaging has been observed in Monte Carlo simulations of spin glasses[@Sibani]. This has led to confusion as to the exact nature of scaling. Zotev et al.[@Zov02] have suggested that the departures from full $\frac{t}{t_{w}}$ scaling, observed in aging experiments, are mainly due to cooling effects. In a real experimental environment, the situation is complicated by the time it takes for the sample to cool to its measuring temperature. An effect due to the cooling rate at which the sample temperature approaches the measuring temperature has been known[@Nord87; @Nord00]. This effect is not trivial: it does not contribute a constant time to $t_{w}$. Another possible explanation for the deviation from full aging comes from the widely held belief that the magnetization decay is an additive combination of a stationary term ($M_{Stat} = A (\tau_{0} / t)^{\alpha}$) and an aging term ($M = f(\frac{t}{t_{w}})$)[@Cou95; @Bou95; @Vin96]. Subtraction of a stationary term, where $\tau_{0}$ is a microscopic spin flipping time, A is a dimensionless constant and $\alpha$ is a parameter determined from $\chi''$ measurements, was shown to increase $\mu$ from 0.9 to 0.97[@Vin96]. 
In this letter we analyze effects of the cooling time through a series of different cooling protocols and we present the first clear and unambiguous experimental evidence that the TRM decays scale as $\frac{t}{t_{w}}$ (i.e. full aging). Three different methods have been regularly employed to understand the scaling of the TRM decays. The first and simplest is to scale the time axis of the magnetization decay with the time the sample has spent in a magnetic field (i.e. $\frac{t}{t_{w}}$)[@Bou92]. If the decays scale as a function of waiting time, it would be expected that the decay curves would overlap. This has not yet been observed. A second, more sophisticated method was initially developed by Struik[@Stu79] for scaling the dynamic mechanical response in glassy polymers and first applied to spin glasses by Ocio et al.[@Ocio85]. This method plots the log of the reduced magnetization $M/M_{fc}$ ($M_{fc}$ is the field cooled magnetization) against an effective waiting time $\xi=\frac{\lambda}{t_{w}^{\mu}}$ where $$\begin{aligned} \lambda = \frac{t_{w}}{1-\mu}[(1+\frac{t}{t_{w}})^{1-\mu} - 1];~~~~\mu < 1 \label{eq:one}\end{aligned}$$ or $$\begin{aligned} \lambda = t_{w} \log[1+\frac{t}{t_{w}}];~~~~~~~~~~~~~~~~\mu = 1 \label{eq:two}\end{aligned}$$ A value of $\mu$ = 1 would correspond to perfect $t/t_{w}$ scaling. Previous values of $\mu$ obtained on the decays using this method have varied from .7 to .94[@Alba86] for a temperature range $.2<T_{r}<.95$. A value of $\mu<1$ is called subaging. Finally, the position of the peak in the function S(t) $$\begin{aligned} S(t)=-\frac{1}{H}\frac{dM(t)}{d[Log_{10}(t)]} \label{eq:three}\end{aligned}$$ has been shown to be an approximately linear function of the waiting time[@Lund83]. This peak occurs at a time slightly larger than the waiting time, again suggesting possible subaging. In this study we use all three of the above scaling procedures to analyze the data we have produced with different cooling protocols. 
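The three scaling procedures above reduce to a few lines of numerical code. The sketch below is ours, not the authors' analysis code (function names and defaults are our own); it evaluates the effective waiting time $\lambda$ of Eqs. (\ref{eq:one})-(\ref{eq:two}) and the relaxation rate $S(t)$ of Eq. (\ref{eq:three}) from a sampled decay curve.

```python
import numpy as np

def effective_time(t, t_w, mu):
    """Struik/Ocio effective waiting time lambda, Eqs. (1)-(2)."""
    if mu < 1.0:
        return t_w / (1.0 - mu) * ((1.0 + t / t_w) ** (1.0 - mu) - 1.0)
    return t_w * np.log(1.0 + t / t_w)      # the mu = 1 (full aging) limit

def relaxation_rate(t, M, H=20.0):
    """S(t) = -(1/H) dM/d[log10(t)], Eq. (3), by finite differences."""
    return -np.gradient(M, np.log10(t)) / H
```

Note that the $\mu<1$ branch tends continuously to the logarithmic $\mu=1$ form as $\mu\to 1$, so a single fitted $\mu$ can be scanned smoothly across full aging, and the peak position of `relaxation_rate` provides the effective time associated with a given protocol.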
All measurements in this letter were performed on our homebuilt DC SQUID magnetometer with a $Cu_{.94}Mn_{.06}$ sample. The sample is well documented[@Ken91] and has been used in many other studies. The measurements described in this letter were performed in the following manner: The sample was cooled, in a magnetic field of 20 G, from 35 K through its transition temperature of 31.5 K to a measuring temperature of 26 K. This corresponds to a reduced temperature of .83 $T_{g}$. The sample was held at this temperature for a waiting time $t_{w}$, after which time the magnetic field was rapidly decreased to 0 G. The resulting magnetization decay is measured from 1s after field cutoff to a time greater than or equal to 5$t_{w}$. The only parameters we have varied in this study are $t_{w}$ and the rate and profile at which we cool the sample through the transition temperature to the measuring temperature. The sample is located on the end of a sapphire rod and sits in the upper coil of a second order gradiometer configuration. The temperature measuring thermometer is located 12.5 cm above the sample. Heat is applied to the sample through a heater coil located on the same sapphire rod 17 cm above the sample. Sample cooling occurs by heat transfer with the He bath via a constant amount of He exchange gas, which was previously introduced into each chamber of the double vacuum jacket. We have measured the decay time of our field coil and find that we can quench the field in less than 0.1 ms. We have also determined that without a sample, our system has a small reproducible exponential decay that decays to a constant value less than the system noise within 400 seconds. In order to accurately describe our data we subtract this system decay from all of the data. In this paper we present TRM data for eight waiting times ($t_{w}$= 50s, 100s, 300s, 630s, 1000s, 3600s, 6310s, and 10000s). The same TRM experiments were performed for six different cooling protocols. 
In this paper we use four of the cooling protocols; Figure 1 (top row) is a plot of temperature vs. time for these four protocols. These different cooling protocols were achieved by varying applications of heat and by varying the amount of exchange gas in the vacuum jackets. A more detailed description of the cooling protocols will be given in a followup publication. In Figure 1 (bottom row) we plot S(t) (Eq. \ref{eq:three}) of the ZTRM protocol (i.e. zero waiting time TRM) in order to characterize a time associated with the cooling protocol, $t_{c}^{eff}$. As observed in Figure 1, we have achieved effective cooling times ranging from 406s down to 19 seconds. These times can be compared with commercial magnetometers, which have cooling times in the range of 100-400s. In Figure 2, we plot the data for the TRM decays (first column) with the four cooling protocols. It should be noted that the magnetization (y-axis) is scaled by the field cooled magnetization. The second column is the same data as column one, with the time axis (x-axis) normalized by $t_{w}$. It can be observed for $\frac{t}{t_{w}}$ scaling (column 2) that as the effective cooling time decreases, the spread in the decays decreases, giving almost perfect $\frac{t}{t_{w}}$ scaling for the 19 second cooling protocol. The last column in Figure 2 is the data scaled using the $\mu$ scaling described previously. It has long been known that the rate of cooling affects $\mu$ scaling and that $\mu$ scaling is only valid in the limit $t_{w}>>t_{c}^{eff}$. We find this to be true and that the limit is much more rigorous than previously believed. To determine $\mu$ scaling, we focused on applying this scaling to the longest waiting time data (i.e., $t_{w}$ = 3600s, 6310s and 10,000s). For the largest effective cooling time data, $t_{c}^{eff}=406s$, we find that we can fit the longest waiting time data with a $\mu$ value of .88. 
This is consistent with previously reported values of $\mu$[@Alba86]. We do find, however, that TRM data with waiting times less than 3600s do not fit on the scaling curve. We find that scaling of the three longest waiting time decays produces $\mu$ values which increase as $t_{c}^{eff}$ decreases. We also find that as $t_{c}^{eff}$ decreases, the data with shorter $t_{w}$ begin to fit the scaling better. It can be observed that at $t_{c}^{eff}$= 19s we obtain almost perfect scaling for all of the data with a value of $\mu=.999$. However, we find we can reasonably fit the data to a range of $\mu$ between .989 and 1.001. The fitting for the large $t_{w}$ decay curves, $t_{w}$ = 3600s, 6310s and 10,000s, is very good. Small systematic deviations, as a function of $t_{w}$, occur for $t_{w}<3600s$ with the largest deviations for $t_{w}$= 50s. Even with an effective cooling time two orders of magnitude less than the waiting time, one sees deviations from perfect scaling. We have also scaled the data using Eq. 2. We find no noticeable difference between the quality of this fit and that of the $\mu=.999$ fit for $t_{c}^{eff}=19s$ shown in Figure 2. Data with longer cooling times cannot be fit with Eq. 2. We therefore conclude that full aging is observed for the long $t_{w}$ data using the $t_{c}^{eff}=19s$ protocol. It is clear from Figure 1 (bottom row) that the effect of the cooling time has implications for the decay all the way up to the longest time measured, 10,000 seconds. The form of the S(t) of the ZTRM is very broad. The S(t) function is often thought of as corresponding to a distribution of time scales (or barrier heights) within a system which has infinite equilibration times (barriers). The peak in S(t) is generally associated with the time scale (barrier height) probed in time $t_{w}$. In Figure 1, we observe that for the larger effective cooling times, the waiting times correspond to points on or near the peak of S(t) for the ZTRM. 
We therefore believe that for the larger effective cooling times there is significant contamination from the cooling protocol over the entire region of $t_{w}$s used in this paper. Only for the $t_{c}^{eff}=19s$ cooling protocol do we find that the majority of $t_{w}$s occur far away from the peak in S(t). All the data in Figure 3 used the cooling protocol with $t_{c}^{eff} = 19s$. In Figure 3a we plot the S(t) (ZTRM) for $t_{c}^{eff}=19s$ with arrows to indicate the waiting times for the TRM measurements. It can be observed that after approximately 1000 seconds the slope of the S(t) function decreases, possibly approaching a horizontal curve, which would correspond to a pure logarithmic decay in M(t). If, on the other hand, the slope is continuously changing, this part of the decay may be described by a weak power law. Either way, this region would correspond to aging within a pure non-equilibrated state. We believe that the long waiting time data occurs outside the time regime that has been corrupted by the cooling time and that this is the reason that we have, for the first time, observed full aging. It has been suggested that subtraction of a stationary component of the magnetization decay will improve scaling[@Vin96]. The very long time magnetization decay is believed to consist of a stationary term that is thought to decay as a power law. We fit a power law, $M(t)=A(\tau_o/t)^\alpha$, to the long time decay (1000s-5000s) of the ZTRM for the $t_{c}^{eff}=19s$ protocol. Using $\tau_{o}=10^{-12}s$, we find $\alpha=.07$ and $A=.27$. Subtracting this power law form from the magnetization decay destroys scaling. We find that the subtraction of a much smaller power law term with A=.06 and $\alpha$=.02 slightly improves scaling at both short and long times. While the $\alpha$ values for the two different power law terms we have fit to are quite different, both values fall within the range determined from the decay of $\chi''$[@Vin96]. 
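The stationary-term subtraction tested above can be written out explicitly. The sketch below is our illustration, not the authors' code; the defaults are the smaller power-law term quoted in the text ($A=.06$, $\alpha=.02$, $\tau_{0}=10^{-12}$ s).

```python
TAU0 = 1e-12   # microscopic spin flipping time tau_0, in seconds

def stationary_term(t, A, alpha):
    """Stationary contribution M_stat = A * (tau_0 / t)**alpha."""
    return A * (TAU0 / t) ** alpha

def subtract_stationary(t, M_over_Mfc, A=0.06, alpha=0.02):
    """Reduced magnetization with the power-law stationary term removed,
    to be re-examined for t/t_w scaling."""
    return M_over_Mfc - stationary_term(t, A, alpha)
```

Because $\alpha$ is small, the subtracted term varies only weakly over the measured decade range, which is why it shifts the scaling quality without changing the overall shape of the decay.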
In Figure 3(c-d) and 3(e-f), we plot the two different types of scaling we have performed, with and without the subtraction of the weaker power law term. We find that even for $t_{c}^{eff}=19s$ the peaks in S(t) for $t_{w}>1000s$ occur at times larger than $t_{w}$ (Fig. 3b). We find that we can fit the effective time associated with the peak in S(t) to $t_{w}^{eff}=t_w^{1.1}$. In summary, we have performed TRM decays over a wide range of waiting times (50s - 10,000s) for six different cooling protocols. We find that as the time associated with cooling decreases, scaling of the TRM curves improves in the $\frac{t}{t_{w}}$ scaling regime and in the $\mu$ scaling regime. In $\mu$ scaling we find that as the effective cooling time decreases, $\mu$ increases, approaching a value of .999 for $t_{c}^{eff}=19s$. For the $t_{c}^{eff}=19s$ TRM decays, we find that subtraction of a small power law term (A=.06, $\alpha$=.02) slightly improves the scaling. It is, however, likely that the small systematic deviations of the $t_{c}^{eff}=19s$ data as a function of $t_{w}$ are associated with the small but finite cooling rate. The authors would like to thank V. S. Zotev, E. Vincent and J. M. Hammann for very helpful discussions. [99]{} R. V. Chamberlin, M. Hardiman and R. Orbach, J. Appl. Phys. **52**, 1771 (1983). L. Lundgren, P. Svedlindh, P. Nordblad and O. Beckman, Phys. Rev. Lett. **51**, 911 (1983); L. Lundgren, P. Svedlindh, P. Nordblad and O. Beckman, J. Appl. Phys. **57**, 3371 (1985). J. P. Bouchaud, J. Phys. I (Paris) **2**, 1705 (1992). M. Alba, M. Ocio and J. Hammann, Europhys. Lett. **2**, 45 (1986); M. Alba, J. Hammann, M. Ocio, Ph. Refregier and H. Bouchiat, J. Appl. Phys. **61**, 3683 (1987). M. Ocio, M. Alba, J. Hammann, J. Phys. (Paris) Lett. **46**, 1101 (1985). P. Sibani, private communication. V. S. Zotev, G. F. Rodriguez, G. G. Kenning, R. Orbach, E. Vincent and J. Hammann, cond-mat/0202269, to be published in Phys. Rev. B. P. Nordblad, P. Svedlindh, L. 
Sandlund and L. Lundgren, Phys. Lett. A **120**, 475 (1987). K. Jonason, P. Nordblad, Physica B **279**, 334 (2000). L. F. Cugliandolo and J. Kurchan, J. Phys , 5749 (1994). J. P. Bouchaud and D. S. Dean, J. Phys. I (Paris) **5**, 265 (1995). E. Vincent, J. Hammann, M. Ocio, J. P. Bouchaud and L. F. Cugliandolo, *Slow dynamics and aging in spin glasses. Complex behaviour of glassy systems*, ed. M. Rubi, Sitges conference; can be retrieved as cond-mat/9607224. L. C. E. Struik, *Physical Ageing in Amorphous Polymers and Other Materials*, Elsevier Sci. Pub. Co., Amsterdam, 1978. G. G. Kenning, D. Chu, R. Orbach, Phys. Rev. Lett. **66**, 2933 (1991).
--- abstract: 'We present an atomistic self-consistent study of the electronic and transport properties of semiconducting carbon nanotube in contact with metal electrodes of different work functions, which shows simultaneous electron and hole doping inside the nanotube junction through contact-induced charge transfer. We find that the band lineup in the nanotube bulk region is determined by the effective work function difference between the nanotube channel and source/drain electrodes, while electron transmission through the SWNT junction is affected by the local band structure modulation at the two metal-nanotube interfaces, leading to an effective decoupling of interface and bulk effects in electron transport through nanotube junction devices.' author: - 'Yongqiang Xue$^{1,*}$ and Mark A. Ratner$^{2}$' title: 'Electron transport in semiconducting carbon nanotubes with hetero-metallic contacts' --- Devices based on single-wall carbon nanotubes (SWNTs) [@Dekker; @DeMc] have been progressing at a fast pace; e.g., the performance of carbon nanotube field-effect transistors (NTFET) is approaching that of the state-of-the-art silicon Metal-Oxide-Semiconductor field-effect transistors (MOSFET). [@AvFET; @DaiFET; @McFET] But a general consensus on the physical mechanisms and theoretical models has yet to emerge. A point of continuing controversy in NTFET has been the effect of Schottky barriers at the metal-SWNT interface. [@Barrier; @AvFET1] Since SWNTs are atomic-scale nanostructures in both the axial and the circumferential dimensions, any barrier that may form at the interface has a finite thickness and a finite width. [@AvFET; @XueNT; @AvFET2] In general a microscopic treatment of both the source/drain and gate field modulation effects will be needed to faithfully account for the atomistic nature of the electronic processes in NTFET. 
Since the characteristics of the NTFETs depend sensitively on the gate geometry, [@AvFET] a thorough understanding of the Schottky barrier effect in the simpler two-terminal metal-SWNT-metal junction devices is essential in elucidating the switching effect caused by applying a finite gate voltage. [@XueNT] As a basic device building block, the two-terminal device is also of interest for applications in electromechanical and electrochemical sensors, where the conduction properties of the SWNT junctions are modulated by mechanical strain [@NTMe] or molecular adsorption, respectively. [@NTCh] Previous works have considered symmetric SWNT junctions with different contact geometries. [@XueNT] Here we consider SWNT in contact with metallic electrodes of different work functions. Such hetero-metallic junctions are of interest since: (1) The electrode work function difference leads to a contact potential and a finite electric field (built-in field) across the junction at equilibrium. A self-consistent analysis of the hetero-metallic junction can shed light on the screening of the applied field by the SWNT channel and the corresponding band bending effect even at zero bias; (2) For SWNTs not intentionally doped, electron and hole doping can be induced simultaneously inside the channel by contacting with high and low work function metals; (3) Since the metallurgy of the metal-SWNT contact is different at the two interfaces, the asymmetric device structure may facilitate separating interface effects on electron transport from the intrinsic properties of the SWNT channel. The hetero-metallic SWNT junction is shown schematically in Fig. \[xueFig1\], where the ends of an infinitely long SWNT wire are buried inside two semi-infinite metallic electrodes with different work functions. The embedded contact scheme is favorable for the formation of low-resistance contacts. 
For simplicity, we assume the embedded parts of the SWNT are surrounded entirely by the metals with overall cylindrical symmetry around the SWNT axis. [@Note1] For comparison with previous work on symmetric SWNT junctions, we investigate $(10,0)$ SWNTs (with work function of $4.5$ eV [@Dekker]) in contact with gold (Au) and titanium (Ti) electrodes (with work functions of $5.1$ and $4.33$ eV respectively for polycrystalline materials [@CRC]). Choosing the electrostatic potential energy in the middle of the junction and far away from the cylindrical surface of the SWNT as the energy reference, the Fermi-level of the Au-SWNT-Ti junction is the negative of the average of the metal work functions, $E_{F}=-4.715$ eV. The SWNT channel lengths investigated range from $L=2.0, 4.1, 8.4, 12.6, 16.9$ nm to $21.2$ nm, corresponding to $5, 10, 20, 30, 40$ and $50$ unit cells respectively. We calculate the transport characteristics within the coherent transport regime, as appropriate for such short nanotubes. [@Phonon] Using a Green’s function based self-consistent tight-binding (SCTB) theory, we analyze the Schottky barrier effect by examining the electrostatics, the band lineup and the transport characteristics of the hetero-metallic SWNT junction as a function of the SWNT channel length. The SCTB model is essentially the semi-empirical implementation of the self-consistent Matrix Green’s function method for *ab initio* modeling of molecular-scale devices, [@XueMol] which fully takes into account the three-dimensional electrostatics and the atomic-scale electronic structure of the SWNT junctions and has been described in detail elsewhere. [@XueNT; @Note2] The SCTB model starts with the semi-empirical Hamiltonian $H_{0}$ of the bare $(10,0)$ SWNT wire using the Extended Huckel Theory (EHT) with non-orthogonal ($sp$) basis sets $\phi_{m}(\vec r)$. 
[@Hoffmann88] We describe the interaction between the SWNT channel and the rest of the junction using matrix self-energy operators and calculate the density matrix $\rho_{ij}$ and therefore the electron density of the equilibrium SWNT junction from $$\begin{aligned} \label{GE} G^{R} &=& \{ (E+i0^{+})S-H-\Sigma_{L}(E)-\Sigma_{L;NT}(E) -\Sigma_{R}(E)-\Sigma_{R;NT}(E)\}^{-1}, \\ \rho &=& \int \frac{dE}{2\pi }Imag[G^{R}](E)f(E-E_{F}).\end{aligned}$$ Here $S$ is the overlap matrix and $f(E-E_{F})$ is the Fermi distribution in the electrodes. Compared to the symmetric SWNT junctions, here the Hamiltonian describing the SWNT channel $H=H_{0}+\delta V[\delta \rho]+V_{ext}$ includes the contact potential $V_{ext}$ (taken as a linear voltage ramp here) in addition to the charge-transfer induced electrostatic potential change $\delta V$ ($\delta \rho$ is the density of transferred charge). The calculated charge transfer per atom and electrostatic potential change along the cylindrical surface of the SWNT for the Au-SWNT-Ti junction are shown in Fig. \[xueFig2\]. Previous works [@XueNT] have shown that by contacting to the high (low)-work function metal Au (Ti), hole (electron) doping is induced inside the SWNT channel. Here we find simultaneous electron and hole doping inside the SWNT channel for the hetero-metallic Au-SWNT-Ti junction (lower figure in Fig. \[xueFig2\](a)). Since the magnitude of hole doping inside the Au-SWNT-Au junction ($\approx -5.6 \times 10^{-4}$/atom) is much larger than that of the electron doping inside the Ti-SWNT-Ti junction ($\approx 3 \times 10^{-5}$/atom) due to the larger work function difference, the majority of the channel remains hole-doped inside the Au-SWNT-Ti junction. Due to the localized nature of interface bonding, the charge transfer pattern immediately adjacent to the Au(Ti)-SWNT interface remains similar to that of the Au-SWNT-Au (Ti-SWNT-Ti) junction both in magnitude and shape. 
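The equilibrium Green's-function construction of Eq. (\ref{GE}) can be sketched numerically. The toy model below is ours, not the paper's SCTB/EHT Hamiltonian: a 6-site chain with energy-independent wide-band self-energies on the contact sites. We also use the common $-1/\pi$ normalization for the density, which differs from the paper's $1/2\pi$ convention by spin/normalization factors.

```python
import numpy as np

def equilibrium_density(H, sigma_L, sigma_R, E_F, kT=0.025):
    """Density matrix rho = -(1/pi) Int dE Im[G^R](E) f(E - E_F).

    One common normalization; the paper's Eq. (1)-(2) absorbs spin and
    2*pi factors differently."""
    N = H.shape[0]
    S = np.eye(N)                          # orthogonal basis in this toy model
    E_grid = np.linspace(-8.0, 4.0, 4000)
    dE = E_grid[1] - E_grid[0]
    rho = np.zeros((N, N))
    for E in E_grid:
        GR = np.linalg.inv((E + 1e-6j) * S - H - sigma_L(E) - sigma_R(E))
        f = 1.0 / (1.0 + np.exp(np.clip((E - E_F) / kT, -60.0, 60.0)))
        rho += (-dE / np.pi) * GR.imag * f
    return rho

# Toy channel: 6-site chain (hopping t = 1) with wide-band leads on the ends.
n, gamma = 6, 0.5
H = -(np.eye(n, k=1) + np.eye(n, k=-1))

def wide_band(site):
    """Energy-independent self-energy -i*gamma/2 on a single contact site."""
    def sigma(E):
        m = np.zeros((n, n), complex)
        m[site, site] = -0.5j * gamma
        return m
    return sigma

rho = equilibrium_density(H, wide_band(0), wide_band(n - 1), E_F=0.0)
```

At $E_F=0$ the particle-hole symmetric chain is half filled, so the trace of `rho` comes out close to $n/2$; the small deficit is the Lorentzian tail lost to the finite energy grid.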
The short-wavelength oscillation in the transferred charge inside the SWNT channel reflects the atomic-scale variation of charge density within the unit cell of the SWNT. [@XueNT; @Tersoff02] The contact-induced doping affects the transport characteristics by modulating the electrostatic potential profile along the SWNT junction. We find that inside the SWNT channel, the built-in electric field is screened effectively by the delocalized $\pi$-electrons of carbon. So the net electrostatic potential change along the cylindrical surface ($V_{ext}+\delta V[\delta \rho]$) is much flatter than the linear voltage ramp denoting the contact potential, except close to the metal-SWNT interface (lower figure of Fig. \[xueFig2\](b)), where its shape remains qualitatively similar to that at the Au (Ti)-SWNT interface of the Au-SWNT-Au (Ti-SWNT-Ti) junction. Due to the confined cylindrical structure of the SWNT channel, the charge-transfer induced electrostatic potential change $\delta V$ decays rapidly in the direction perpendicular to the SWNT axis. This has led to a different physical picture of band bending in symmetric SWNT junctions. [@XueNT] In particular, the band lineup inside the SWNT channel has been found to depend mainly on the metal work function, while interaction across the metal-SWNT interface modulates the band structure close to the interface without affecting the band lineup scheme in the middle of the channel. A similar physical picture applies to the hetero-metallic SWNT junction, where we find that the band lineup in the middle of the Au-SWNT-Ti junction is essentially identical to that of the SWNT junction with symmetric contact to metals with work function of $4.715$ eV. This is examined through the local-density-of-states (LDOS) of the SWNT channel as a function of position along the SWNT axis in Figs. \[xueFig3\] and \[xueFig4\]. 
The coupling across the metal-SWNT interface and the corresponding strong local field variation immediately adjacent to the Ti-SWNT interface have a strong effect on the SWNT band structure there, which extends to $\sim 4$ nm away from the interface (Fig. \[xueFig3\](a)). The band structure modulation at the Au side is weaker. For the 40-unit cell (16.9 nm) SWNT, the band structure in the middle of the SWNT junction remains essentially unaffected. This is shown in Fig. \[xueFig4\], where we compare the LDOS of the Au-SWNT-Ti junction at the left end, the right end and the middle of the SWNT channel with the corresponding LDOS of the Au-SWNT-Au junction, the Ti-SWNT-Ti junction and the bulk (infinitely long) SWNT wire respectively. Since the magnitude of the built-in electric field is smaller than the charge-transfer induced local field at the metal-SWNT interface, the LDOS at the two ends of the SWNT channel remains qualitatively similar to that of the symmetric SWNT junctions (Figs. \[xueFig4\](a) and \[xueFig4\](c)). Note that the LDOS plotted here has been energetically shifted so that the SWNT bands in the middle of the hetero-metallic junction line up with those of the symmetric SWNT junctions. 
The above separation of the band lineup scheme at the interface and in the interior of the SWNT junction implies that in NTFETs, the gate segments controlling the device interiors affect the device operation through effective modulation of the work function difference between the source/drain electrodes and the bulk portion of the SWNT channel (applying a finite gate voltage to the SWNT bulk leads to an effective modulation of its work function relative to the source/drain electrodes). The gate segments at the metal-SWNT interfaces, in contrast, affect the device operation by controlling charge injection into the device interior through local modulation of the SWNT band structure and of the Schottky barrier shape, including its height, width and thickness, in agreement with a recent lateral scaling analysis of the gate-modulation effect [@AvFET3] and with the interface chemical treatment effect in Schottky barrier NTFETs. [@MolNT] Note that since the band structure modulation at the metal-SWNT interface can extend up to $\sim 4$ nm into the interior of the SWNT junction, it may be readily resolved using scanning nanoprobe techniques. [@NTSTM] The Schottky barrier effect at the metal-SWNT interface can also be analyzed through the length-dependent conductance and current-voltage (I-V) characteristics of the Au-SWNT-Ti junction, which are calculated using the Landauer formula [@XueMol] $G=\frac{2e^{2}}{h}\int dE T(E)[-\frac{df}{dE}(E-E_{F})] =G_{Tu}+G_{Th}$ and $I=\int_{-\infty}^{+\infty}dE \frac{2e}{h}T(E,V) [f(E-(E_{F}+eV/2))-f(E-(E_{F}-eV/2))]=I_{Tu}+I_{Th}$ and separated into tunneling and thermal-activation contributions as $G_{Tu}=\frac{2e^{2}}{h}T(E_{F}), G_{Th}=G-G_{Tu}$ and $I_{Tu}=\frac{2e}{h}\int_{E_{F}-eV/2}^{E_{F}+eV/2} T(E,V)dE, I_{Th}=I-I_{Tu}$. [@XueNT; @XueMol03] In general the transmission function is voltage-dependent due to the self-consistent screening of the source-drain field by the SWNT channel at each bias voltage. 
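The Landauer decomposition into tunneling and thermal-activation contributions can be sketched for an arbitrary transmission function; the minimal implementation below is ours (constants in eV and siemens), not the paper's SCTB transmission calculation.

```python
import numpy as np

KB = 8.617e-5     # Boltzmann constant in eV/K
G0 = 7.748e-5     # conductance quantum 2e^2/h in siemens

def conductance(T_of_E, E_F, temp_K):
    """Landauer conductance G = (2e^2/h) Int dE T(E) (-df/dE),
    split into a tunneling part G_Tu = (2e^2/h) T(E_F) and a
    thermal-activation part G_Th = G - G_Tu."""
    kT = KB * temp_K
    E = np.linspace(E_F - 20.0 * kT, E_F + 20.0 * kT, 20001)
    x = (E - E_F) / kT
    minus_dfdE = np.exp(x) / (kT * (1.0 + np.exp(x)) ** 2)  # peaked at E_F
    G = G0 * np.sum(T_of_E(E) * minus_dfdE) * (E[1] - E[0])
    G_tu = G0 * float(T_of_E(np.array([E_F]))[0])
    return G, G_tu, G - G_tu
```

With a transmission that vanishes inside the semiconducting gap and $E_F$ near mid-gap, $G_{Tu}$ collapses with increasing channel length while $G_{Th}$ grows with temperature, mirroring the tunneling-to-thermal-activation crossover discussed below.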
Since the transmission coefficient is approximately voltage-independent at low bias when the voltage drops mostly across the interfaces, [@XueMol03] here we calculate the I-V characteristics using the equilibrium transmission coefficient instead of the full self-consistent calculation at each bias voltage. We find that the conductance of the Au-SWNT-Ti junction shows a transition from a tunneling-dominated to a thermal-activation-dominated regime with increasing channel length, but the length where this occurs is longer than for the symmetric Au/Ti-SWNT-Au/Ti junctions (Fig. \[xueFig5\](a)). This is partly due to the fact that the Fermi-level is closer to the mid-gap of the SWNT band inside the channel, and partly due to the reduced transmission close to the valence-band edge (Fig. \[xueFig4\](d)) caused by the band structure modulation at the Ti-SWNT interface. Due to the finite number of conduction channels, the increase of the conductance with temperature is rather slow (Fig. \[xueFig5\](b)). [@XueNT] The relative contribution of tunneling and thermal-activation to the room-temperature I-V characteristics is shown in Figs. \[xueFig5\](c) and \[xueFig5\](d) for the 20- and 40-unit cell long (8.4 and 16.9 nm) SWNTs respectively, where we see that the thermal-activation contribution increases rapidly with bias voltage for the 20-unit cell SWNT junction, while the thermal-activation contribution dominates the I-V characteristics at all bias voltages for the 40-unit cell SWNT. In conclusion, we have presented an atomistic real-space analysis of the Schottky barrier effect in the two-terminal SWNT junction with hetero-metallic contacts, which shows an effective decoupling of interface and bulk effects. Further analysis is needed that treats both the gate and source/drain fields self-consistently in real space to achieve a thorough understanding of NTFETs. Author to whom correspondence should be addressed. E-mail: yxue@uamail.albany.edu. 
URL: http://www.albany.edu/ yx152122. Dekker C 1999 *Phys. Today* [**52**]{}(5) 22 Bachtold A, Hadley P, Nakanishi T and Dekker C 2001 *Science* [**294**]{} 1317 Avouris Ph, Appenzeller J, Martel R and Wind S J 2003 *Proc. IEEE* [**91**]{} 1772 Javey A, Guo J, Wang Q, Lundstrom M and Dai H 2003 *Nature* [**424**]{} 654; Javey A, Guo J, Paulsson M, Wang Q, Mann D, Lundstrom M and Dai H 2004 *Phys. Rev. Lett.* [**92**]{} 106804 Yaish Y, Park J-Y, Rosenblatt S, Sazonova V, Brink M and McEuen P L 2004 *Phys. Rev. Lett.* [**92**]{} 46401 Xue Y and Datta S 1999 *Phys. Rev. Lett.* [**83**]{} 4844; Le[ó]{}nard F and Tersoff J 2000 *Phys. Rev. Lett.* [**84**]{} 4693; Odintsov A A 2000 *Phys. Rev. Lett.* [**85**]{} 150; Nakanishi T, Bachtold A and Dekker C 2002 *Phys. Rev. B* [**66**]{} 73307 Heinze S, Tersoff J, Martel R, Derycke V, Appenzeller J and Avouris Ph 2002 *Phys. Rev. Lett.* [**89**]{} 106801; Appenzeller J, Knoch J, Radosavljevi[ć]{} M and Avouris Ph 2004 *ibid.* [**92**]{} 226802 Xue Y and Ratner M A 2003 *Appl. Phys. Lett.* [**83**]{} 2429; Xue Y and Ratner M A 2004 *Phys. Rev. B* [**69**]{} 161402(R); Xue Y and Ratner M A 2004 *Phys. Rev. B* In Press (*Preprint* cond-mat/0405465) Appenzeller J, Radosavljevi[ć]{} M, Knoch J and Avouris Ph 2004 *Phys. Rev. Lett.* [**92**]{} 48301 Minot E D, Yaish Y, Sazonova V, Park J-Y, Brink M and McEuen P L 2003 *Phys. Rev. Lett.* [**90**]{} 156401; Cao J, Wang Q and Dai H 2003 *Phys.Rev. Lett.* [**90**]{} 157601 Chiu P-W, Kaempgen M, and Roth S 2004 *Phys. Rev. Lett.* [**92**]{} 246802; Chen G, Bandow S, Margine E R, Nisoli C, Kolmogorov A N, Crespi V H, Gupta R, Sumanasekera G U, Iijima S and Eklund P C 2003 *Phys. Rev. Lett.* [**90**]{} 257403 Experimentally low-resistance contacts can be obtained either by growing SWNTs directly out of the predefined catalyst islands and subsequently covering the catalyst islands with metallic contact pads (Ref. 
) or by using standard lithography and lift-off techniques with subsequent annealing at high-temperature (Ref. ). In both cases, the ends of the long SWNT wires are surrounded entirely by the metals with strong metal-SWNT surface chemical bonding, although the exact atomic structure of the metal-SWNT interface remains unclear. Contacts can also be formed by depositing SWNT on top of the predefined metallic electrodes and side-contacted to the surfaces of the metals (side-contact scheme), which corresponds to the weak coupling limit due to the weak van der Waals bonding in the side-contact geometry leading to high contact resistance (Ref. ). Other types of contact may also exist corresponding to intermediate coupling strength. The contact geometry chosen in this work thus serves as a simplified model of the low-resistance contact. A comprehensive study of the contact effects in SWNT junction devices is currently under way and will be reported in a future publication. 1994 *CRC Handbook of Chemistry and Physics* (CRC Press, Boca Raton) Yao Z, Kane C L and Dekker C 2000 *Phys. Rev. Lett.* [**84**]{} 2941; Park J-Y, Rosenblatt S, Yaish Y, Sazonova V, Ustunel H, Braig S, Arias T A, Brouwer P and McEuen P L 2004 *Nano Lett.* [**4**]{} 517 Xue Y, Datta S and Ratner M A 2002 *Chem. Phys.* [**281**]{} 151; See also Datta S 1995 *Electron Transport in Mesoscopic Systems* (Cambridge University Press, Cambridge) The SWNT-metal interaction arises from one discrete cylindrical shell of metal atoms, surrounded by the bulk metal and treated using the Green’s function method as detailed in Ref. . We use a SWNT-metal surface distance of $2.0(\AA)$, close to the average inter-atomic spacing in the SWNTs and metals. Hoffmann R 1988 *Rev. Mod. Phys.* [**60**]{} 601; Rochefort A, Salahub D R and Avouris Ph 1999 *J. Phys. Chem. B* [**103**]{} 641 Le[ó]{}nard F and Tersoff J 2002 *Appl. Phys. Lett.* [**81**]{} 4835 Wind S J, Appenzeller J, and Avouris Ph 2003 *Phys. Rev. 
Lett.* [**91**]{} 58301 Auvray S, Borghetti J, Goffman M F, Filoramo A, Derycke V, Bourgoin J P and Jost O 2004 *Appl. Phys. Lett.* [**84**]{} 5106 Freitag M, Radosavljević M, Clauss W and Johnson A T 2000 *Phys. Rev. B* [**62**]{} R2307; Venema L C, Janssen J W, Buitelaar M R, Wildöer J W G, Lemay S G, Kouwenhoven L P and Dekker C 2000 *Phys. Rev. B* [**62**]{} 5238 Xue Y and Ratner M A 2003 *Phys. Rev. B* [**68**]{} 115406; Xue Y and Ratner M A 2004 *Phys. Rev. B* [**69**]{} 85403 ![\[xueFig1\] (Color online) (a) Schematic illustration of the Au-SWNT-Ti junction. The ends of the long SWNT wire are surrounded entirely by the semi-infinite electrodes, with only a finite segment being sandwiched between the electrodes (defined as the channel). Also shown is the coordinate system of the nanotube junction. (b) Schematic illustration of the band diagram in the Au-SWNT-Ti junction. The band alignment in the middle of the SWNT junction is determined by the average of the metal work functions. $W_{1(2)},E_{F}$ denote the work functions and Fermi level of the bi-metallic junction. ](xueFig1.eps){height="4.0in" width="5.0in"} ![\[xueFig2\] Electrostatics of the Au-SWNT-Ti junction for SWNT channels of different lengths. (a) Upper figure shows transferred charge per atom as a function of position along the SWNT axis for SWNT channel lengths of $2.0$, $8.4$, $12.6$, $16.9$ and $21.2$ nm. Lower figure shows the magnified view of the transferred charge in the middle of the channel for the longest (21.2 nm) SWNT studied. (b) Upper figure shows the electrostatic potential change at the cylindrical surface of the 20-unitcell (8.4 nm) and 40-unitcell (16.9 nm) SWNTs studied. The dotted line denotes the linear voltage ramp $V_{ext}$ (contact potential) due to the work function difference of gold and titanium. The dashed line shows the charge-transfer induced electrostatic potential change $\delta V(\delta \rho)$.
The solid line shows the net electrostatic potential change $V_{ext}+\delta V$. Lower figure shows the magnified view of the electrostatic potential change in the middle of the 40-unitcell SWNT junction. ](xueFig2-1.eps "fig:"){height="3.0in" width="5.0in"} ![\[xueFig2\] Electrostatics of the Au-SWNT-Ti junction for SWNT channels of different lengths. (a) Upper figure shows transferred charge per atom as a function of position along the SWNT axis for SWNT channel lengths of $2.0$, $8.4$, $12.6$, $16.9$ and $21.2$ nm. Lower figure shows the magnified view of the transferred charge in the middle of the channel for the longest (21.2 nm) SWNT studied. (b) Upper figure shows the electrostatic potential change at the cylindrical surface of the 20-unitcell (8.4 nm) and 40-unitcell (16.9 nm) SWNTs studied. The dotted line denotes the linear voltage ramp $V_{ext}$ (contact potential) due to the work function difference of gold and titanium. The dashed line shows the charge-transfer induced electrostatic potential change $\delta V(\delta \rho)$. The solid line shows the net electrostatic potential change $V_{ext}+\delta V$. Lower figure shows the magnified view of the electrostatic potential change in the middle of the 40-unitcell SWNT junction. ](xueFig2-2.eps "fig:"){height="3.0in" width="5.0in"} ![\[xueFig3\] (Color online) Local density of states (LDOS) as a function of position along the SWNT axis for SWNT channel length of $16.9$ nm. We show the result when self-consistent SWNT screening of the built-in electric field is included in (a). For comparison we have also shown the result for the non-self-consistent calculation in (b). The plotted LDOS is obtained by summing over the $10$ atoms of each carbon ring of the $(10,0)$ SWNT. Note that each cut along the energy axis at a given axial position gives the LDOS of the corresponding carbon ring and each cut along the position axis at a given energy gives the corresponding band shift.
](xueFig3-1.eps "fig:"){height="3.8in" width="5.0in"} ![\[xueFig3\] (Color online) Local density of states (LDOS) as a function of position along the SWNT axis for SWNT channel length of $16.9$ nm. We show the result when self-consistent SWNT screening of the built-in electric field is included in (a). For comparison we have also shown the result for the non-self-consistent calculation in (b). The plotted LDOS is obtained by summing over the $10$ atoms of each carbon ring of the $(10,0)$ SWNT. Note that each cut along the energy axis at a given axial position gives the LDOS of the corresponding carbon ring and each cut along the position axis at a given energy gives the corresponding band shift. ](xueFig3-2.eps "fig:"){height="3.8in" width="5.0in"} ![\[xueFig4\] Local-density-of-states (LDOS) and transmission versus energy (TE) characteristics of the 40-unit cell Au-SWNT-Ti junction. (a) LDOS at the 1st unit cell adjacent to the Au side (left end) of the Au-SWNT-Ti junction (solid line) and the LDOS at the corresponding location of the Au-SWNT-Au junction (dashed line). (b) LDOS in the middle unit cell of the Au-SWNT-Ti junction (solid line) and the LDOS of the bulk (10,0) SWNT (dashed line). (c) LDOS at the 1st unit cell adjacent to the Ti side (right end) of the Au-SWNT-Ti junction (solid line) and the LDOS at the corresponding location of the Ti-SWNT-Ti junction (dashed line). (d) TE characteristics of the Au-SWNT-Ti junction. ](xueFig4.eps){height="4.0in" width="5.0in"} ![\[xueFig5\] (a) Room temperature conductance of the Au-SWNT-Ti junction as a function of SWNT channel length. (b) Temperature dependence of the conductance of the 40-unit cell (16.9 nm) SWNT junction. The room temperature current-voltage characteristics of the 20- and 40-unit cell SWNT junctions are shown in (c) and (d), respectively. ](xueFig5.eps){height="4.0in" width="5.0in"}
--- abstract: 'Multiplicity distributions of charged particles for pp collisions at LHC Run 1 energies, from $\sqrt{s}=$ 0.9 to 8 TeV, are measured over a wide pseudorapidity range ($-3.4<\eta<5.0$) for the first time. The results are obtained using the Forward Multiplicity Detector and the Silicon Pixel Detector within ALICE. The results are compared to Monte Carlo simulations and to the IP-Glasma model.' address: 'Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark' author: - 'Valentina Zaccolo, on behalf of the ALICE Collaboration' bibliography: - 'biblio\_QM.bib' title: 'Charged-Particle Multiplicity Distributions over a Wide Pseudorapidity Range in Proton-Proton Collisions with ALICE' --- charged-particle multiplicity distributions, pp collisions, saturation models, forward rapidity Introduction {#Intro} ============ The multiplicity distribution of charged particles ($N_{\text{ch}}$) produced in high energy pp collisions, $\text{P}(N_{\text{ch}})$, is sensitive to the number of collisions between quarks and gluons contained in the colliding protons and, in general, to the mechanisms underlying particle production. In particular, $\text{P}(N_{\text{ch}})$ is a good probe for the saturation density of the gluon distribution in the colliding hadrons. The pp charged-particle multiplicity distributions are measured for five successively wider pseudorapidity ranges. The full description of the ALICE detector is given in [@Aamodt:2008zz]. In this analysis, only three subdetectors are used, namely, the V0 detector [@Abbas:2013taa], the Silicon Pixel Detector (SPD) [@Aamodt:2008zz] and the Forward Multiplicity Detector (FMD) [@Christensen:2007yc], to achieve the maximum possible pseudorapidity coverage ($-3.4<\eta<5.0$). Analysis Procedure {#Analysis} ================== Three different collision energies (0.9, 7, and 8 TeV) are analyzed here.
Pile-up events produce artificially large multiplicities that enhance the tail of the multiplicity distribution; therefore, special care was taken to avoid runs with high pile-up. For the measurements presented here, the pile-up probability is $\sim2\%$. The fast timing of the V0 and SPD is used to select events in which an interaction occurred, and events are divided into two trigger classes. The first class includes all inelastic events (INEL), selected with the same condition used to determine that an interaction occurred (the MB$_{\text{OR}}$ trigger condition). The second class of events requires a particle to be detected in both the V0A and the V0C (MB$_{\text{AND}}$ trigger condition). This class is called the Non-Single-Diffractive (NSD) event class, in which the majority of Single-Diffractive events are removed. The FMD has nearly 100$\%$ azimuthal acceptance, but the SPD has significant dead regions that must be accounted for. On the other hand, interactions in detector material will increase the detected number of charged particles and have to be taken into account. The main ingredients necessary to evaluate the primary multiplicity distributions are the raw (detected) multiplicity distributions and a matrix, which converts the raw distribution to the true primary one. The raw multiplicity distributions are determined by counting the number of clusters in the SPD acceptance, the number of energy loss signals in the FMD [@Abbas:2013bpa], or the average between the two if the acceptance of the SPD and FMD overlap. The response of the detector is determined by the matrix $R_{\text{mt}}$ which, when normalized, is the probability that an event with true multiplicity $t$ and measured multiplicity $m$ occurs.
This matrix is obtained using Monte Carlo simulations, in this case the PYTHIA ATLAS-CSC flat tune [@d'Enterria:2011kw], where the generated particles are propagated through the detector simulation code (in this case GEANT [@Brun:1994aa]) and then through the same reconstruction steps as the actual data. The raw distributions are then corrected with the response matrix through an iterative application of Bayes’ unfolding [@2010arXiv1010.0632D]. [0.39]{} ![Charged-particle multiplicity distributions for NSD pp collisions at $\sqrt{s}=0.9$ and 8 TeV. The lines show fits to the data using double NBDs (eq. \[eq2\]). Ratios of the data to the fits are also shown.[]{data-label="V0AND900"}](2015-Sep-21-results_0_1.pdf "fig:"){width="\textwidth"} [0.39]{} ![Charged-particle multiplicity distributions for NSD pp collisions at $\sqrt{s}=0.9$ and 8 TeV. The lines show fits to the data using double NBDs (eq. \[eq2\]). Ratios of the data to the fits are also shown.[]{data-label="V0AND900"}](2015-Sep-21-results_2_1.pdf "fig:"){width="\textwidth"} The probability that an event is triggered at all depends on the multiplicity of produced charged particles. At low multiplicities, large trigger inefficiencies exist and must be corrected for. The event selection efficiency, $\epsilon_{\text{TRIG}}$, is defined by dividing the number of reconstructed events with the selected hardware trigger condition and with the reconstructed vertex less than 4 cm from the nominal IP by the same quantity but for the true interaction classification: $\epsilon_{\text{TRIG}}=N_{\text{ch,reco}}/N_{\text{ch,gen}}$. The unfolded distribution is corrected for the vertex and trigger inefficiency by dividing each multiplicity bin by its $\epsilon_{\text{TRIG}}$ value.
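The unfolding and efficiency-correction steps described above can be sketched in a few lines of NumPy. This is a minimal illustration of D'Agostini-style iterative Bayesian unfolding, not the actual ALICE code: the function names, the flat starting prior, and the small toy response matrix are all illustrative assumptions.

```python
import numpy as np

def bayes_unfold(measured, response, iterations=4):
    """Iterative Bayesian unfolding sketch.

    measured : counts per measured-multiplicity bin, shape (M,)
    response : response[m, t] = P(measured bin m | true bin t)
    Returns an estimate of the true-multiplicity spectrum, shape (T,).
    """
    n_true = response.shape[1]
    prior = np.full(n_true, measured.sum() / n_true)   # flat starting prior
    for _ in range(iterations):
        joint = response * prior                        # shape (M, T)
        norm = joint.sum(axis=1, keepdims=True)         # expected counts per m-bin
        posterior = np.where(norm > 0, joint / norm, 0.0)   # Bayes: P(t | m)
        efficiency = response.sum(axis=0)               # P(measured at all | t)
        unfolded = measured @ posterior                 # redistribute counts to t
        prior = np.where(efficiency > 0, unfolded / efficiency, 0.0)
    return prior

def correct_trigger(unfolded, eps_trig):
    """Divide each multiplicity bin by its event selection efficiency."""
    return unfolded / eps_trig
```

With a perfectly diagonal response the procedure returns the measured spectrum unchanged; with off-diagonal smearing it iterates toward the spectrum that, folded with the response, reproduces the measurement.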
Diffraction was implemented using the Kaidalov-Poghosyan model [@Kaidalov:2009aw] to tune the cross sections for diffractive processes (the measured diffraction cross sections at the LHC and the shapes of the diffractive masses $M_{\text{X}}$ are implemented in the Monte Carlo models used for the $\epsilon_{\text{TRIG}}$ computation). Results {#Results} ======= The multiplicity distributions have been measured for the two event classes (INEL and NSD) for pp collisions at $\sqrt{s}=$ 0.9, 7, and 8 TeV. Fits to the sum of two Negative Binomial Distributions (NBDs) have been performed here and are plotted together with the results in Figs. \[V0AND900\] and \[V0AND7000\]. The distributions have been fitted using the function $$\label{eq2} \text{P}(n)=\lambda[\alpha \text{P}_{NBD}(n,\langle n\rangle_{\text{1}},k_{\text{1}})+(1-\alpha)\text{P}_{NBD}(n,\langle n\rangle_{\text{2}},k_{\text{2}})].$$ Since the NBDs do not describe the 0-bin (and, for the wider pseudorapidity ranges, the first bins), those bins are removed from the fit and a normalization factor $\lambda$ is introduced. The $\alpha$ parameter gives the fraction of soft events. It is lower for higher energies and for wider pseudorapidity ranges, where the percentage of semi-hard events included is higher: $\alpha\sim65\%$ for $\vert\eta\vert<2.0$ at $\sqrt{s}=$ 0.9 TeV and $\alpha\sim35\%$ for $-3.4<\eta<5.0$ at 7 and 8 TeV. $\langle n\rangle_{\text{1}}$ is the average multiplicity of the soft (first) component, while $\langle n\rangle_{\text{2}}$ is the average for the semi-hard (second) component. The parameters $k_{\text{1,2}}$ represent the shapes of the two components of the distribution. In Fig. \[V0AND900\], the obtained multiplicity distributions for 0.9 TeV and 8 TeV for the NSD event class are shown for five pseudorapidity ranges, $\vert\eta\vert<2.0$, $\vert\eta\vert<2.4$, $\vert\eta\vert<3.0$, $\vert\eta\vert<3.4$ and $-3.4<\eta<5.0$.
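The double-NBD fit function of eq. (2) is straightforward to evaluate numerically. The sketch below uses the log-gamma form of the NBD so that it stays finite at large $n$; the parameter values in the usage example are illustrative, not the fitted ALICE values.

```python
import math

def nbd(n, mean, k):
    """Negative binomial P(n) with average multiplicity `mean` and shape k,
    computed in log form to avoid overflow of Gamma(n + k) at large n."""
    log_p = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
             + n * math.log(mean / (mean + k))
             + k * math.log(k / (mean + k)))
    return math.exp(log_p)

def double_nbd(n, lam, alpha, mean1, k1, mean2, k2):
    """Eq. (2): weighted sum of a soft and a semi-hard NBD component."""
    return lam * (alpha * nbd(n, mean1, k1)
                  + (1.0 - alpha) * nbd(n, mean2, k2))

# Illustrative parameters: a soft component with <n>_1 = 4 and a
# semi-hard component with <n>_2 = 18, mixed with alpha = 0.65.
p10 = double_nbd(10, 1.0, 0.65, 4.0, 2.0, 18.0, 3.0)
```

For $\lambda = 1$ each component is a normalized probability distribution, so the mixture sums to unity over $n$; in the fits $\lambda$ absorbs the bins excluded from the fit range.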
The distributions are multiplied by factors of 10 to allow all distributions to fit in the same figure without overlapping. Figure \[V0AND7000\] shows the results for the INEL event class for collisions at 7 TeV (left plot). Comparisons with distributions obtained with the PYTHIA 6 Perugia 0 tune [@Skands:2010ak], the PYTHIA 8 Monash tune [@Sjostrand:2014zea], PHOJET [@Bopp:1998rc] and EPOS LHC [@Pierog:2013ria] Monte Carlo generators are shown for INEL events at 7 TeV (right plot). Both PHOJET and PYTHIA 6 strongly underestimate the multiplicity distributions. PYTHIA 8 reproduces the tails well for the wider pseudorapidity range, but shows an enhancement in the peak region. EPOS with the LHC tune describes the distributions well, both in the first bins, which are dominated by diffractive events, and in the tails. [0.39]{} ![Left: charged-particle multiplicity distributions for INEL pp collisions at $\sqrt{s}=7$ TeV. The lines show fits to the data using double NBDs (eq. \[eq2\]). Ratios of the data to the fits are also shown. Right: comparison of multiplicity distributions for INEL events to PYTHIA 6 Perugia 0, PYTHIA 8 Monash, PHOJET and EPOS LHC at 7 TeV.[]{data-label="V0AND7000"}](2015-Sep-21-results_1_0.pdf "fig:"){width="\textwidth"} [0.39]{} ![Left: charged-particle multiplicity distributions for INEL pp collisions at $\sqrt{s}=7$ TeV. The lines show fits to the data using double NBDs (eq. \[eq2\]). Ratios of the data to the fits are also shown. Right: comparison of multiplicity distributions for INEL events to PYTHIA 6 Perugia 0, PYTHIA 8 Monash, PHOJET and EPOS LHC at 7 TeV.[]{data-label="V0AND7000"}](2015-Sep-21-MCresults_1_0.pdf "fig:"){width="\textwidth"} The multiplicity distributions are compared to those from the IP–Glasma model [@Schenke:2013dpa]. This model is based on the Color Glass Condensate (CGC) [@Iancu:2003xm]. It has been shown that NBDs are generated within the CGC framework [@Gelis:2009wh; @McLerran:2008es]. In Fig.
\[IPGlasmaMineNSD7000\], the distribution for $\vert\eta\vert<2.0$ is shown together with the IP–Glasma model distributions as a function of the KNO variable $N_{\text{ch}}/\langle N_{\text{ch}}\rangle$. The IP–Glasma distribution shown in green is generated with a fixed ratio between $Q_{s}$ (the gluon saturation scale) and the density of color charge, which introduces no fluctuations. The blue distribution, instead, is generated with fluctuations of the color charge density around the mean, following a Gaussian distribution with width $\sigma=0.09$. The black distribution includes an additional source of fluctuations, dominantly of non-perturbative origin, from stochastic splitting of dipoles, which is not accounted for in the conventional frameworks of CGC [@McLerran:2015qxa]. In this model, the evolution of color charges in the rapidity direction still needs to be implemented; therefore, in the present model the low multiplicity bins are not reproduced for the wide pseudorapidity range presented here. ![Charged-particle multiplicity distributions for pp collisions at $\sqrt{s}=7$ TeV compared to distributions from the IP–Glasma model with the ratio between $Q_{s}$ and the color charge density either fixed (green), allowed to fluctuate with a Gaussian (blue) [@Schenke:2013dpa] or with additional fluctuations of the proton saturation scale (black) [@McLerran:2015qxa].[]{data-label="IPGlasmaMineNSD7000"}](2015-Sep-21-comparison.pdf){width="79.00000%"} Conclusions {#Conclusions} =========== Data from the Silicon Pixel Detector (SPD) and the Forward Multiplicity Detector (FMD) in ALICE were used to access a uniquely wide pseudorapidity coverage at the LHC of more than eight $\eta$ units, $-3.4<\eta<5.0$.
The charged-particle multiplicity distributions were presented for two event classes, INEL and NSD, and extend the pseudorapidity coverage of the earlier results published by ALICE [@Adam:2015gka] and CMS [@Khachatryan:2010nk] around midrapidity and, consequently, the high-multiplicity reach. PYTHIA 6 and PHOJET produce distributions which strongly underestimate the fraction of high multiplicity events. PYTHIA 8 slightly underestimates the tails of the distributions, while EPOS reproduces both the low and the high multiplicity events. The Color Glass Condensate based IP–Glasma models produce distributions which underestimate the fraction of high multiplicity events, but introducing fluctuations of the saturation momentum improves the description of the high multiplicity events.
--- abstract: 'Online social networks (OSNs) are ubiquitous, attracting millions of users all over the world. Being popular communication media, OSNs are exploited in a variety of cyber-attacks. In this article, we discuss the [*chameleon* ]{}attack technique, a new type of OSN-based trickery where malicious posts and profiles change the way they are displayed to OSN users in order to conceal themselves before the attack or to avoid detection. Using this technique, adversaries can, for example, avoid censorship by concealing true content when it is about to be inspected; acquire social capital to promote new content while piggybacking on a trending one; or cause embarrassment and serious reputation damage by tricking a victim into liking, retweeting, or commenting on a message that he or she would not normally endorse, with no indication of the trickery within the OSN. An experiment performed with closed Facebook groups of sports fans shows that (1) [*chameleon* ]{}pages can slip past the moderation filters by changing the way their posts are displayed and (2) moderators do not distinguish between regular and [*chameleon* ]{}pages. We list the OSN weaknesses that facilitate the [*chameleon* ]{}attack and propose a set of mitigation guidelines.' author: - 'Aviad Elyashar, Sagi Uziel, Abigail Paradise, and Rami Puzis' bibliography: - 'references.bib' title: 'The Chameleon Attack: Manipulating Content Display in Online Social Media' --- Introduction {#sec:introduction} ============ The following scenario is not a conventional introduction. Rather, it is a brief example to stress the importance and potential impact of the disclosed weakness, unless the countermeasures described in this article are applied. \[ex:teaser\] Imagine a controversial Facebook post shared by a friend of yours. You have a lot to say about the post, but you would rather discuss it in person to avoid unnecessary attention online.
A few days later when you talk with your friend about the shared post, the friend does not understand what you’re referring to. Both of you scan through his/her timeline and nothing looks like that post. The next day you open Facebook and discover that in the last six months you have joined three Facebook groups of Satanists; you actively posted on a page supporting an extreme political group (although your posts are not directly related to the topics discussed there), and you liked several websites leading to video clips with child abuse. A terrible situation that could hurt your good name, especially if you are a respected government employee! At the time of submission of this article, the nightmare described in Example \[ex:teaser\] is still possible in major online social networks (OSNs) (see Section \[sec:susceptibility\]) due to a conceptual design flaw. Today, OSNs are an integral part of our lives [@boyd2007social]. They are powerful tools for disseminating, sharing and consuming information, opinions, and news [@kwak2010twitter]; for expanding connections [@gilbert2009predicting]; etc. However, OSNs are also constantly abused by cybercriminals who exploit them for malicious purposes, including spam and malware distribution [@lee2010uncovering], harvesting personal information [@boshmaf2011socialbot], infiltration [@elyashar2013homing], and spreading misinformation [@ferrara2015manipulation]. Bots, fake profiles, and fake information are all well-known scourges being tackled by OSN providers, academic researchers, and organizations around the world with various levels of success. It is extremely important to constantly maintain social platforms, both content- and service-wise, in order to limit abuse as much as possible. To provide the best possible service to their users, OSNs allow users to edit or delete published content [@facebook_help_edit_post], edit user profiles, update previews of linked resources, etc.
These features are important to keep social content up to date, to correct grammatical or factual errors in published content, and eliminate abusive content. Unfortunately, they also open an opportunity for a scam where OSN users are tricked into engaging with seemingly appealing content that is later modified. This type of scam is trivial to execute and is out of the scope of this article. Facebook partially mitigates the problem of modifications made to posts after their publication by displaying an indication that a post was edited. Other OSNs, such as Twitter or Instagram, do not allow published posts to be edited. Nevertheless, the major OSNs (Facebook, Twitter, and LinkedIn) allow publishing redirect links, and they support link preview updates. This allows changing the way a post is displayed without any indication that the target content of the URLs has been changed. In this article, we present a novel type of OSN attack termed the [*chameleon* ]{}attack, where the content (or the way it is displayed) is modified over time to create social traction before executing the attack (see Section \[sec:chameleon\_attack\]). We discuss the OSN misuse cases stemming from this attack and their potential impacts in Section \[sec:misuse\]. We review the susceptibility of seven major OSN platforms to the [*chameleon* ]{}attack in Section \[sec:susceptibility\] and present the results of an intrusion into closed Facebook groups facilitated by it in Section \[sec:group\_infiltration\_experiments\]. A set of suggested countermeasures that should be applied to reduce the impact of similar attacks in the future is suggested in Section \[sec:mitigation\]. The contribution of this study is three-fold: - We present a new OSN attack termed the [*chameleon* ]{}attack, including an end-to-end demonstration on major OSNs (Facebook, Twitter, and LinkedIn). - We present a social experiment on Facebook showing that chameleons facilitate infiltration into closed communities. 
- We discuss multiple misuse cases and mitigations, from which we derive a recommended course of action for OSNs. Background on redirection and link preview ========================================== #### Redirection Redirection is a common practice on the web that helps Internet users find relocated resources, use multiple aliases for the same resource, and shorten long and cumbersome URLs. Thus, the use of URL shortening services is very common within OSNs. There are two types of redirect links: server redirects and client redirects. In the case of a server-side redirect, the server returns the HTTP status code 301 (redirect) with a new URL. Major OSNs follow server-side redirects up to the final destination in order to provide their users with a preview of the linked Web resource. In the case of a client-side redirect, the navigation process is carried out by a JavaScript command executed in the client’s browser. Since the OSNs do not render the Web pages, they do not follow client redirects up to the final destination. #### Short links and brand management There are many link redirection services across the Web that use 301 server redirects for brand management, URL shortening, click counts, and various website access statistics. Some of these services that focus on brand management, such as *[rebrandly.com](rebrandly.com)*, allow their clients to change the target URL while maintaining the aliases. Some services, such as *[bitly.com](bitly.com)*, require a premium subscription to change the target URL. The ability to change the target URL without changing the short alias is important when businesses restructure their websites or move them to a different web host. Yet, as will be discussed in Section \[sec:chameleon\_attack\], this feature may be exploited to facilitate the [*chameleon* ]{}attack. #### DNS updates DNS is used to resolve the IP address of a server given a domain name.
The owner of the domain name may designate any target IP address for his/her domain and change it at will. The update process may take up to 24 hours to propagate. Rapid DNS update queries, known as Fast Flux, are used by adversaries to launch spam and phishing campaigns. Race conditions due to the propagation of DNS updates cause a domain name to be associated with multiple, constantly changing IP addresses at the same time. #### Link previews Generating and displaying link previews is an important OSN feature that streamlines social interaction within the OSN. It allows users to quickly get a first impression of a post or a profile without extra clicks. Based on the meta-tags of the target page, the link preview usually includes a title, a thumbnail, and a short description of the resource targeted by the URL [@kopetzky1999visual]. When shortened URLs or other server-side redirects are used, the OSN follows the redirection path to generate a preview of the final destination. These previews are cached due to performance considerations. The major OSNs update the link previews upon request (see Section \[sec:weaknesses\] for details). In the case of a client redirect, some OSNs (e.g., Twitter) use the meta-tags of the first HTML page in the redirect chain. Others (e.g., Facebook) follow the client redirect up to the final destination. The Chameleon Attack {#sec:chameleon_attack} ==================== The [*chameleon* ]{}attack takes advantage of link previews and redirected links to modify the way that published content is displayed within the OSN without any indication of the modifications made. As part of this attack, the adversary circumvents the content editing restrictions of an OSN by using redirect links. ![\[fig:chameleon\_attack\_phases\]Chameleon Attack Phases.](killchain.png){width="0.6\columnwidth"} We align the phases of a typical [*chameleon* ]{}attack to a standard cyber kill chain as follows: 1.
**Reconnaissance** (out of scope): The attacker collects information about the victim using standard techniques to create an appealing disguise for the [*chameleon* ]{}posts and profiles. 2. **Weaponizing** (main phase): The attacker creates one or more redirection chains to web resources (see Required Resources in Section \[sec:resources\]). The attacker creates [*chameleon* ]{}posts or profiles that contain the redirect links. 3. **Delivery** (out of scope): The attacker attracts the victim’s attention to the [*chameleon* ]{}posts and profiles, similar to phishing or spear-phishing attacks. 4. **Maturation** (main phase): The [*chameleon* ]{}content builds trust within the OSN, collects social capital, and interacts with the victims. This phase is inherent to spam and phishing attacks that employ fake OSN profiles. But since such attacks are not considered sophisticated and targeted, this phase is typically ignored in standard cyber kill chains or is referred to by the general term of *social engineering*. Nevertheless, building trust within an OSN is very important for the success of both targeted and untargeted [*chameleon* ]{}attacks. 5. **Execution** (main phase): The attacker modifies the display of the [*chameleon* ]{}posts or profiles by changing the redirect target and refreshing the cached link previews. Since the [*chameleon* ]{}attack is executed outside the victim’s premises, there are no lateral movement or privilege escalation cycles. This attack can be used during the reconnaissance phase of a larger attack campaign or to reduce the cost of weaponizing any OSN-based attack campaign (see examples in Section \[sec:misuse\]). The most important phases in the execution flow of the [*chameleon* ]{}attack are *weaponizing*, *maturation*, and *execution*, as depicted in Figure \[fig:chameleon\_attack\_phases\]. The attacker may proceed with additional follow-up activities depending on the actual attack goal, as described in Section \[sec:misuse\].
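The moving part that the weaponizing and execution phases rely on, a redirect alias whose target can be silently repointed while the OSN builds previews from the final destination, can be illustrated with a small, purely hypothetical in-memory model. The class, function, and URL names below are invented for illustration; in a real attack the role of `RedirectService` is played by an HTTP 301 redirect host or a retargetable link-shortening service, and `build_preview` stands in for the OSN's preview generator.

```python
class RedirectService:
    """Toy stand-in for a retargetable 301-redirect service."""

    def __init__(self):
        self._targets = {}

    def create_alias(self, alias, target):
        """Weaponizing: register an alias that redirects to `target`."""
        self._targets[alias] = target

    def retarget(self, alias, new_target):
        """Execution: silently repoint the alias; the alias itself never changes."""
        self._targets[alias] = new_target

    def resolve(self, url):
        """Follow the redirect chain to its final destination (loop-safe)."""
        seen = set()
        while url in self._targets and url not in seen:
            seen.add(url)
            url = self._targets[url]
        return url


def build_preview(service, posted_url, metadata):
    """OSN side: the link preview reflects the *final* URL's metadata,
    not the posted alias, so retargeting changes the displayed preview."""
    return metadata[service.resolve(posted_url)]
```

Posting the alias once and later calling `retarget` changes what a refreshed preview displays, while the post itself, and the likes and comments it accumulated during maturation, is never edited.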
![image](post-facebook-before.png){width="0.16\linewidth"} ![image](post-facebook-after.png){width="0.16\linewidth"} ![image](post-twitter-before.png){width="0.16\linewidth"} ![image](post-twitter-after.png){width="0.16\linewidth"} ![image](post-linkedin-before.jpg){width="0.16\linewidth"} ![image](post-linkedin-after.jpg){width="0.16\linewidth"} ![image](page-facebook-before.png){width="0.16\linewidth"} ![image](page-facebook-after.png){width="0.16\linewidth"} ![image](page-twitter-before.png){width="0.16\linewidth"} ![image](page-twitter-after.png){width="0.16\linewidth"} ![image](page-linkedin-before.jpg){width="0.16\linewidth"} ![image](page-linkedin-after.jpg){width="0.16\linewidth"} A Brief Showcase ---------------- To demonstrate how a [*chameleon* ]{}attack looks from the user’s perspective, we show here examples of [*chameleon* ]{}posts and profiles.[^1] The link preview in this post will change each time you click the video. It may take about 20 seconds and requires refreshing the page. #### Chameleon Post Figures \[fig:posts\] (1,2) present the same post on Facebook with two different link previews. Both versions of the post lead to *YouTube.com* and are displayed accordingly. There is no indication of any modification made to the post in either of its versions because the actual post was not modified. Neither is there an edit history, for the same reason. Likes and comments are retained. If the post was shared, the shares will show the old link preview even after it was modified in the original post. Similarly, Figure \[fig:posts\] (3,4) and (5,6) present two versions of the same post on Twitter and LinkedIn respectively. There is no edit indication nor edit history because Twitter tweets cannot be edited. As with Facebook, likes, comments, and retweets are retained after changing the posted video and updating the link preview. 
Unlike Facebook, however, the link previews of all retweets and all LinkedIn posts that contain the link will change simultaneously. #### Chameleon Profile Figure \[fig:pages\] presents examples of a [*chameleon* ]{}page on Facebook and a [*chameleon* ]{}profile on Twitter. Since the techniques used to build [*chameleon* ]{}profiles and [*chameleon* ]{}pages are similar, as is their look and feel, in the rest of this paper we will use the terms pages and profiles interchangeably. All OSNs allow changing the background picture and the description of profiles, groups, and pages. A [*chameleon* ]{}profile differs from a regular profile in that it includes [*chameleon* ]{}posts alongside neutral or personal posts. This way a Chelsea fan (Figure \[fig:pages\].1) can pretend to be an Arsenal fan (Figure \[fig:pages\].2) and vice versa. Required Resources {#sec:resources} ------------------ The most important infrastructure element used to execute the [*chameleon* ]{}attack is a redirection service that allows attackers to modify the redirect target without changing the alias. This can be implemented using a link redirection service or a website controlled by the adversary. In the former case, the link redirection service must allow modifying the target link for a previously defined alias. This is the preferred infrastructure to launch the [*chameleon* ]{}attack. In the latter case, if the attacker has control over the redirection server, then a server-side 301 redirect can be used, seamlessly utilizing the link preview feature of major OSNs. If the attacker has no control over the webserver, he/she may still use a client-side redirect. He/she will have to supply the required metadata for the OSN to create link previews. If the attacker owns the domain name used to post the links, he/she may re-target the IP address associated with the domain name to a different web resource.
Fast flux attack infrastructure can also be used; however, this is overkill for the [*chameleon* ]{}attack and may cause the attack to be detected [@holz2008measuring].

Example Instances {#sec:misuse}
-----------------

In this section, we detail several examples of misuse cases [@mcdermott2000eliciting] which extend the general [*chameleon* ]{}attack. Each misuse case provides a specific flavor of the attack execution flow, as well as the possible impact of the attack.

### Incrimination and Shaming

This flavor of the [*chameleon* ]{}attack targets specific users. Shaming is one of the major threats in OSNs [@goldman2015trending]. In countries like Thailand, where people face up to 32 years in prison for “liking” or re-sharing content that insults the king, this misuse case can be especially dangerous. Here, the impact can be greatly amplified if the adversary employs *chameleons* and the victim is careless enough to interact with content posted by a dubious profile or page.

#### Execution flow

The attacker performs the (1) *reconnaissance* and (3) *delivery* phases using standard techniques, similar to a spear-phishing attack.[^2] \(2) During the *weaponizing* phase, the attacker creates [*chameleon* ]{}posts that endorse a topic favored by the victim, e.g., he/she may post some new music clips by the victim’s favorite band. Each post includes a redirect link that points to a YouTube video or similar web resource, but the redirection is controlled by the attacker. \(4) During the *maturation* phase, the victim shows their appreciation of the seemingly appealing content by following the [*chameleon* ]{}page, liking, retweeting, commenting, or otherwise interacting with the [*chameleon* ]{}posts. Unlike in spear-phishing, where the victim is directed to an external resource or is required to expose his/her personal information, here standard interactions that are considered safe within OSNs are sufficient to affiliate the victim with the [*chameleon* ]{}posts.
This significantly lowers the attack barrier. (5) Finally, immediately after the victim’s interaction with the [*chameleon* ]{}posts, the adversary switches their display to content that opposes the victim’s agenda to cause maximal embarrassment or political damage. The new link preview will appear in the victim’s timeline. The OSN will amplify this attack by notifying the victim’s friends (Facebook) and followers (Twitter) about the offensive posts liked, commented on, or retweeted by the victim.

#### Potential impact

At the very least, such an attack can cause discomfort to the victim. It can be life-threatening in cases where the victim is a teenager. And it can have far-reaching consequences if used during political campaigns.

### Long Term Avatar Fleet Management

Adversaries maintain fleets of fake OSN profiles, termed avatars, to collect intelligence, infiltrate organizations, disseminate misinformation, etc. To avoid detection by machine learning algorithms and to build long term trust within the OSN, sophisticated avatars need to be operated by a human [@elyashar2016guided; @paradise2017creation]. The maturation process of such avatars may last from several months to a few years. Fortunately, the attack target and the required number of avatars are usually not known in advance, which significantly reduces the cost-effectiveness of sophisticated avatars. The *chameleon* profiles exposed here facilitate efficient management of a fleet of avatars by maintaining a pool of mature avatars whose timelines are adapted to the agenda of the attack target once it is known.

#### Execution Flow

In this special misuse case, the attack phases *weaponizing* and *maturation* are performed twice: both before and after the attack target is known. \(1) The first *weaponizing* phase starts with establishing the redirect infrastructure and building a fleet of avatars. The avatars are created with neutral displays common within the OSN.
\(2) During the initial *maturation* process, the neutral avatars regularly publish [*chameleon* ]{}posts with neutral displays. They acquire friends while maximizing the acceptance rate of their friend requests [@paradise2017creation]. \(3) Once the attack target is known, the attacker performs the required *reconnaissance*, selects some of the mature [*chameleon* ]{}profiles, and (4) *weaponizes* them with the relevant agenda by changing the profile information and the display of all past [*chameleon* ]{}posts. During (5) *delivery* and (6) the second *maturation* phase, the refreshed [*chameleon* ]{}profiles (avatars) contact the target and build trust with it. The (7) *execution* phase in this misuse case depends on the attacker’s goals. Avatars that have already engaged in an attack will likely be discarded.

#### Potential Impact

The adversary does not have to create an OSN account and build an appropriate agenda for each avatar long before executing an attack. *Chameleon* profiles and posts are created and maintained as a general resource suitable for various attack campaigns. As a result, the cost of maintaining such avatars is dramatically reduced. Moreover, if an avatar is detected and blocked during the attack campaign, its replacement can be *weaponized* and released very quickly.

### Evading Censorship

OSNs maintain millions of entities, such as pages, groups, communities, etc. For example, Facebook groups unite users based on shared interests [@casteleyn2009use]. To ensure proper language, avoid trolling and abuse, or admit only users with a very specific agenda, moderators inspect the users who ask to join the groups and review the published posts. The *chameleon* attack can help in evading censorship, as well as a shallow screening of OSN profiles. See Section \[sec:mitigation\] for specific recommendations on profile screening to detect [*chameleon* ]{}profiles.
For example, assume two Facebook groups uniting Democrat and Republican activists during U.S. elections, and a dishonest activist from one political extreme who would like to join the rivals’ Facebook group. Reasons may vary from trolling to spying. Assume that this activist would like to spread propaganda within the rival group. Pages that exhibit an agenda inappropriate for the group would not be admitted by the group owner. The following procedure allows the rival activist to bypass the censorship of the group moderator.

#### Execution Flow

During the (1) *reconnaissance* phase, the adversary learns the censorship rules of the target. (2) The *weaponizing* phase includes establishing a [*chameleon* ]{}profile with an agenda that fits the censorship. During the (3) *maturation* phase, the adversary publishes posts with redirect links to videos fitting the censorship rules. The (4) *delivery* phase in this case represents the censored act, such as requesting to enter a group, sending a friend request, posting a video, etc. The censor (e.g., the group’s administrator) reviews the profile and its timeline and approves them to be presented to all group members. Finally, in the (5) *execution* phase, the adversary changes the display of its profile and posts to reflect a new agenda that would otherwise not be allowed by the censor.

#### Potential Impact

This attack allows the adversary to infiltrate a closed group and publish posts that contradict the administrator’s policy. Moreover, one-time censorship of published content would no longer be sufficient. Moderators would have to invest much more effort in the periodic monitoring of group members and their posts to ensure that they still fit the group’s agenda. In Section \[sec:group\_infiltration\_experiments\], we demonstrate the execution of the [*chameleon* ]{}attack for penetrating closed groups, using soccer fan groups as an allegory for groups with extreme political agendas.
### Promotion

Unfortunately, the promotion of content, products, ideas, etc. using bogus and unfair methods is very common in OSNs. Spam and crowdturfing are two example techniques used for promotion. The objective of spam is to reach maximal exposure through unsolicited messages. Bots and crowdturfers are used to misrepresent the promoted content as generally popular by adding likes and comments. Crowdturfers [@lee2013crowdturfers] are human workers who promote social content for economic incentives. The *chameleon* attack can be used to acquire likes and comments from genuine OSN users by piggybacking on popular content.

#### Execution Flow

During the (1) *reconnaissance* phase, the attacker collects information about a topic favored by the general public that is related to the unpopular content the attacker wants to promote. (2) During the *weaponizing* phase, the attacker creates a [*chameleon* ]{}page and posts that support the favored topic. For example, assume an adversary who is a new singer who would like to promote themselves. In the *weaponizing* phase, he/she can create a [*chameleon* ]{}page that supports a well-known singer. During the (3) *delivery* and (4) *maturation* phases, OSN users show their affection for the seemingly appealing content by interacting with the [*chameleon* ]{}page through likes, retweets, comments, etc. As time passes, the [*chameleon* ]{}page accumulates social capital. In the final (5) *execution* phase, the [*chameleon* ]{}page’s display changes to support the artificially promoted content, retaining the unfairly collected social capital.

#### Potential Impact

The attacker can use [*chameleon* ]{}pages and posts to promote content by piggybacking on popular content. The attacker enjoys the social capital provided by a genuine crowd that otherwise would not interact with the dubious content. Social capital obtained from bots or crowdturfers can be down-rated using various reputation management techniques.
In contrast, social capital obtained through the [*chameleon* ]{}trickery is provided by genuine OSN users.

### Clickbait

Most of the revenues of online media come from online advertisements [@chakraborty2016stop]. This phenomenon has generated a significant amount of competition among online media websites for the readers’ attention and their clicks. To attract users and encourage them to visit a website and click a given link, website administrators use catchy headlines along with the provided links, which lure users into clicking on the given link [@chakraborty2016stop]. This phenomenon is called clickbait.

#### Execution Flow

\(1) During the *weaponizing* phase, the attacker creates [*chameleon* ]{}profiles with posts containing redirect links. Consider an adversary that is a news provider who would like to increase the traffic to its website. To increase its revenues, it can do the following: in the *weaponizing* phase, it publishes a [*chameleon* ]{}post with a catchy headline and an attached link to an interesting article. Later, in the *maturation* phase, users are attracted to the post by its appealing link preview and headline, and the traffic to the linked website increases. Finally, in the *execution* phase, the adversary changes the redirect target of the posted link while leaving the link preview unchanged. As a result, new users will still click on the [*chameleon* ]{}post, whose display did not change, but they will now be navigated to the adversary’s website.

#### Potential Impact

By applying this attack, the attacker can increase his traffic and, eventually, his income. Luring users with an attractive link preview increases the likelihood that they will click on it and consume his content.
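The misuse cases above all rely on the gap between the cached link preview and the live redirect target. When the attacker controls only a web page (not the server), a client-side redirect can supply the preview metadata the OSN crawler reads while forwarding human visitors elsewhere. The sketch below illustrates this; the function name, parameters, and URLs are hypothetical, and the `og:*` tags follow the Open Graph convention that major OSNs use to build previews.

```python
def redirect_page(target_url, preview_title, preview_image):
    """Render a client-side redirect page (illustrative sketch).

    The og:* metadata determines the cached link preview; the
    meta-refresh forwards visitors to target_url, which the attacker
    can change later without the cached preview being refreshed.
    """
    return (
        "<!DOCTYPE html><html><head>"
        f'<meta property="og:title" content="{preview_title}">'
        f'<meta property="og:image" content="{preview_image}">'
        f'<meta http-equiv="refresh" content="0; url={target_url}">'
        "</head><body>Redirecting...</body></html>"
    )

# The preview advertises one thing; the refresh sends users elsewhere.
page = redirect_page("https://attacker.example/article",
                     "10 adorable puppies",
                     "https://img.example/puppy.jpg")
```

Re-generating this page with a new `target_url` implements the *execution* phase of the clickbait flavor: the posted alias and its preview stay the same, but clicks land on the new destination.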
Susceptibility of Social Networks to the Chameleon Attack {#sec:susceptibility}
=========================================================

Online Social Networks
----------------------

  Attacker’s ability             OSN feature                                        Facebook   Twitter   WhatsApp   Instagram   Reddit   Flickr   LinkedIn
  ------------------------------ -------------------------------------------------- ---------- --------- ---------- ----------- -------- -------- ----------
  Creating artificial timeline   Editing a post’s publication date                  Y          N         N          N           N        N        N
                                 Presenting original publication date               Y          -         -          -           -        -        -
  Changing content               Editing previously published posts                 Y          N         N          Y           Y        Y        Y
                                 Presenting editing indication in published posts   Y          -         -          Y           N        N        Y
                                 Presenting editing indication in shared post       N          -         -          Y           -        -        Y
                                 Presenting edit history                            Y          -         -          N           -        -        N
  Changing display               Publishing redirect links                          Y          Y         Y          N           Y        Y        Y
                                 Displaying link preview                            Y          Y         Y          -           Y        N        Y
                                 Updating link preview                              Y          Y         N          -           N        -        Y
  Switching content              Hiding posts                                       Y          N         N          Y           Y        Y        N

  Y = the feature is available; N = it is not; “-” = not applicable. The editing, publishing, and hiding features facilitate the [*chameleon* ]{}attack; the “Presenting …” indication features mitigate it.

\[tab:OsnCompareTbl\]

### Facebook

Facebook allows users to manipulate the display of previously published posts using several different features: publishing redirect links, editing a post’s publication date, hiding previously published posts, and publishing unauthorized content in a closed group. Up until 2017, when a user edited a post on Facebook, an indicator was presented to notify users that the content had been updated. In 2017, a Facebook update removed this public notification and made the post’s history visible only via the ‘View Edit History’ button. While Facebook allows editing a post’s publication date, it displays a small indication of the original publication date. To view the original publication date, a user must hover over the clock icon shown in the post, and a bubble with the original publication date will appear.
Also, concerning Facebook pages, Facebook does not allow radical changes to the original name of a page within a single day. However, it is still possible to make limited edits to the page’s name: changes so minor that the meaning of the original name is not altered. As a result, we were able to rename a page in phases by editing the name of a given page with small changes in each edit action. First, we changed only two characters of the page’s name. Three days later, we changed two more characters, and so forth, until eventually we were able to rename the page entirely as we wished. As a countermeasure, Facebook employs a mechanism called *Link Shim* to keep Facebook users safe from external malicious links [@FacebookLinkShim]. When a user clicks on an external link posted on Facebook, this mechanism checks whether the link is blacklisted. In case of a suspicious URL, Facebook will notify the user [@FacebookLinkShim]. Redirect links used in [*chameleon* ]{}posts lead to legitimate destinations and so are currently approved by *Link Shim*.

### Twitter

As opposed to Facebook, Twitter does not allow users to edit or hide tweets that have already been published, or to manipulate a tweet’s publication date (see Table \[tab:OsnCompareTbl\]). This makes it more difficult for an attacker to manipulate the display of previously published content. On the other hand, Twitter allows the use of client redirects. This poses the same danger as Facebook redirects, allowing attackers to manipulate the link preview of a tweet with content that is not necessarily related to the target website. Moreover, Twitter allows users to update a link preview using the *Card Validator*.[^3] In addition, Twitter makes it possible to change a user’s display name, but does not allow changing the original username chosen during registration (which serves as an identifier).
### WhatsApp

WhatsApp allows messages to be published with redirect links and displays link previews, but it does not allow updating an already published link preview. As opposed to the other OSNs, WhatsApp is the only one that displays an indication that a message was deleted by its author. WhatsApp is safe against most flavors of the [*chameleon* ]{}attack, except *clickbait*, where an attacker can trick others into clicking a malicious link shown with the preview of a benign link.

### Instagram

Concerning redirect links, Instagram does not allow users to publish external links (see Table \[tab:OsnCompareTbl\]). Since the posts are image-based, the attacker cannot change the published content through a redirect link. However, Instagram allows editing already published posts. The editing process covers the text in the description section, as well as the image itself. If such a change is made to a post by its owner, no indication is shown to users.

### Reddit

Despite its popularity, Reddit is prone to the [*chameleon* ]{}attack: in this OSN, the attacker can edit, delete, or hide already published posts, and other users will not be able to know that the content has been modified.

### Flickr

As opposed to Facebook and WhatsApp, Flickr does not show link previews, but it allows users to update their posts, replace uploaded images, hide already published posts, and edit their account name. All these activities can be performed without any indication of the editing activity being shown to other users.

### LinkedIn

LinkedIn permits users to share a redirect link and to update the link preview using *Post Inspector*.[^4] Users can also edit their posts; however, an edited post will be marked as edited.

Existing Weaknesses and Security Controls {#sec:weaknesses}
-----------------------------------------

Next, we summarize the OSN weaknesses related to the [*chameleon* ]{}attack, as well as the controls deployed by the various OSNs to mitigate potential misuse.
While the main focus of this article is the [*chameleon* ]{}attack facilitated by cached link previews, in this subsection we also discuss other types of the [*chameleon* ]{}attack that are successfully mitigated by major OSNs.

### Creating artificial timeline

Publishing posts retrospectively is the feature that is easiest to misuse. Such a feature helps an adversary create OSN accounts that look older and more reliable than they are. Luckily, no OSN but Facebook provides such a feature to its users. Although Facebook allows *editing a post’s publication date*, it mitigates possible misuse of this feature for creating artificial timelines by *presenting the original publication date* of the post.

### Changing content

Some OSNs provide their users with the ability to *edit previously published posts*. This feature facilitates all the misuse cases detailed in Section \[sec:misuse\] without any additional resources required from the attacker. Twitter and WhatsApp do not allow editing of previously published posts. Facebook, Instagram, and LinkedIn mitigate potential misuse by *presenting editing indication in published posts*. Facebook even *presents the edit history* of a post. Unfortunately, in contrast to Instagram and LinkedIn, Facebook does not *present the edit indication in shared posts*. We urge Facebook to correct this minor yet important omission.

### Changing Display

The primary weakness of the major OSNs (Twitter, Facebook, and LinkedIn) that facilitates the [*chameleon* ]{}attack discussed in this paper is the combination of three features provided by the OSNs. First, *publishing redirect links* allows attackers to change the navigation target of posted links without any indication of such a change. Second, OSNs *display a link preview* based on metadata provided by the website at the end of the chain of server redirects. This feature allows attackers to control the way link previews are displayed.
Finally, OSNs allow *updating link preview* following changes in the redirect chain of a previously posted link. Such an update is performed without displaying any indication that the post was updated. Currently, there are no controls that mitigate the misuse of these features. WhatsApp, Reddit, and LinkedIn display link previews of redirect links similarly to Facebook and Twitter, but they do not provide a feature to update the link previews. On the one hand, the only applicable misuse case for the [*chameleon* ]{}attack in these OSNs is *clickbait*. On the other hand, updating link previews is important for commercial brand management.

### Switching Content

Facebook, Instagram, Reddit, and Flickr allow users to temporarily *hide their posts*. This feature allows a user to prepare multiple sets of posts where each set exhibits a different agenda. Later, the adversary may display the appropriate set of posts and hide the rest. The major downsides of this technique, as far as the attacker is concerned, are: (1) the need to maintain the sets of posts ahead of time, similar to maintaining a set of regular profiles; (2) social capital acquired by one set of posts cannot be reused by the other sets, except for friends and followers. Overall, all the reviewed OSNs are well protected against timeline manipulation. The major OSNs, except Reddit and Flickr, are aware of the dangers of post editing and provide appropriate controls to avoid misuse. Due to the real-time nature of messaging in Twitter and WhatsApp, these OSNs can afford to disable the option of editing posts. The major OSNs, Facebook, Twitter, and LinkedIn, care about the business of their clients and thus explicitly provide features to update link previews. The *chameleon* attack exposed in this paper misuses this feature to manipulate the display of posts and profiles. Given that Reddit and Flickr allow editing the post content, only WhatsApp and Instagram are not susceptible to [*chameleon* ]{}attacks.
Instagram stores the posted images rather than links to external resources, an approach that may not scale and may not be suitable for all premium customers. WhatsApp stores the data locally and does not allow recollecting past messages if the receiver was not a member of the group when the message was posted. WhatsApp’s approach is not suitable for bloggers, commercial pages, etc. that would like to share their portfolio with every newcomer.

Additional Required Security Controls {#sec:mitigation}
-------------------------------------

The best way to mitigate the [*chameleon* ]{}attack is to disallow redirect links and to disable link preview updates in all OSNs. Nevertheless, we acknowledge that it is not possible to stop using external redirect links and short URLs. These features are very popular on social networks and important in brand management. First and foremost, an appropriate change indication should be displayed whenever the link preview cache is updated. Since on Facebook the cache is updated by the author of the original post, the update can naturally be displayed in the post’s edit history. Link preview cache updates should be treated similarly to the editing of posts. However, edit indications on posts will not help unless users are trained to pay attention to them. Facebook, and other OSNs, should make it crystal clear which version of a post a user liked or commented on. To minimize the impact of the [*chameleon* ]{}attack, likes, shares, and comments on a post should be associated with a specific version of the post within the edit history, by default. It is also important to let users know about subsequent modifications of the posts they liked, commented on, or shared. Users would then be able, for example, to delete their comments or to confirm them, moving the comments back from the history to the main view. In Twitter and LinkedIn, anyone can update the link preview.
The motivation for this feature is two-fold: (1) the business owner should be able to control the look and feel of his business card within the OSN regardless of the specific user who posted it; (2) link previews should always be up to date. It will be challenging to design appropriate mitigation for the [*chameleon* ]{}attack without partially giving up these objectives. We suggest notifying a Twitter (or LinkedIn) user who posted a link to an external site whenever the link preview is updated. The user would be able to delete the post or accept the link preview update at his sole discretion. By default, the link preview should remain unchanged. This approach may increase the number of notifications users receive, but with appropriate filters, it will not be a burden on the users. However, it may require maintaining copies of link previews for all re-posted links, which in turn would significantly increase storage requirements. Finally, OSNs should update their anomaly detection algorithms to take into account changes made to posts’ content and link previews, as well as the reputation of the servers along the redirection path of the posted links. It may take time to implement the measures described. Meanwhile, users should be aware that their *likes and comments are precious assets* that may be used against them if given out blindly. Next, we suggest a few guidelines that will help average OSN users detect [*chameleon* ]{}posts and profiles. Given a suspected profile, check the textual content of its posts. *Chameleon* profiles tend to publish generic textual descriptions so that they can easily switch agendas. The absence of opinionated textual descriptions on the topic of your mutual interest may indicate a potential [*chameleon* ]{}. A large number of ambiguous posts that can be interpreted in the context of the cover image or in the context of other posts in the timeline should increase the suspicion.
For example, “This is the best goalkeeper in the world!!!” without a name mentioned is ambiguous. Also, public services such as the one Facebook provides[^5] for viewing a post’s history can be useful for detecting a [*chameleon* ]{}post. Many redirect links within the profile timeline are also an indication of [*chameleon* ]{}capabilities. We do not encourage users to click links in the posts of suspicious profiles to check whether they are redirected! In Facebook and LinkedIn, a simple inspection of the URL can reveal whether a redirection is involved. Right-click the post and copy-paste the link address into any URL decoder. If the domain name within the copied URL matches the domain name within the link preview and you trust this domain, you are safe. Today, most links on Facebook are redirected through Facebook’s referral service. The URL you should look at follows the “u” parameter within the query string of l.facebook.com/l.php. If the domain name appearing after “u=” differs from the domain name within the link preview, the post’s author uses redirection services. Unfortunately, links posted on Twitter today are shortened, and the second hop of the redirection cannot be inspected by just copying the URL.

Group Infiltration Experiment {#sec:group_infiltration_experiments}
=============================

In this section, we present an experiment conducted on Facebook to assess the reaction of Facebook group moderators to [*chameleon* ]{}pages. In this experiment, we follow the execution flow of misuse case number 4, *evading censorship*, in Section \[sec:misuse\].

Experimental Setup {#sec:experimentalSetup}
------------------

In this experiment, four pairs of rival soccer and basketball teams were selected: Arsenal vs Chelsea, Manchester United vs Manchester City, Lakers vs Clippers, and Knicks vs Nets. We used sixteen Facebook pages: one regular and one [*chameleon* ]{}page for each sports team.
Regular pages post YouTube videos that support the respective sports team. Their names are explicitly related to the team they support, e.g., “Arsenal - The Best Team in the World.” *Chameleon* pages post redirect links that lead to videos supporting either the team or its rivals. Their names can be interpreted based on the context, e.g., “The Best Team in the World.” The icons and cover images of all pages reflect the team they (currently) support. Next, we selected twelve Facebook groups that support each one of the eight teams (96 Facebook groups in total) according to the following three criteria: (a) the group allows pages to join it, (b) the group is sufficiently large (at least 50 members), and (c) there was at least some activity within the group in the last month. We requested to join each group four times: (1) as a regular fan page, (2) as a regular rival page, (3) as a [*chameleon* ]{}page while supporting the rivals, and (4) as the same [*chameleon* ]{}page, now pretending to be a fan page. The requests were sent to each group in a random order of the pages. We used a balanced experiment design to test all permutations of pages, where the respective [*chameleon* ]{}page first requests to join the group as a rival page and afterward as a fan page. We allowed at least five days between consequent requests to join each group. A page can be *Approved* by the group admin or moderator (hereafter, admin). In this case, the page becomes a member of the group. While the admin has not yet decided, the request is *Pending*. The admin can *Decline* the request. In this case, the page is not a member of the group, but it is possible to request to join the group again. None of our pages were *Blocked* by the group admins; therefore, we ignore this status in the following results. Whenever a [*chameleon* ]{}page pretending to be a rival page is *Approved* by an admin, there is no point in trying to join the same group using the same page again.
We consider this status as *Auto Approved*. The first phase of the experiment started on July 20, 2019, and included only the Facebook groups supporting Chelsea and Arsenal. The relevant [*chameleon* ]{}pages changed the way they were displayed on Aug. 16. The second phase started on Sept. 5, 2019, and included the rest of the Facebook groups. The relevant [*chameleon* ]{}pages changed the way they were displayed on Sept. 23. The following results summarize both phases.

Results {#sec:results}
-------

During the experiment, 14 Facebook groups prevented any pages from joining the group. We speculate that the admins were not aware of the option of accepting pages as group members and updated the group settings after they saw our first requests. These 14 groups were *Disqualified* from the current experiment. Overall, there were 206 *Approved* requests, 87 *Declined*, and 35 *Pending*. Figure \[fig:chamVsPage\] presents the distribution of request statuses for the different types of pages.

![Request results by type of page[]{data-label="fig:chamVsPage"}](chamVsPage.JPG){width="3.5in"}

Some admins blindly approved requests; for example, 28 groups approved all requests. Other group admins meticulously checked the membership requests: thirteen groups *Declined* or ignored the rival pages and *Approved* pages that exhibited the correct agenda. Overall, **the reaction of admins to [*chameleon* ]{}pages is similar to their reaction to regular pages with the same agenda**. To check this hypothesis, we used a one-way ANOVA test to determine whether there is a significant difference between the four types of group membership requests. The test was conducted on the request status values at the end of the experiment (*Declined*, *Pending*, *Approved*). The results showed no statistically significant difference between the approval of [*chameleon* ]{}fan pages and regular fan pages (p-value = 0.33).
There was also no statistically significant difference between the approval of [*chameleon* ]{}rival pages and regular rival pages (p-value = 0.992). However, the difference between the approval of either regular or [*chameleon* ]{}rival pages and the approval of both types of fan pages is statistically significant, with p-values ranging from 0.00 to 0.003. These results indicate that the reaction of admins to [*chameleon* ]{}pages in our experiment is similar to their reaction to regular (non-*chameleon*) pages with a similar agenda. We conclude that **admins do not distinguish between regular and [*chameleon* ]{}pages.** This conclusion is reinforced by the observation that only two groups out of 82 *Declined* [*chameleon* ]{}fan pages while *Approving* regular fan pages. Seven groups approved [*chameleon* ]{}fan pages and rejected regular fan pages. The above results also indicate that, in general, **admins are selective toward the pages that they censor.** Next, we measure the selectivity of the group admins using a Likert scale [@joshi2015likert]. Relying on the conclusion that admins do not distinguish between regular and [*chameleon* ]{}pages, we treat them alike when measuring admins’ selectivity. Each time a group admin *Declined* a rival page or *Approved* a fan page, he/she received one point. Each time a fan page was *Declined* or a rival page was *Approved*, the selectivity was reduced by one point. A *Pending* request status added zero toward the selectivity score. For each group, we summed up the points to calculate its selectivity score. When the selectivity score is greater than three, we consider the group *selective*; otherwise, we consider it *not selective*. To explain the differences in groups’ selectivity, we first tested whether there is a difference between the number of members in selective and non-selective groups using t-tests.
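The selectivity scoring described above can be sketched in a few lines. This is an illustrative reconstruction of the scoring rules, not the authors' code; the function names and the representation of requests as (page kind, status) pairs are assumptions.

```python
def selectivity_score(requests):
    """Score a group's admission selectivity.

    `requests` is a list of (page_kind, status) pairs, where page_kind
    is "fan" or "rival" (chameleon and regular pages are treated alike)
    and status is "Approved", "Declined", or "Pending".
    """
    score = 0
    for kind, status in requests:
        if status == "Pending":
            continue  # a pending request adds zero to the score
        correct = (kind == "fan" and status == "Approved") or \
                  (kind == "rival" and status == "Declined")
        score += 1 if correct else -1
    return score

def is_selective(requests, threshold=3):
    # a group is considered selective when its score exceeds three
    return selectivity_score(requests) > threshold
```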
We found that smaller groups are more selective than larger ones (p-value = 0.00029). This result is quite intuitive. Smaller groups tend to check the identity of the users who ask to join the group, while large groups are less likely to examine the identity of the users who want to join. Figure \[fig:groupsScoreAvg\] presents the groups’ activity and size vs. their selectivity score. There is a weak negative correlation between the group’s selectivity score and the number of members (Pearson correlation = -0.187, p-value = 0.093).

Related Work
============

Content Spoofing and Spoofing Identification
--------------------------------------------

Content spoofing is one of the most prevalent vulnerabilities in web applications [@grossman2017whitehat]. It is also known as content injection or virtual defacement. This attack deceives users by presenting particular content on a website as legitimate, even though it comes from an external source [@lungu2010optimizing; @awang2013detecting; @karandel2016security]. Using this, an attacker can pass off new, fake, or modified content as legitimate. This malicious behavior can lead to malware exposure, financial fraud, or privacy violations, and can misrepresent an organization or individual [@hayati2009spammer; @benea2012anti]. The content spoofing attack leverages a code injection vulnerability where the user’s input is not sanitized correctly. Using this vulnerability, an attacker can supply new content to the web page, usually via a GET or POST parameter. There are two ways to conduct a content spoofing attack: an HTML injection, in which the attacker alters the content of a web page for malicious purposes by using HTML tags, or a text injection that manipulates the text data of a parameter [@hussain2019content]. Jitpukdebodin et al. [@jitpukdebodin2014novel] explored a vulnerability in WLAN communication.
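As a concrete illustration, the selectivity scoring described above can be sketched in plain Python. The scoring rules follow the text (+1 for a *Declined* rival or *Approved* fan request, -1 for the opposite, 0 for *Pending*, and a group is deemed selective when its score exceeds three); the request records below are hypothetical.

```python
# Illustrative sketch of the selectivity score described in the text.
# Request records are (page_type, status) pairs; the sample data is hypothetical.

def selectivity_score(requests):
    """+1 for each correctly handled request (rival Declined or fan Approved),
    -1 for each incorrectly handled one, 0 for Pending."""
    score = 0
    for page_type, status in requests:
        if status == "Pending":
            continue
        correct = (page_type == "rival" and status == "Declined") or \
                  (page_type == "fan" and status == "Approved")
        score += 1 if correct else -1
    return score

def is_selective(requests, threshold=3):
    # A group is considered selective when its score is greater than three.
    return selectivity_score(requests) > threshold

# Hypothetical group: admits both fan pages, declines both rival pages.
group = [("fan", "Approved"), ("fan", "Approved"),
         ("rival", "Declined"), ("rival", "Declined")]
print(selectivity_score(group))  # 4
print(is_selective(group))       # True
```

Note that the threshold of three is taken directly from the text; a group with mixed decisions (e.g., one correct and one incorrect) nets a score of zero and is classified as not selective.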
The proposed method crafts spoofed web content and sends it to a user before the genuine web content from a website is transmitted to the user. Hussain et al. [@hussain2019content] presented a new form of compounded SQL injection attack technique that uses SQL injection attack vectors to perform content spoofing attacks on a web application. There have been a few techniques for the detection of content spoofing attacks: Benea et al. [@benea2012anti] suggested preventing content spoofing by detecting phishing attacks using fingerprint similarity. Niemela and Kesti [@niemela2018detecting] detected unauthorized changes to a website using authorized content policy sets for each of a multiplicity of websites from the web operators. ![Average groups activity by selectivity score[]{data-label="fig:groupsScoreAvg"}](groupsScoreAvg.JPG){width="3.5in"}

Website Defacement
------------------

This is an attack that changes the visual appearance of websites [@kanti2011implementing; @borgolte2015meerkat; @romagna2017hacktivism]. Using this attack, an attacker can cause serious consequences to website owners, including interrupting website operations and damaging the owner’s reputation. More interestingly, attackers may boost their own reputation by promoting a certain ideological, religious, or political orientation [@romagna2017hacktivism; @maggi2018investigating]. In addition, web defacement is a significant threat to businesses since it can detrimentally affect the credibility and reputation of the organization [@borgolte2015meerkat; @medvet2007detection]. Most website defacement occurs when attackers manage to find a vulnerability in the web application and then inject a remote scripting file [@kanti2011implementing].
Several lines of research deal with the monitoring and detection of website defacement, with solutions that include signature-based [@gurjwar2013approach; @shani2010system] and anomaly-based detection [@borgolte2015meerkat; @davanzo2011anomaly; @hoang2018website]. The simplest method to detect website defacement is a checksum comparison. A checksum of the website’s content is calculated using hashing algorithms. The website is then monitored, and a new checksum is calculated and compared with the previous one [@kanti2011implementing; @gurjwar2013approach; @shani2010system]. This method is effective for static web pages but not for dynamic pages. Several techniques based on more complex algorithms have been proposed for website defacement detection. Kim et al. [@kim2006advanced] used 2-grams to build a profile from normal web pages for monitoring and detecting page defacement. Medvet et al. [@medvet2007detection] detected website defacement automatically based on genetic programming. The method builds an algorithm based on a sequence of readings of the remote page to be monitored, and on a sample set of attacks. Several techniques use machine learning-based methods for website defacement detection [@borgolte2015meerkat; @davanzo2011anomaly; @hoang2018website; @bartoli2006automatic]. Those studies build a profile of the monitored page automatically, based on machine learning techniques. Borgolte et al. [@borgolte2015meerkat] proposed the ’MEERKAT’ detection system that requires no prior knowledge about the website content or its structure, but only its URL. ’MEERKAT’ automatically learns high-level features from screenshots (image data) of defaced websites by stacked autoencoders and deep neural networks. Its drawback is that it requires extensive computational resources for image processing and recognition.
Recently, advanced research [@bergadano2019defacement] proposed an application of adversarial learning to defacement detection that uses a secret key to make the learning process unpredictable, so that the adversary is unable to replicate it and predict the classifier’s behavior.

Cloaking Attack and Identification
----------------------------------

Cloaking, also known as ’bait and switch’, is a common technique used to hide the true nature of a website by delivering different semantic content  [@wang2011cloak; @invernizzi2016cloak]. Wang et al. [@wang2011cloak] presented four cloaking types: repeat cloaking, which delivers different web content based on the visit times of visitors; user-agent cloaking, which delivers specific web content based on the visitor’s user-agent string; redirection cloaking, which moves users to another website using JavaScript; and IP cloaking, which delivers specific web content based on the visitor’s IP. Researchers have responded to the cloaking techniques with a variety of anti-cloaking techniques [@invernizzi2016cloak]. Basic techniques relied on a cross-view comparison technique [@wang2006detecting; @wang2007spam]: a page is classified as cloaking if the redirect chain deviates across fetches. Other approaches mainly target compromised webservers and identify clusters of URLs with trending keywords that are irrelevant to the other content hosted on the page [@john2011deseo]. Wang et al. [@wang2011cloak] identified cloaking in near real-time by examining the dynamics of cloaking over time. Invernizzi et al. [@invernizzi2016cloak] developed an anti-cloaking system that detects split-view content returned to two or more distinct browsing profiles by building a classifier that detects deviations in the content.

Manipulating Human Behavior
---------------------------

These days, cyber-attacks manipulate human weaknesses more than ever [@blunden2010manufactured].
Our susceptibility to deception, an essential human vulnerability, is a significant cause of security breaches. Attackers can exploit this human vulnerability by sending a specially crafted malicious email, tricking humans into clicking on malicious links and thus downloading malware (a.k.a. spear-phishing) [@goel2017got]. One of the main attack tools that exploit the human factor is social engineering, which is defined as the manipulation of the human aspect of technology using deception [@uebelacker2014social]. Social engineering plays on emotions such as fear, curiosity, excitement, and empathy, and exploits cognitive biases [@abraham2010overview]. The basic ’good’ human nature characteristics make people vulnerable to the techniques used by social engineers, as they activate various psychological vulnerabilities [@bezuidenhout2010social; @conteh2016cybersecurity; @conteh2016rise; @luo2011social]. The exploitation of the human factor has extensive use in advanced persistent threats (APTs). An APT attack involves sophisticated and well-resourced adversaries targeting specific information in high-profile companies and governments [@chen2014study]. In APT attacks, social engineering techniques are aimed at manipulating humans into delivering confidential information about a targeted organization or getting an employee to take a particular action [@paradise2017creation; @gulati2003threat; @bere2015advanced]. With regard to *chameleons*, they were previously executed in files during content-sniffing XSS attacks [@barth2009secure] but not on OSNs. Barth et al. discussed [*chameleon* ]{}documents, which are files conforming to multiple file formats (e.g., PostScript+HTML). The attack exploited the fact that browsers can parse such documents as HTML and execute any hidden script within.
In contrast to [*chameleon* ]{}documents, which are parsed differently by different tools without an adversarial trigger, our [*chameleon* ]{}posts are controlled by the attacker and are presented differently to the same users at different times. Recently, Stivala and Pellegrino [@Stivala2020deceptive] independently conducted a study of link previews. In their research, they analyzed the elements of preview links during the rendering process within 20 OSNs and demonstrated a misuse case by crafting benign-looking link previews that led to malicious web pages.

Conclusions and Future Work
===========================

This article discloses a weakness in an important feature provided by three major OSNs: Facebook, Twitter, and LinkedIn, namely *updating link previews without visible notifications while retaining social capital* (e.g., likes, comments, retweets, etc.). This weakness facilitates a new [*chameleon* ]{}attack, where the link preview update can be misused to damage the good name of users, avoid censorship, and perform additional OSN scams detailed in Section \[sec:misuse\]. Out of the seven reviewed OSNs, only Instagram and WhatsApp are resilient against most flavors of the [*chameleon* ]{}attack. We acknowledge the importance of the link preview update feature provided by the OSNs to support businesses that disseminate information through social networks, and we suggest several measures that should be applied by the OSNs to reduce the impact of [*chameleon* ]{}attacks. The most important measure is binding social capital to the version of a post for which it was explicitly provided. We also instruct OSN users on how to identify possible *chameleons*. We experimentally show that even the most meticulous Facebook group owners fail to identify [*chameleon* ]{}pages trying to infiltrate their groups. Thus, it is extremely important to raise the awareness of OSN users to this new kind of trickery.
We encourage researchers and practitioners to identify potential [*chameleon* ]{}profiles throughout the OSNs in the near future; develop and incorporate redirect reputation mechanisms into machine learning methods for identifying OSN misuse; and include the [*chameleon* ]{}attack in security awareness programs alongside phishing and related scams.

Ethical and Legal Considerations {#sec:ethics}
================================

Our goal is hardening OSNs against misuse while respecting the needs and privacy of OSN users. We followed a strict responsible full disclosure policy, as well as guidelines recommended by the Ben-Gurion University’s Human Subject Research Committee. In particular, we did not access or store any information about the profiles we contacted during the experiment. We only recorded the status of the requests to join their Facebook groups. The [*chameleon* ]{}pages used during the experiment were deleted at the end of the study. Owners of the contacted Facebook groups could decide whether or not to accept the request from our pages. Although we did not inform them about the study before the requests, they were provided with post-experiment written feedback regarding their participation in the trial. We contacted the relevant OSNs at least one month before the publication of the trial results and disclosure of the related weaknesses. No rules or agreements were violated in the process of this study. In particular, we used Facebook pages in the showcase and in the experiment rather than profiles to adhere to the Facebook End-User Licence Agreement.

Availability
============

*Chameleon* pages, posts, and tweets are publicly available. Links can be found in the GitHub repository.[^6] Source code is not provided to reduce misuse. The CWE entry and official responses of the major OSNs are also provided on the mentioned GitHub page.
[^1]: A demo [*chameleon* ]{}post is available at <https://www.facebook.com/permalink.php?story_fbid=101149887975595&id=101089594648291&__tn__=-R>

[^2]: Here and in the rest of this section, numbers in parentheses indicate the attack phases in the order they are performed in each misuse case.

[^3]: <https://cards-dev.twitter.com/validator>

[^4]: <https://www.linkedin.com/post-inspector>

[^5]: <https://developers.facebook.com/tools/debug/sharing/batch/>

[^6]: <https://github.com/aviade5/Chameleon-Attack/>
---
abstract: 'We present a bilateral teleoperation system for task learning and robot motion generation. Our system includes a bilateral teleoperation platform and deep learning software. The deep learning software relies on human demonstrations, performed on the bilateral teleoperation platform, to collect visual images and robotic encoder values. It leverages the datasets of images and robotic encoder information to learn the inter-modal correspondence between visual images and robot motion. In detail, the deep learning software uses a combination of Deep Convolutional Auto-Encoders (DCAE) over image regions and a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN) over robot motor angles to learn motion taught by human teleoperation. The learnt models are used to predict new motion trajectories for similar tasks. Experimental results show that our system has the adaptivity to generate motion for similar scooping tasks. Detailed analysis is performed based on failure cases of the experimental results. Some insights about the capabilities and limitations of the system are summarized.'
author:
- 'Hitoe Ochi, Weiwei Wan, Yajue Yang, Natsuki Yamanobe, Jia Pan, and Kensuke Harada [^1]'
bibliography:
- 'reference.bib'
title: Deep Learning Scooping Motion using Bilateral Teleoperations
---

Introduction
============

Common household tasks require robots to act intelligently and adaptively in various unstructured environments, which makes it difficult to model control policies with explicit objectives and reward functions. One popular solution [@argall2009survey][@yang2017repeatable] is to circumvent the difficulties by learning from demonstration (LfD). LfD allows robots to learn skills from successful demonstrations performed by manual teaching. In order to take advantage of LfD, we develop a system which enables human operators to demonstrate with ease and enables robots to learn dexterous manipulation skills from multi-modal sensed data.
Fig.\[teaser\] shows the system. The hardware platform of the system is a bilateral teleoperation system composed of two identical robot manipulators. The software of the system is a deep neural network made of a Deep Convolutional Auto-Encoder (DCAE) and a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN). The deep neural network leverages the datasets of images and robotic encoder information to learn the inter-modal correspondence between visual images and robot motion, and the proposed system uses the learnt model to generate new motion trajectories for similar tasks. ![The bilateral teleoperation system for task learning and robotic motion generation. The hardware platform of the system is a bilateral teleoperation system composed of two identical robot manipulators. The software of the system is a deep neural network made of a Deep Convolutional Auto-Encoder (DCAE) and a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN). The deep learning models are trained by human demonstration, and used to generate new motion trajectories for similar tasks.[]{data-label="teaser"}](imgs/teaser.png){width=".47\textwidth"} Using the system, we can conduct experiments on robots learning different basic behaviours using deep learning algorithms. In particular, we focus on a scooping task which is common in the home kitchen. We demonstrate that our system has the adaptivity to predict motion for a broad range of scooping tasks. Meanwhile, we examine the ability of the deep learning algorithms with target objects placed at different positions and prepared in different conditions. We carry out detailed analysis of the results and analyze the reasons that limit the ability of the proposed deep learning system. We reach the conclusion that although LfD using deep learning is applicable to a wide range of objects, it still requires a large amount of data to adapt to large varieties. Mixed learning and planning is suggested as a better approach.
The paper is organized as follows. Section 2 reviews related work. Section 3 presents the entire LfD system, including the demonstration method and the learning algorithm. Section 4 explains how the robot performs a task after learning. Section 5 describes and analyzes experiment setups and results. Section 6 draws conclusions and discusses possible methods to improve system performance.

Related Work
============

The learning methods we use are DCAE and RNN. This section reviews their origins and state-of-the-art applications. Auto-encoders were initially introduced by Rumelhart et al. [@rumelhart1985learning] to address the problem of unsupervised back propagation. The input data was used as the teacher data to minimize reconstruction errors [@olshausen1997sparse]. Auto-encoders were embedded into deep neural networks as DCAE to explore deep features in [@hinton2006fast][@salakhutdinov2009semantic] [@bengio2007greedy][@torralba2008small], etc. DCAE helps to learn multiple levels of representation of high-dimensional data. RNN is the feed-backward version of the conventional feed-forward neural network. It allows the output of one neuron at time $t_i$ to be the input of a neuron at time $t_{i+1}$. RNN may date back to the Hopfield network [@hopfield1982neural]. RNN is most suitable for learning and predicting sequential data. Some successful applications of RNN include handwriting recognition [@graves2009novel], speech recognition [@graves2013speech], visual tracking [@dequaire2017deep], etc. RNN has advantages over conventional mathematical models for sequential data like the Hidden Markov Model (HMM) [@wan2007hybrid] in that it uses scalable historical information and is applicable to sequences of varying time lengths.
A variation of RNN is the Multiple Timescale RNN (MTRNN), which is the multiple-timescale version of traditional RNN and was initially proposed by Yamashita and Tani [@yamashita2008emergence] to learn motion primitives and predict new actions by combining the learnt primitives. The MTRNN is composed of multiple Continuous Time Recurrent Neural Network (CTRNN) layers that are allowed to have different timescale activation speeds and thus enable scalability over time. Arie et al. [@arie2009creating] and Jeong et al. [@jeong2012neuro] are some other studies that used MTRNN to generate robot motion. RNN-based methods suffer from a vanishing gradient problem [@hochreiter2001gradient]. To overcome this problem, Hochreiter and Schmidhuber [@hochreiter1997long] developed the Long Short-Term Memory (LSTM) network. The advantage of LSTM is that it has an input gate, an output gate, and a forget gate which allow the cells to store and access information over long periods of time. The recurrent neural network used by us in this paper is RNN with LSTM units. It has been shown that RNNs with LSTM units are effective and scalable in long-range sequence learning [@greff2017lstm][^2]. By introducing into each LSTM unit a memory cell which can maintain its state over time, the LSTM network is able to overcome the vanishing gradient problem. LSTM is especially suitable for applications involving long-term dependencies [@karpathy2015visualizing]. Together with the DCAE, we build a system allowing the prediction of robot trajectories for diverse tasks using vision systems. The system uses bilateral teleoperation to collect data from human beings, like a general LfD system. It trains a DCAE as well as an LSTM-RNN model, and uses the models to generate robot motions to perform similar tasks. We performed experiments by especially focusing on a scooping task that is common in the home kitchen.
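As a minimal illustration of the LSTM gating mechanism described above (not the network used in this paper), a single LSTM step can be written in plain Python with scalar states; the weights here are arbitrary placeholders that a real network would learn.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step for scalar input and state, for clarity.
    W holds weights (w_*, u_*) and biases (b_*) for the input gate i,
    forget gate f, output gate o, and the candidate cell value g."""
    i = sigmoid(W["w_i"] * x + W["u_i"] * h_prev + W["b_i"])    # input gate
    f = sigmoid(W["w_f"] * x + W["u_f"] * h_prev + W["b_f"])    # forget gate
    o = sigmoid(W["w_o"] * x + W["u_o"] * h_prev + W["b_o"])    # output gate
    g = math.tanh(W["w_g"] * x + W["u_g"] * h_prev + W["b_g"])  # candidate
    c = f * c_prev + i * g   # memory cell: stores long-term state
    h = o * math.tanh(c)     # hidden state exposed to the next step/layer
    return h, c

# Arbitrary placeholder weights; these are illustrative, not learned values.
W = {k: 0.5 for k in ("w_i", "u_i", "b_i", "w_f", "u_f", "b_f",
                      "w_o", "u_o", "b_o", "w_g", "u_g", "b_g")}

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:  # a short input sequence
    h, c = lstm_step(x, h, c, W)
print(h, c)
```

The memory cell `c` is updated additively (`f * c_prev + i * g`), which is exactly the mechanism that lets gradients flow over long time spans and mitigates the vanishing gradient problem.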
Several previous studies like [@yang2017repeatable][@mayer2008system][@rahmatizadeh2017vision][@liu2017imitation] also studied learning to perform similar robotic tasks using deep learning models. Compared with them, we not only demonstrate the generalization of deep models in robotic task learning, but also carry out detailed analysis of the results and analyze the reasons that limit the ability of the proposed deep learning system. Readers are encouraged to refer to the experiments and analysis section for details.

The system for LfD using deep learning
======================================

The bilateral teleoperation platform
------------------------------------

Our LfD system utilizes bilateral teleoperation to allow human operators to adaptively control the robot based on force feedback. Conventionally, teleoperation was done in master-slave mode by using a joystick [@rahmatizadeh2017vision], a haptic device [@abi2016visual], or a virtual environment [@liu2017imitation] as the master device. Unlike the conventional methods, we use a robot manipulator as the master. As Figure 2 shows, our system is composed of two identical robot systems, each comprising a Universal Robot 1 arm at the same joint configuration and a force-torque sensor attached to the arm’s end-effector. The human operator drags the master at its end-effector, and the controller calculates 6-dimensional Cartesian velocity commands for the robots to follow the human operator’s guidance. This dual-arm bilateral teleoperation system provides similar operation spaces for the master and slave, which makes it more convenient for the human operator to drag the master in a natural manner. In addition, we install a Microsoft Kinect 1 above the slave manipulator to capture depth and RGB images of the environment. The bilateral teleoperation platform provides the human operator with a virtual sense of the contact force to improve LfD [@hokayem2006bilateral].
While the human operator works on the master manipulator, the slave robot senses the contact force with a force-torque sensor installed at its wrist. A controller computes robot motions considering both the force exerted by the human and the force feedback from the force sensor. Specifically, when the slave does not contact the environment, both the master and slave move following the human motion. When the slave has contact feedback, the master and slave react considering the impedance from the force feedback. The human operator, meanwhile, feels the impedance from the device he or she is working on (namely the master device) and reacts accordingly.

The deep learning software
--------------------------

LSTM-RNN supports both input and output sequences with variable length, which means that one network may be suitable for varied tasks with different time lengths. Fig.\[lstmrnn\] illustrates an LSTM recurrent network which outputs predictions. ![LSTM-RNN: The subscripts in $Input_i$ and $Predict_i$ indicate the time of inputs and for which predictions are made. An LSTM unit receives both current input data and hidden states provided by previous LSTM units as inputs to predict the next step.[]{data-label="lstmrnn"}](imgs/lstmrnn.png){width=".47\textwidth"} The data of our LfD system may include an image of the environment, force/torque data sensed by an F/T sensor installed at the slave’s end-effector, robot joint positions, etc. These data have high dimensionality, which makes direct computation infeasible. To avoid the curse of dimensionality, we use DCAE to represent the data with auto-selected features. DCAE encodes the input data with an encoder and reconstructs the data from the encoded values with a decoder. Both the encoder and decoder can be multi-layer convolutional networks, as shown in Fig.\[dcae\]. DCAE is able to properly encode complex data through reconstruction, extracting data features and reducing data dimension.
![DCAE encodes the input data with an encoder and reconstructs the data from the encoded values with a decoder. The output of DCAE (the intermediate layer) is the extracted data features.[]{data-label="dcae"}](imgs/dcae.png){width=".35\textwidth"} The software of the LfD system is a deep learning architecture composed of DCAE and LSTM-RNN. The LSTM-RNN model is fed with image features computed by the encoder and other data such as joint positions, and predicts a mixture of the next motion and the surrounding situation. The combination of DCAE and LSTM-RNN is shown in Fig.\[arch\]. ![The entire learning architecture. OD means Other Data. It could be joint positions, force/torque values, etc. Although drawn as a single box, the model may contain multiple LSTM layers.[]{data-label="arch"}](imgs/arch.png){width=".49\textwidth"}

Learning and predicting motions
===============================

Data collection and training
----------------------------

The data used to train DCAE and LSTM-RNN is collected by bilateral teleoperation. The components of the bilateral teleoperation platform and the control diagram of LfD using the bilateral teleoperation platform are shown in Fig.\[bicontrol\]. A human operator controls the master arm and performs a given task while considering force feedback ($F_h\bigoplus F_e$ in the figure) from the slave side. As the human operator moves the master arm, the Kinect camera installed between the two arms takes a sequence of snapshots as the training images for DCAE. The motor encoders installed at each joint of the robot record a sequence of 6D joint angles as the training data for LSTM-RNN. The snapshots and changing joint angles are shown in Fig.\[sequences\]. Here, the left part shows three sequences of snapshots. Each sequence is taken with the bowl placed at a different position (denoted by $pos1$, $pos2$, and $pos3$ in the figure). The right part shows a sequence of changing joint angles taught by the human operator.
![Bilateral Teleoperation Diagram.[]{data-label="bicontrol"}](imgs/bicontrol.png){width=".49\textwidth"} ![The data used to train DCAE and LSTM-RNN. The left part shows the snapshot sequences taken by the Kinect camera. They are used to train DCAE. The right part shows the changing joint angles taught by human operators. They are used to further train LSTM-RNN.[]{data-label="sequences"}](imgs/sequences.png){width=".49\textwidth"}

Generating robot motion
-----------------------

After training the DCAE and the LSTM-RNN, the models are used online to generate robot motions for similar tasks. The trajectory generation involves a real-time loop of three phases: (1) sensor data collection, (2) motion prediction, and (3) execution. At each iteration, the current environment information and robot state are collected, processed, and then appended to the sequence of previous data. The current robot state is fed to the pre-trained LSTM-RNN model to predict the next motion that the manipulator uses to take action. In order to ensure computational efficiency, we keep each input sequence in a queue with fixed length. The process of training and prediction is shown in Fig.\[teaser\]. Using the pre-trained DCAE and LSTM-RNN, the system is able to generate motion sequences to perform similar tasks.

Experiments and analysis
========================

We use the developed system to learn scooping tasks. The goal of this task is to scoop materials out of a bowl placed on a table in front of the robot (see Fig.\[teaser\]). Two different bowls filled with different amounts of barley are used in the experiments: a yellow bowl and a green bowl. The volumes of barley are set to “high” and “low” for variation. In total there are 2$\times$2=4 combinations, namely {“yellow” bowl-“low” barley, “yellow” bowl-“high” barley, “green” bowl-“low” barley, and “green” bowl-“high” barley}. Fig.\[expbowls\](a) shows the barley, the bowls, and the different volume settings.
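For concreteness, the three-phase generation loop above (sense, predict, execute, with the input sequence kept in a fixed-length queue) might be sketched as follows. `read_sensors`, `predict_next`, and the queue length are hypothetical stand-ins for the camera/encoder readout, the pre-trained DCAE+LSTM-RNN model, and a value the paper does not state.

```python
from collections import deque

SEQ_LEN = 10  # assumed fixed queue length; the paper does not give the value

def read_sensors(t):
    # Hypothetical stand-in: returns (image_features, joint_angles) at step t.
    return ([0.0] * 10, [0.0] * 6)

def predict_next(history):
    # Hypothetical stand-in for the pre-trained DCAE + LSTM-RNN model.
    _, joints = history[-1]
    return joints  # e.g. hold the current joint angles

def generate_motion(steps):
    """Real-time loop: (1) collect sensor data, (2) predict, (3) execute."""
    history = deque(maxlen=SEQ_LEN)  # fixed length keeps computation bounded
    executed = []
    for t in range(steps):
        history.append(read_sensors(t))  # phase 1: sense
        command = predict_next(history)  # phase 2: predict
        executed.append(command)         # phase 3: execute (stubbed here)
    return history, executed

history, executed = generate_motion(25)
print(len(history), len(executed))  # 10 25
```

Using `deque(maxlen=...)` means the oldest sensor reading is discarded automatically once the queue is full, which matches the fixed-length input sequence used for computational efficiency.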
During the experiments, a human operator performs teleoperated scooping as he/she senses the collision between the spoon and the bowl after the spoon is inserted into the materials. Although used for control, the F/T data is not fed into the learning system, which means the control policy is learned only from robot states and 2D images. ![(a) Two different bowls filled with different amounts of barley are used in the experiments. In total, there are 2$\times$2=4 combinations. (b) One sequence of scooping motion.[]{data-label="expbowls"}](imgs/expbowls.png){width=".35\textwidth"} The images used to train DCAE are cropped by a 130$\times$130 window to lower the computational cost. The DCAE has 2 convolutional layers with filter sizes of 32 and 16, followed by 2 fully-connected layers of sizes 100 and 10. The decoder has exactly the same structure. The LeakyReLU activation function is used for all layers. Dropout is applied afterwards to prevent overfitting. The computation is performed on a Dell T5810 workstation with an Nvidia GTX980 GPU.

Experiment 1: Same position with RGB/Depth images
-------------------------------------------------

In the first group of experiments, we place the bowl at the same position, and test different bowls with different amounts of contents. In all, we collect 20 sequences of data, with 5 for each bowl-content combination. Fig.\[expbowls\](b) shows one sequence of the scooping motion. We use 19 of the 20 sequences of data to train DCAE and LSTM-RNN and use the remaining one to test the performance. The parameters of DCAE are as follows: optimization function: Adam; dropout rate: 0.4; batch size: 32; epochs: 50. We use both RGB images and depth images to train DCAE. The pre-trained models are named RGB-DCAE and Depth-DCAE, respectively. The parameters of LSTM-RNN are: optimization function: Adam; batch size: 32; iterations: 3000. The results of DCAE are shown in Fig.\[dcaeresults\](a).
The trained model is able to reconstruct the training data with high precision. Readers may compare the first and second rows of Fig.\[dcaeresults\](a.1) for details. Meanwhile, the trained model is able to reconstruct the test data with satisfying performance. Readers may compare the first and second rows of Fig.\[dcaeresults\](a.2) to see the difference. Although there is some noise in the second rows, it is acceptable. ![image](imgs/dcaeresults.png){width=".99\textwidth"} The results of LSTM-RNN show that the robot is able to perform scooping for similar tasks given the RGB-DCAE. However, it cannot precisely distinguish between “high” and “low” volumes. The results of LSTM-RNN using Depth-DCAE are unstable. We failed to spot a successful execution. The reason the depth data is unstable is probably the low resolution of Kinect’s depth sensor. The vision system cannot tell whether the spoon is in a pre-scooping or post-scooping state, which makes it hard for the robot to predict the next motion.

Experiment 2: Different positions
---------------------------------

In the second group of experiments, we place the bowl at different positions to further examine the generalization ability of the trained models. Similar to experiment 1, we use bowls with two different colors (“yellow” and “green”) and two different volumes of contents (“high” and “low”). The bowls are placed at 7 different positions. At each position, we collect 20 sequences of data, with 5 for each bowl-barley combination. In total, we collect 140 sequences of data. 139 of the 140 sequences are used to train DCAE and LSTM-RNN. The remaining 1 sequence is used for testing. The parameters of DCAE and LSTM-RNN are the same as in experiment 1. The results of DCAE are shown in Fig.\[dcaeresults\](b). The trained model is able to reconstruct the training data with high precision. Readers may compare the first and second rows of Fig.\[dcaeresults\](b.1) for details.
In contrast, the reconstructed images show significant differences for the test data: the model failed to reconstruct the test data. Readers may compare the first and second rows of Fig.\[dcaeresults\](b.2) to see the difference. Especially in the first column of Fig.\[dcaeresults\](b.2), the bowl is wrongly considered to be at a totally different position. The LSTM-RNN model is not able to generate scooping motion for either the training data or the test data. The motion changes randomly from time to time and does not follow any taught sequence. The reason is probably the poor reconstruction performance of DCAE: the system failed to correctly find the positions of the bowls using the encoded features. Based on this analysis, we increase the training data of DCAE in Experiment 3 to improve its reconstruction. Experiment 3: Increasing the training data of DCAE -------------------------------------------------- The third group of experiments has exactly the same scenario settings and parameter settings as Experiment 2, except that we use planning algorithms to generate scooping motion and collect more scooping images. The new scooping images are collected following the workflow shown in Fig.\[sample\]. We divide the workspace into around 100 grids, place the bowl at these positions, and sample arm positions and orientations at each grid. In total, we additionally generate 100$\times$45$\times$3=13500 (12726 exactly) extra training images to train DCAE. Here, “100” indicates the 100 grid positions. “45” and “3” indicate the 45 arm positions and 3 arm rotation angles sampled at each grid. ![Increase the training data of DCAE by automatically generating motions across a 10$\times$10 grid. In all, 100$\times$45$\times$3=13500 extra training images were generated. Here, “100” indicates the 100 grid positions.
At each grid, 45 arm positions and 3 arm rotation angles are sampled.[]{data-label="sample"}](imgs/sample.png){width=".43\textwidth"} The DCAE model is trained with the 140 sequences of data from experiment 2 (that is, 140$\times$120=16800 images, 17714 exactly), together with the 13500 extra images collected using the planner. The parameters of DCAE are exactly the same as in Experiments 1 and 2. The results of DCAE are shown in Fig.\[dcaeresults2\]. Compared with experiment 2, DCAE is more stable. It is able to reconstruct both the training images and the test images with satisfying performance, although the reconstructed spoon positions in the sixth and seventh columns of Fig.\[dcaeresults2\](b) have relatively large offsets from the original images. ![image](imgs/dcaeresults2.png){width=".99\textwidth"} The trained DCAE model is used together with LSTM-RNN to predict motions. The LSTM-RNN model is trained using different data to compare the performance. The results are shown in Table \[conf0\]. Here, $A1$-$A7r\_s1$ indicates the data used to train DCAE and LSTM-RNN. The left side of “\_” shows the data used to train DCAE. $A1$-$A7$ means all the sequences collected at the seven bowl positions in experiment 2 are used. $r$ means the additional data collected in experiment 3 is used. The right side of “\_” shows the data used to train LSTM-RNN. $s1$ means only the sequences at bowl position $s1$ are used to train LSTM-RNN. $s1s4$ means the sequences at both bowl position $s1$ and position $s4$ are used to train LSTM-RNN. The results show that the DCAE trained in experiment 3 is able to predict motion for bowls at the same positions. For example, (row position $s1$, column $A1$-$A7r\_s1$) is $\bigcirc$, and (row position $s1$, column $A1$-$A7r\_s1s4$) is also $\bigcirc$. The result, however, is unstable. For example, (row position $s2$, column $A1$-$A7r\_s1s4$) is $\times$, and (row position $s4$, column $A1$-$A7r\_s4$) is also $\times$.
The last three columns of the table show previous results: the $A1\_s1$ and $A4\_s4$ columns correspond to the results of experiment 1, and the $A1A4\_s1s4$ column corresponds to the result of experiment 2. A1-A7r\_s1 A1-A7r\_s1s4 A1-A7r\_s4 A1\_s1 A4\_s4 A1A4\_s1s4 --------------- ------------ -------------- ------------ ------------ ------------ ------------ position $s1$ $\bigcirc$ $\bigcirc$ - $\bigcirc$ - $\times$ position $s4$ - $\times$ $\times$ - $\bigcirc$ $\times$ Results of the three experiments show that the proposed model heavily depends on the training data. It can predict motions for different objects at the same positions, but is not able to adapt to objects at different positions. The small amount of training data is an important problem impairing the generalization of the trained models to different bowl positions: a small amount of training data leads to poor results, whereas a large amount of training data yields good predictions. Conclusions and future work =========================== This paper presented a bilateral teleoperation system for task learning and robotic motion generation. It trained DCAE and LSTM-RNN to learn scooping motion using data collected by human demonstration on the bilateral teleoperation system. The results showed that the data collected using the bilateral teleoperation system was suitable for training deep learning models. The trained model was able to predict scooping motion for different objects at the same positions, showing some ability of generalization. The results also showed that the amount of data is an important issue that affects the training of good deep learning models. One way to improve performance is to increase the training data. However, increasing training data is not trivial for LfD applications since they require human operators to repeatedly work on teaching devices. Another method is to use a mixed learning and planning model.
Practitioners may use planning to collect data and use learning to generalize the planned results. This mixed method is our future direction. Acknowledgment {#acknowledgment .unnumbered} ============== The paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). [^1]: Hitoe Ochi, Weiwei Wan, and Kensuke Harada are with the Graduate School of Engineering Science, Osaka University, Japan. Natsuki Yamanobe, Weiwei Wan, and Kensuke Harada are also affiliated with the National Institute of Advanced Industrial Science and Technology (AIST), Japan. Yajue Yang and Jia Pan are with the City University of Hong Kong, China. E-mail: [wan@sys.es.osaka-u.ac.jp]{} [^2]: There is some work, such as [@yu2017continuous], that used MTRNN with LSTM units to enable multiple-timescale scalability.
--- author: - 'A. Gallenne' - 'A. Mérand' - 'P. Kervella' - 'O. Chesneau' - 'J. Breitfelder' - 'W. Gieren' bibliography: - './bibliographie.bib' date: 'Received July 11, 2013; accepted August 30, 2013' subtitle: 'IV. T Monocerotis and X Sagittarii from mid-infrared interferometry with VLTI/MIDI[^1]' title: Extended envelopes around Galactic Cepheids --- [We study the close environment of nearby Cepheids using high spatial resolution observations in the mid-infrared with the VLTI/MIDI instrument, a two-beam interferometric recombiner.]{} [We obtained spectra and visibilities for the classical Cepheids X Sgr and T Mon. We fitted the MIDI measurements, supplemented by $B, V, J, H, K$ literature photometry, with the numerical radiative transfer code `DUSTY` to determine the dust shell parameters. We used a typical dust composition for circumstellar environments.]{} [We detect an extended dusty environment in the spectra and visibilities of both stars, although T Mon might suffer from thermal background contamination. We attribute this to the presence of a circumstellar envelope (CSE) surrounding the Cepheids. The CSE is optically thin for X Sgr ($\tau_\mathrm{0.55\mathrm{\mu m}} = 0.008$), while it appears to be thicker for T Mon ($\tau_\mathrm{0.55\mathrm{\mu m}} = 0.15$). The envelopes are located at about 15–20 stellar radii. Following our previous work, we derived a likely period-excess relation in the VISIR PAH1 filter, $ f_\mathrm{8.6\,\mu m}$\[%\]$ = 0.81(\pm0.04)P$\[day\]. We argue that the impact of CSEs on the mid-IR period–luminosity (P–L) relation cannot be negligible because they can bias the Cepheid brightness by up to about 30%. For the $K$-band P–L relation, the CSE contribution seems to be lower ($< 5$%), but the sample needs to be enlarged to firmly conclude that the impact of the CSEs is negligible in this band.]{} Introduction ============ A significant fraction of classical Cepheids exhibits an infrared excess, which is probably caused by a circumstellar envelope (CSE).
The discovery of the first CSE around the Cepheid $\ell$ Car made use of near- and mid-infrared interferometric observations [@Kervella_2006_03_0]. Similar detections were subsequently reported for other Cepheids [@Merand_2007_08_0; @Merand_2006_07_0; @Barmby_2011_11_0; @Gallenne_2011_11_0], leading to the hypothesis that all Cepheids might be surrounded by a CSE. These envelopes are interesting in several respects. Firstly, they might be related to past or ongoing stellar mass loss and might be used to trace the Cepheid evolution history. Secondly, their presence might bias distance determinations made with Baade-Wesselink methods and the calibration of the IR period–luminosity (P–L) relation. Our previous works [@Gallenne_2011_11_0; @Merand_2007_08_0; @Merand_2006_07_0; @Kervella_2006_03_0] showed that these CSEs have an angular size of a few stellar radii and a flux contribution to the photosphere ranging from a few percent to several tens of percent. While in the near-IR the CSE flux emission might be negligible compared with the photospheric continuum, this is not the case in the mid- and far-IR, where the CSE emission dominates [@Gallenne_2011_11_0; @Kervella_2009_05_0]. Interestingly, a correlation starts to appear between the pulsation period and the CSE brightness in the near- and mid-IR bands: long-period Cepheids seem to show relatively brighter CSEs than short-period Cepheids, indicating that the mass-loss mechanism could be linked to stellar pulsation [@Gallenne_2011_11_0; @Merand_2007_08_0]. Cepheids with long periods have higher masses and larger radii; therefore, if we assume that the CSE IR brightness is an indicator of the mass-loss rate, this would mean that heavier stars experience higher mass-loss rates. This behavior could be explained by the stronger velocity fields in longer-period Cepheids and shock waves at certain pulsation phases [@Nardetto_2008_10_0; @Nardetto_2006_07_0].
Studying this correlation between the pulsation period and the IR excess is vital for calibrating relations between the Cepheids’ fundamental parameters and their pulsation periods. If CSEs substantially influence the observational estimation of these fundamental parameters (luminosity, mass, radius, etc.), such a correlation will lead to a biased calibration. It is therefore essential to continue studying and characterizing these CSEs and to increase the statistical sample to confirm their properties. We present new spatially resolved VLTI/MIDI interferometric observations of the classical Cepheids X Sgr (HD 161592, $P = 7.01$ days) and T Mon (HD 44990, $P = 27.02$ days). The paper is organized as follows. Observations and data reduction procedures are presented in Sect. \[section\_\_observation\]. The data modeling and results are reported in Sect. \[section\_\_cse\_modeling\]. In Sect. \[section\_\_period\_excess\_relation\] we address the possible relation between the pulsation period and the IR excess. We then discuss our results in Sect. \[section\_\_discussion\] and conclude in Sect. \[section\_\_conclusion\]. VLTI/MIDI observations {#section__observation} ====================== Observations ------------ The observations were carried out in 2008 and 2009 with the VLT Unit Telescopes and the MIDI instrument [@Leinert_2003__0]. MIDI combines the coherent light coming from two telescopes in the $N$ band ($\lambda = 8-13\,\mu$m) and provides the spectrum and spectrally dispersed fringes with two possible spectral resolutions ($R = \Delta \lambda / \lambda = 30, 230$). For the observations presented here, we used the prism, which provides the lowest spectral resolution. During the observations, the secondary mirrors of the two Unit Telescopes (UT1-UT4) were chopped with a frequency of 2 Hz to properly sample the sky background.
MIDI has two photometric calibration modes: HIGH\_SENS, in which the flux is measured separately after the interferometric observations, and SCI\_PHOT, in which the photometry is measured simultaneously with the interference fringes. The observations reported here were obtained in HIGH\_SENS mode because of the relatively low thermal IR brightness of our Cepheids. To remove the instrumental and atmospheric signatures, calibrators of known intrinsic visibility were observed immediately before or after the Cepheid. They were chosen from the @Cohen_1999_04_0 catalog and are almost unresolved at our projected baselines ($V > 95$%, except for HD 169916, for which $V = 87$%). The systematic uncertainty associated with their a priori angular diameter error bars is negligible compared with the typical precision of the MIDI visibilities (10–15%). The uniform-disk angular diameters for the calibrators, as well as the corresponding IRAS 12$\mu$m fluxes and spectral types, are given in Table \[table\_\_calibrators\]. The log of the MIDI observations is given in Table \[table\_\_journal\]. Observations \#1, \#2, and \#5–\#10 were not used because of low interferometric and/or photometric flux, possibly due to a temporary burst of very bad seeing or thin cirrus clouds. -------- ---------------------- ----------------- -------------------- ---------- HD $\theta_\mathrm{UD}$ $f_\mathrm{W3}$ $f_\mathrm{Cohen}$ Sp. Type (mas) (Jy) (Jy) 49293 $1.91 \pm 0.02$ $4.3 \pm 0.1$ $4.7 \pm 0.1$ K0IIIa 48433 $2.07 \pm 0.03$ $6.5 \pm 0.1$ $5.5 \pm 0.1$ K0.5III 168592 $2.66 \pm 0.05$ $8.3 \pm 0.1$ $7.4 \pm 0.1$ K4-5III 169916 $4.24 \pm 0.05$ $25.9 \pm 0.4$ $21.1 \pm 0.1$ K1IIIb -------- ---------------------- ----------------- -------------------- ---------- : Properties of our calibrator stars.
\[table\_\_calibrators\] ---- ----------- -------- ---------- ---------------- ----------- ------ \# MJD $\phi$ Target $B_\mathrm{p}$ PA AM (m) ($\degr$) 1 54 813.26 0.12 T Mon 129.6 63.7 1.20 2 54 813.27 0.12 T Mon 130.0 63.5 1.21 3 54 813.27 130.0 63.4 1.15 4 54 813.29 0.12 T Mon 130.0 62.6 1.26 5 54 813.30 0.12 T Mon 129.6 62.2 1.29 6 54 813.31 130.0 61.6 1.40 7 54 842.10 0.18 T Mon 108.7 62.6 1.26 8 54 842.11 HD 49293 110.1 59.7 1.21 9 54 842.13 0.18 T Mon 118.8 64.4 1.19 10 54 842.14 HD 49293 120.8 62.1 1.14 11 54 900.07 0.33 T Mon 128.5 61.4 1.33 12 54 900.09 HD 48433 128.8 59.9 1.49 13 54 905.39 126.6 39.3 1.18 14 54 905.40 0.76 X Sgr 126.4 49.2 1.06 15 54 905.41 122.6 45.7 1.11 16 54 905.43 0.76 X Sgr 129.4 55.3 1.02 ---- ----------- -------- ---------- ---------------- ----------- ------ : Log of the observations. \[table\_\_journal\] Data reduction -------------- To reduce these data we used two different reduction packages, MIA and EWS[^2]. MIA, developed at the Max-Planck-Institut f$\mathrm{\ddot{u}}$r Astronomie, implements an incoherent method in which the power spectral density function of each scan is integrated to obtain the squared visibility amplitudes, which are then integrated over time. EWS, developed at the Leiden Observatory, implements a coherent analysis that first aligns the interferograms before co-adding them, which results in a better signal-to-noise ratio of the visibility amplitudes. The data reduction results obtained with the MIA and EWS packages agree well within the uncertainties. The choice of the detector mask for extracting the source spectrum and estimating the background can be critical for the data quality. The latest version of the software uses adaptive masks, in which the shifts in position and the width of the mask can be adjusted by fitting the mask to each target.
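The contrast between MIA-style incoherent averaging and EWS-style coherent (phase-aligned) co-adding can be illustrated with a toy simulation. This is only a schematic sketch under idealized noiseless assumptions, not the actual MIA/EWS processing: each "scan" is reduced to a single complex fringe amplitude with a random atmospheric phase.

```python
import numpy as np

rng = np.random.default_rng(1)
true_amp = 0.8                      # fringe ("visibility") amplitude per scan
n_scans = 200

# Each scan sees the fringe shifted by an unknown atmospheric phase.
phases = rng.uniform(0, 2 * np.pi, n_scans)
scans = true_amp * np.exp(1j * phases)

# Naive complex average: the random phases wash out the amplitude.
naive = abs(scans.mean())

# Coherent (EWS-like): align the phases first, then co-add.
aligned = scans * np.exp(-1j * np.angle(scans))
coherent = abs(aligned.mean())

# Incoherent (MIA-like): average the power, then take the square root.
incoherent = np.sqrt((abs(scans) ** 2).mean())

print(round(coherent, 3), round(incoherent, 3))
```

In this noiseless toy both estimators recover the amplitude while the naive average does not; the signal-to-noise advantage of the coherent approach only appears once detector and photon noise are modeled, which is beyond this sketch.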
To achieve the best data quality, we first used MIA to fit a specific mask for each target (also allowing a visual check of the data and the mask), and then applied it in the EWS reduction. Photometric templates from @Cohen_1999_04_0 were employed to perform an absolute calibration of the flux density. We finally averaged the data for a given target. This is justified because the MIDI uncertainties are on the order of 7-15% [@Chesneau_2007_10_0], and the projected baseline and PA are not significantly different for separate observing dates. The uncertainties of the visibilities are mainly dominated by the photometric calibration errors, which are common to all spectral channels; we accordingly chose the standard deviation over a $1\,\mu$m range as error bars. Flux and visibility fluctuations between datasets {#subsection__flux_and_visibility_fluctuations_between_datasets} ------------------------------------------------- MIDI is strongly sensitive to the atmospheric conditions and can yield misestimates of the thermal flux density and visibility. This can be even worse for datasets combined from different observing nights, as for T Mon in our case. Another source of variance between different datasets can arise from the calibration process, that is, from a poor absolute flux and visibility calibration. In our case, each Cepheid observation was calibrated with a different calibrator (i.e., \#3-4 and \#11-12 for T Mon and \#13-14 and \#15-16 for X Sgr), which enabled us to check the calibrated data. To quantify the fluctuations, we estimated the spectral relative variation of the flux density and visibility, that is, the ratio of the standard deviation to the mean value at each wavelength between two different calibrated observations. For X Sgr, the average variation (over all $\lambda$) is lower than 5% for the spectral flux and lower than 1.5% for the visibility.
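The fluctuation metric described above (per-wavelength ratio of standard deviation to mean between two calibrated observations, averaged over $\lambda$) can be written compactly; the two spectra below are made-up placeholders, not MIDI data.

```python
import numpy as np

def spectral_relative_variation(obs_a, obs_b):
    """Per-wavelength std/mean between two calibrated observations,
    averaged over all wavelength channels, as a percentage."""
    stack = np.vstack([obs_a, obs_b])          # shape (2, n_lambda)
    rel = stack.std(axis=0) / stack.mean(axis=0)
    return 100 * rel.mean()

# Made-up example: two flux spectra differing by a calibration offset.
wl = np.linspace(8.0, 12.0, 50)                # microns, for context only
flux_a = 10.0 + 0.5 * wl
flux_b = flux_a * 1.04                         # 4% offset between datasets
print(round(spectral_relative_variation(flux_a, flux_b), 2))  # 1.96
```

A uniform 4% offset yields a ~2% relative variation, of the same order as the X Sgr flux figure quoted above.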
This is slightly higher for T Mon because the data were acquired on separate nights; we measured an average variation lower than 8% for the spectral flux and lower than 4% for the visibility. Circumstellar envelope modeling {#section__cse_modeling} =============================== Visibility and spectral energy distribution {#subsection__visibility_and_sed} ------------------------------------------- The averaged calibrated visibility and spectral energy distribution (SED) are shown with blue dots in Figs. \[graph\_\_visibility\_xsgr\] and \[graph\_\_visibility\_tmon\]. The quality of the data in the window $9.4 < \lambda < 10\,\mu$m deteriorates significantly because of the water and ozone absorption in the Earth’s atmosphere. Wavelengths longer than $12\,\mathrm{\mu m}$ were not used because of the low sensitivity. We therefore only used the spectra outside these regions. The photosphere of the stars is considered to be unresolved by the interferometer ($V > 98\,\%$), therefore the visibility profile is expected to be equal to unity at all wavelengths. However, we noticed a decreasing profile for both stars. This behavior is typical of emission from a circumstellar envelope (or disk), where the size of the emitting region grows with wavelength. This effect can be interpreted as emission at longer wavelengths coming from cooler material that is located at larger distances from the Cepheid than the warmer material emitting at shorter wavelengths. @Kervella_2009_05_0 previously observed the same trend for $\ell$ Car and RS Pup. Assuming that the CSE is resolved by MIDI, the flux contribution of the dust shell is estimated to be about 50% at $10.5\,\mathrm{\mu m}$ for T Mon and 7% for X Sgr. It is worth mentioning that the excess is significantly higher for the longer-period Cepheid, T Mon, adding evidence for the correlation between the pulsation period and the CSE brightness suspected previously [@Gallenne_2011_11_0; @Merand_2007_08_0].
The CSE is also detected in the SED, with a contribution progressively increasing with wavelength. Compared with Kurucz atmosphere models [@Castelli_2003__0 solid black curve in Figs. \[graph\_\_visibility\_xsgr\] and \[graph\_\_visibility\_tmon\]], we notice that the CSE contribution becomes significant around $8\,\mathrm{\mu m}$ for X Sgr, while for T Mon it seems to start at shorter wavelengths. The Kurucz models were interpolated at $T_\mathrm{eff} = 5900$ K, $\log g = 2$, and $V_\mathrm{t} = 4\,\mathrm{km~s^{-1}}$ for X Sgr [@Usenko_2012__0]. For T Mon, observed at two different pulsation phases, the stellar temperature only varies from $\sim 5050$ K ($\phi = 0.33$) to $\sim 5450$ K ($\phi = 0.12$); we therefore chose the stellar parameters $T_\mathrm{eff} = 5200$ K, $\log g = 1$, and $V_\mathrm{t} = 4\,\mathrm{km~s^{-1}}$ [@Kovtyukh_2005_01_0] for an average phase of 0.22. This has an effect of a few percent on the fitted parameters presented below (see Sect. \[subsubsection\_\_tmon\]). Given the limited amount of data and the lack of features that could be easily identified (apart from the alumina shoulder, see below), the investigation of the dust content and the geometrical properties of the dust grains is limited by a high level of degeneracy. We restricted ourselves to refractory dust compounds and those most frequently encountered around evolved stars. The wind launched by Cepheids is not expected to be enriched compared with the native composition of the star; therefore, the formation of carbon grains in the vicinity of these stars is highly improbable. The polycyclic aromatic hydrocarbons (PAHs) detected around some Cepheids by Spitzer/IRAC and MIPS have an interstellar origin and result from a density enhancement at the interface between the wind and the interstellar medium that leads to a bow shock [@Marengo_2010_12_0]. It is noteworthy that no signature of PAHs is observed in the MIDI spectrum or the MIDI visibilities (see Figs.
\[graph\_\_visibility\_xsgr\] and \[graph\_\_visibility\_tmon\]). The sublimation temperature of iron is higher than that of alumina and rapidly increases with density. Hence, iron is the dust species most likely to form in dense (shocked) regions with temperatures higher than 1500 K [@Pollack_1994_02_0]. Moreover, alumina has a high sublimation temperature, in the range of 1200-2000 K (depending on the local density), and its presence is generally inferred from a shoulder of emission between 10 and $15\,\mathrm{\mu m}$ [@Chesneau_2005_06_0; @Verhoelst_2009_04_0]. Such a shoulder is identified in the spectrum and visibility of X Sgr, suggesting that this compound is definitely present. Yet, it must be kept in mind that the low aluminum abundance at solar metallicity prevents the formation of a large amount of this type of dust. No marked shoulder is observed in the spectrum and visibilities of T Mon, which is indicative of a lower content. Silicates are easily identified owing to their signature at $10\,\mathrm{\mu m}$; this signature is not clearly detected in the MIDI data. Radiative transfer code: `DUSTY` -------------------------------- To model the thermal-IR SED and visibility, we performed radiative transfer calculations for a spherical dust shell. We used the public-domain simulation code `DUSTY` [@Ivezic_1997_06_0; @Ivezic_1999_11_0], which solves the radiative transfer problem in a circumstellar dusty environment by analytically integrating the radiative-transfer equation in planar or spherical geometries. The method is based on a self-consistent equation for the spectral energy density, including dust scattering, absorption, and emission.
To solve the radiative transfer problem, the following parameters for the central source and the dusty region are required: - the spectral shape of the central source’s radiation, - the dust grain properties: chemical composition, grain size distribution, and dust temperature at the inner radius, - the density distribution of the dust and the relative thickness, and - the radial optical depth at a reference wavelength. `DUSTY` then provides the SED, the surface brightness at specified wavelengths, the radial profiles of density, optical depth and dust temperature, and the visibility profile as a function of the spatial frequency for the specified wavelengths. Single dust shell model {#subsection__single_dust_shell_model} ----------------------- We performed a simultaneous fit of the MIDI spectrum and visibilities with various `DUSTY` models to check the consistency with our data. The central source was represented with Kurucz atmosphere models [@Castelli_2003__0] with the stellar parameters listed in Sect. \[subsection\_\_visibility\_and\_sed\]. In the absence of strong dust features, we focused on typical dust species encountered in circumstellar envelopes and according to the typical abundances of Cepheid atmospheres, that is, amorphous alumina [Al$_2$O$_3$ compact, @Begemann_1997_02_0], iron [Fe, @Henning_1995_07_0], warm silicate [W-S, @Ossenkopf_1994_11_0], olivine [MgFeSiO$_4$, @Dorschner_1995_08_0], and forsterite [Mg$_2$SiO$_4$, @Jager_2003_09_0]. We present in Fig. \[graph\_\_dust\_efficiency\] the optical efficiency of these species for the MIDI wavelength region. We see in this plot for instance that the amorphous alumina is optically more efficient around $11\,\mathrm{\mu m}$. We also notice that forsterite, olivine, and warm silicate have a similar optical efficiency, but as we cannot differentiate these dust species with our data, we decided to use warm silicates only. 
We used a grain size distribution following the standard Mathis-Rumpl-Nordsieck (MRN) relation [@Mathis_1977_10_0], that is, $n(a) \propto a^{-3.5}$ for $0.005 \leqslant a \leqslant 0.25\,\mathrm{\mu m}$. We chose a spherical density distribution in the shell following a radiatively driven wind, because Cepheids are giant stars and might lose mass via stellar winds [@Neilson_2008_09_0]. In this case, `DUSTY` computes the density structure by solving the hydrodynamics equations, coupled to the radiative transfer equations. The shell thickness is the only input parameter required. It is worth mentioning that we do not know the dust density profile in the Cepheid outflow, and we chose the hydrodynamic calculation in `DUSTY` as a reasonable assumption. For both stars, we also added $B, V, J, H$, and $K$ photometric light curves from the literature to our mid-IR data to better constrain the stellar parameters (@Moffett_1984_07_0 [@Berdnikov_2008_04_0; @Feast_2008_06_0] for X Sgr, @Moffett_1984_07_0 [@Coulson_1985__0; @Berdnikov_2008_04_0; @Laney_1992_04_0] for T Mon). To avoid phase mismatch, the light curves were fitted with a cubic spline function and interpolated at our pulsation phase. We then used these values in the fitting process. The conversion from magnitude to flux takes into account the photometric system and the filter bandpass. During the fitting procedure, all flux densities at $\lambda < 3\,\mathrm{\mu m}$ were corrected for interstellar extinction $A_\lambda = R_\lambda E(B - V)$ using the total-to-selective absorption ratios $R_\lambda$ from @Fouque_2003__0 and @Hindsley_1989_06_0. The mid-IR data were not corrected for interstellar extinction, which we assumed to be negligible. The free parameters are the stellar luminosity ($L_\star$), the dust temperature at the inner radius ($T_\mathrm{in}$), the optical depth at $0.55\,\mathrm{\mu m}$ ($\tau_\mathrm{0.55\mu m}$), and the color excess $E(B - V)$.
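The short-wavelength extinction correction $A_\lambda = R_\lambda E(B - V)$ amounts to multiplying each observed flux by $10^{0.4 A_\lambda}$. A minimal sketch follows, with placeholder $R_\lambda$ values rather than the actual @Fouque_2003__0 and @Hindsley_1989_06_0 coefficients:

```python
# Deredden photometric fluxes with A_lambda = R_lambda * E(B-V).
# The R_lambda values here are illustrative placeholders only.
R_lambda = {"B": 4.1, "V": 3.1, "J": 0.9, "H": 0.6, "K": 0.4}
EBV = 0.200                                  # fitted E(B-V), X Sgr value

def deredden(flux, band):
    A = R_lambda[band] * EBV                 # extinction in magnitudes
    return flux * 10 ** (0.4 * A)            # observed -> unextincted flux

print(round(deredden(1.0, "V"), 3))          # A_V = 0.62 mag -> 1.77
```

With these placeholder coefficients, a 0.62 mag extinction in $V$ corresponds to recovering about 77% more flux, illustrating why the correction matters for the optical bands but is negligible in the mid-IR.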
Then we extracted from the output files of the best-fit `DUSTY` model the shell internal diameter ($\theta_\mathrm{in}$), the stellar diameter ($\theta_\mathrm{LD}$), and the mass-loss rate $\dot{M}$. The stellar temperature of the Kurucz model ($T_\mathrm{eff}$), the shell’s relative thickness, and the dust abundances were fixed during the fit. We chose $R_\mathrm{out}/R_\mathrm{in} = 500$ for the relative thickness, as it is not constrained by our mid-IR data. The distance of the star was also fixed, to 333.3 pc for X Sgr [@Benedict_2007_04_0] and 1309.2 pc for T Mon [@Storm_2011_10_0]. Results ------- ### X Sgr The increase of the SED around $11\,\mathrm{\mu m}$ led us to investigate a CSE composed of Al$_2$O$_3$ material, which is optically efficient at this wavelength. After trying several dust species, we found good agreement with a CSE composed of 100% amorphous alumina (model \#1 in Table \[table\_\_fit\_result\]). The fitted parameters are listed in Table \[table\_\_fit\_result\] and plotted in Fig. \[graph\_\_visibility\_xsgr\]. However, a dust composition of 70% Al$_2$O$_3$ + 30% W-S (model \#4), or dust including some iron (model \#5), are also statistically consistent with our observations. Consequently, we chose as final parameters and uncertainties the average values and standard deviations (including their own statistical errors added quadratically) of models \#1, \#4, and \#5. The final adopted parameters are listed in Table \[table\_\_fit\_results\_final\]. It is worth mentioning that all parameters of these models have the same order of magnitude. The error on the stellar angular diameter was estimated from the luminosity and distance uncertainties. The CSE of X Sgr is optically thin ($\tau_\mathrm{0.55\mu m} = 0.0079 \pm 0.0021$) and has an internal shell diameter of $\theta_\mathrm{in} = 15.6 \pm 2.9$ mas.
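As a quick order-of-magnitude check, the fitted angular diameters and the adopted distance translate into a linear shell size (an angle in arcsec times the distance in pc gives AU) and a shell extent in stellar radii; the numbers below are those quoted above for X Sgr.

```python
theta_in = 15.6        # shell inner angular diameter (mas), X Sgr fit
theta_ld = 1.24        # stellar angular diameter (mas), X Sgr fit
dist_pc = 333.3        # adopted distance of X Sgr (pc)

# Linear diameter: theta[arcsec] * D[pc] gives the size in AU.
d_in_au = theta_in / 1000 * dist_pc
# Shell extent in stellar radii (equal to the ratio of the diameters).
n_rstar = theta_in / theta_ld

print(round(d_in_au, 1), round(n_rstar, 1))   # 5.2 AU, 12.6 stellar radii
```

This simple check places the inner rim of the X Sgr shell at roughly a dozen stellar radii, consistent in order of magnitude with the 15-20 stellar radii quoted in the abstract.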
The condensation temperature we found is in the range expected for this dust composition (1200-1900 K). The stellar angular diameter (and in turn the luminosity) is also consistent with the value estimated from the surface-brightness method at that pulsation phase [@Storm_2011_10_0 $1.34 \pm 0.03$ mas] and agrees with the average diameter measured by @Kervella_2004_03_0 [$1.47 \pm 0.03$ mas]. The relative CSE excess in the VISIR PAH1 filter of $13.3 \pm 0.5$% also agrees with the one estimated by @Gallenne_2011_11_0 [$11.7 \pm 4.7\,\%$]. Our derived color excess $E(B-V)$ is within $1\sigma$ of the average value $0.227 \pm 0.013$ estimated from photometric, spectroscopic, and space reddenings [@Fouque_2007_12_0; @Benedict_2007_04_0; @Kovtyukh_2008_09_0]. ### T Mon {#subsubsection__tmon} The CSE around this Cepheid has a stronger contribution than that of X Sgr. The large excess around $8\,\mathrm{\mu m}$ enables us to exclude a CSE composed of 100% Al$_2$O$_3$ because of its low efficiency in this wavelength range. We first considered dust composed of iron. However, other species probably contribute to the opacity enhancement: as shown in Fig. \[graph\_\_visibility\_tmon\], a 100% Fe dust composition is not consistent with our observations. We therefore used a mixture of W-S, Al$_2$O$_3$, and Fe to account for the optical efficiency at all wavelengths. The best model that agrees with the visibility profile and the SED is model \#5, including 90% Fe + 5% Al$_2$O$_3$ + 5% W-S. The fitted parameters are listed in Table \[table\_\_fit\_result\] and plotted in Fig. \[graph\_\_visibility\_tmon\]. However, because no specific dust features are present to constrain the models, other dust compositions are also consistent with the observations. We therefore chose as final parameters and uncertainties the average values and standard deviations (including their own statistical errors added quadratically) of models \#2, \#4, and \#5.
The final adopted parameters are listed in Table \[table\_\_fit\_results\_final\]. The choice of a stellar temperature at $\phi = 0.33$ or 0.12 in the fitting procedure (instead of an average pulsation phase, as explained in Sect. \[subsection\_\_visibility\_and\_sed\]) changes the derived parameters by at most 10% (the variation of the temperature is lower in the mid-IR). To be conservative, we quadratically added this relative error to all parameters of Table \[table\_\_fit\_results\_final\]. The CSE of T Mon appears to be thicker than that of X Sgr, with $\tau_\mathrm{0.55\mu m} = 0.151 \pm 0.042$ and an internal shell diameter of $\theta_\mathrm{in} = 15.9 \pm 1.7$ mas. The derived stellar diameter agrees well with the $1.01 \pm 0.03$ mas estimated by @Storm_2011_10_0 [at $\phi = 0.22$]. The deduced color excess $E(B-V)$ agrees within $1\sigma$ with the average value $0.181 \pm 0.010$ estimated from photometric, spectroscopic, and space reddenings [@Fouque_2007_12_0; @Benedict_2007_04_0; @Kovtyukh_2008_09_0]. We derived a particularly high IR excess in the VISIR PAH1 filter of $87.8 \pm 9.9$%, which might make this Cepheid a special case. It is worth mentioning that we were at the sensitivity limit of MIDI for this Cepheid, and the flux might be biased by a poor subtraction of the thermal sky background. However, the clear decreasing trend of the visibility profile as a function of wavelength cannot be attributed to background emission, and we argue that this is the signature of a CSE. In Sect. \[section\_\_discussion\] we make a comparative study to remove the thermal sky background and qualitatively estimate the unbiased IR excess.
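For illustration, the period-excess relation quoted in the abstract, $f_\mathrm{8.6\,\mu m}$\[%\]$ = 0.81(\pm0.04)P$\[day\], can be evaluated at the two pulsation periods studied here; this only evaluates the linear relation itself, not the fit behind it.

```python
def excess_pah1(period_days, slope=0.81):
    """CSE relative excess (%) in the VISIR PAH1 filter from the
    period-excess relation f[%] = 0.81 * P[day]."""
    return slope * period_days

# Pulsation periods of the two Cepheids studied in this paper.
for name, period in [("X Sgr", 7.01), ("T Mon", 27.02)]:
    print(name, round(excess_pah1(period), 1))
```

The relation predicts a markedly larger excess for the long-period Cepheid T Mon than for X Sgr, in line with the period-brightness trend discussed in the text.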
X Sgr:

| Model | $L_\star$ $(L_\odot)$ | $T_\mathrm{eff}$ (K) | $\theta_\mathrm{LD}$ (mas) | $E(B - V)$ | $T_\mathrm{in}$ (K) | $\theta_\mathrm{in}$ (mas) | $\tau_\mathrm{0.55\mu m}$ ($\times10^{-3}$) | $\dot{M}$ ($M_\odot\,yr^{-1}$) | $\alpha$ (%) | $\chi^2_\mathrm{r}$ | \# |
|-------|------|------|------|------|------|------|------|------|------|------|----|
| Al$_2$O$_3$ | $2151 \pm 34$ | 5900 | $1.24 \pm 0.08$ | $0.199 \pm 0.009$ | $1732 \pm 152$ | 13.2 | $6.5 \pm 0.8$ | $5.1\times10^{-8}$ | 13.0 | 0.78 | 1 |
| W-S | $2155 \pm 40$ | 5900 | $1.24 \pm 0.08$ | $0.200 \pm 0.011$ | $1831 \pm 141$ | 14.7 | $11.8 \pm 0.2$ | $6.4\times10^{-8}$ | 15.0 | 1.53 | 2 |
| Fe | $2306 \pm 154$ | 5900 | $1.28 \pm 0.09$ | $0.230 \pm 0.031$ | $1456 \pm 605$ | 29.3 | $8.9 \pm 4.4$ | $6.2\times10^{-8}$ | 9.5 | 4.30 | 3 |
| 70% Al$_2$O$_3$ + 30% W-S | $2153 \pm 31$ | 5900 | $1.24 \pm 0.08$ | $0.200 \pm 0.009$ | $1519 \pm 117$ | 19.6 | $6.6 \pm 0.7$ | $5.6\times10^{-8}$ | 13.7 | 0.58 | 4 |
| 60% Al$_2$O$_3$ + 20% W-S + 20% Fe | $2160 \pm 35$ | 5900 | $1.24 \pm 0.08$ | $0.201 \pm 0.009$ | $1802 \pm 130$ | 13.9 | $10.6 \pm 1.2$ | $6.1\times10^{-8}$ | 13.3 | 0.68 | 5 |

T Mon:

| Model | $L_\star$ $(L_\odot)$ | $T_\mathrm{eff}$ (K) | $\theta_\mathrm{LD}$ (mas) | $E(B - V)$ | $T_\mathrm{in}$ (K) | $\theta_\mathrm{in}$ (mas) | $\tau_\mathrm{0.55\mu m}$ ($\times10^{-3}$) | $\dot{M}$ ($M_\odot\,yr^{-1}$) | $\alpha$ (%) | $\chi^2_\mathrm{r}$ | \# |
|-------|------|------|------|------|------|------|------|------|------|------|----|
| Fe | $12~453 \pm 775$ | 5200 | $0.98 \pm 0.03$ | $0.183 \pm 0.049$ | $1190 \pm 59$ | 29.4 | $113 \pm 21$ | $5.0\times10^{-7}$ | 99.5 | 5.36 | 1 |
| 80% Fe + 10% W-S + 10% Al$_2$O$_3$ | $11~606 \pm 434$ | 5200 | $0.94 \pm 0.03$ | $0.144 \pm 0.028$ | $1418 \pm 42$ | 16.6 | $126 \pm 13$ | $4.4\times10^{-7}$ | 82.2 | 1.52 | 2 |
| 90% Fe + 10% W-S | $11~696 \pm 667$ | 5200 | $0.95 \pm 0.03$ | $0.149 \pm 0.044$ | $1389 \pm 54$ | 18.0 | $147 \pm 23$ | $5.0\times10^{-7}$ | 94.6 | 3.04 | 3 |
| 90% Fe + 10% Al$_2$O$_3$ | $11~455 \pm 580$ | 5200 | $0.94 \pm 0.03$ | $0.137 \pm 0.040$ | $1439 \pm 49$ | 15.9 | $158 \pm 21$ | $5.0\times10^{-7}$ | 87.5 | 2.21 | 4 |
| 90% Fe + 5% Al$_2$O$_3$ + 5% W-S | $11~278 \pm 597$ | 5200 | $0.93 \pm 0.04$ | $0.125 \pm 0.042$ | $1458 \pm 48$ | 15.3 | $170 \pm 23$ | $5.2\times10^{-7}$ | 93.6 | 2.22 | 5 |

\[table\_\_fit\_result\]

|  | X Sgr | T Mon |
|---|-------|-------|
| $L_\star$ $(L_\odot)$ | $2155 \pm 58$ | $11~446 \pm 1486$ |
| $T_\mathrm{eff}$ (K) | 5900 | 5200 |
| $\theta_\mathrm{LD}$ (mas) | $1.24 \pm 0.14$ | $0.94 \pm 0.11$ |
| $E(B - V)$ | $0.200 \pm 0.032$ | $0.135 \pm 0.066$ |
| $T_\mathrm{in}$ (K) | $1684 \pm 225$ | $1438 \pm 166$ |
| $\theta_\mathrm{in}$ (mas) | $15.6 \pm 2.9$ | $15.9 \pm 1.7$ |
| $\tau_\mathrm{0.55\mu m}$ ($\times10^{-3}$) | $7.9 \pm 2.1$ | $151 \pm 42$ |
| $\dot{M}$ ($\times10^{-8} M_\odot\,yr^{-1}$) | $5.6 \pm 0.6$ | $48.7 \pm 5.9$ |
| $\alpha$ (%) | $13.3 \pm 0.7$ | $87.8 \pm 9.9$ |

: Final adopted parameters.
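A minimal sketch of the adopted-parameter averaging over T Mon models \#2, \#4, and \#5 (values read from the fit table). Only the central values are reproduced here; the quoted uncertainties additionally combine the model-to-model scatter with the statistical errors, and the exact recipe is not reproduced:

```python
import statistics

# T Mon parameters for the three retained models (#2, #4, #5), read from the
# fit table: luminosity (L_sun), inner shell diameter (mas), optical depth (x1e-3).
l_star   = [11606, 11455, 11278]
theta_in = [16.6, 15.9, 15.3]
tau_055  = [126, 158, 170]

# Adopted central value = plain mean over the three models.
adopted = {name: statistics.mean(vals)
           for name, vals in [("L_star", l_star),
                              ("theta_in", theta_in),
                              ("tau_055", tau_055)]}
```

The means reproduce the adopted $L_\star$, $\theta_\mathrm{in}$, and $\tau_\mathrm{0.55\mu m}$ of the final table.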
\[table\_\_fit\_results\_final\]

Period-excess relation {#section__period_excess_relation}
======================

@Gallenne_2011_11_0 presented a probable correlation between the pulsation period and the relative CSE excess in the VISIR PAH1 filter. From our fitted `DUSTY` models, we estimated the CSE relative excess by integrating over the PAH1 filter profile. This provides another point of view on the trend of this correlation. X Sgr was part of the sample of @Gallenne_2011_11_0 and can be directly compared with our result, while T Mon is a new case. This correlation is plotted in Fig. \[graph\_\_excess\], with the measurements of this work as red triangles. The IR excess for X Sgr agrees very well with our previous measurement [@Gallenne_2011_11_0]. The excess for T Mon is extremely high and does not seem to follow the suspected linear correlation. Fig. \[graph\_\_excess\] shows that longer-period Cepheids have higher IR excesses. This excess is probably linked to past or ongoing mass-loss phenomena. Consequently, this correlation indicates that long-period Cepheids have a larger mass loss than shorter-period, less massive stars. This behavior might be explained by the stronger velocity fields in longer-period Cepheids and the presence of shock waves at certain pulsation phases [@Nardetto_2006_07_0; @Nardetto_2008_10_0]. This scenario is consistent with the theoretically predicted range, $10^{-10}$–$10^{-7} M_\odot\,yr^{-1}$, of @Neilson_2008_09_0, based on a pulsation-driven mass-loss model. @Neilson_2011_05_0 also found that a pulsation-driven mass-loss model combined with moderate convective-core overshooting provides an explanation for the Cepheid mass discrepancy, where stellar evolution masses differ by 10-20% from stellar pulsation calculations. We fitted the measured mid-IR excess with a linear function of the form $$f_\mathrm{8.6\,\mu m} = \alpha_\mathrm{8.6\,\mu m}P,$$ with $f$ in % and $P$ in days.
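A slope-only weighted least-squares fit of this form can be sketched as follows. The data points here are synthetic stand-ins around a 0.8 %/day slope, not the measured sample of the paper:

```python
def weighted_slope_through_origin(periods, excesses, errors):
    """Weighted least-squares slope for f = alpha * P (no intercept term):
    alpha = sum(w P f) / sum(w P^2) with weights w = 1/sigma^2."""
    weights = [1.0 / e ** 2 for e in errors]
    num = sum(w * p * f for w, p, f in zip(weights, periods, excesses))
    den = sum(w * p * p for w, p in zip(weights, periods))
    return num / den

# Synthetic, illustrative data: periods (days), excesses (%), errors (%).
periods  = [5.0, 7.0, 10.0, 15.0, 27.0]
excesses = [4.1, 5.5, 8.2, 11.9, 21.8]
errors   = [0.5, 0.6, 0.5, 1.2, 2.5]
alpha = weighted_slope_through_origin(periods, excesses, errors)
```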
We used a general weighted least-squares minimization, with the errors on each measurement as weights. We found a slope $\alpha_\mathrm{8.6\,\mu m} = 0.83 \pm 0.04\,\mathrm{\%.d^{-1}}$ including T Mon, and $\alpha_\mathrm{8.6\,\mu m} = 0.81 \pm 0.04\,\mathrm{\%.d^{-1}}$ without it. The linear relation is plotted in Fig. \[graph\_\_excess\].

Discussion {#section__discussion}
==========

Since the first detection around $\ell$ Car [@Kervella_2006_03_0], CSEs have been detected around many other Cepheids [@Gallenne_2011_11_0; @Merand_2007_08_0; @Merand_2006_07_0]. Our studies, using IR and mid-IR high angular resolution techniques, lead to the hypothesis that all Cepheids might be surrounded by a CSE. The mechanism of their formation is still unknown, but it is very likely a consequence of mass loss during the pre-Cepheid evolution stage or during the multiple crossings of the instability strip. The period-excess relation favors the latter scenario, because long-period Cepheids have higher masses and cross the instability strip up to three times. Other mid- and far-IR extended emissions have also been reported by @Barmby_2011_11_0 around a significant fraction of their sample (29 Cepheids), based on Spitzer telescope observations. The case of $\delta$ Cep was extensively discussed in @Marengo_2010_12_0. From IRAS observations, @Deasy_1988_04_0 also detected IR excesses and estimated mass-loss rates ranging from $10^{-10}$ to $10^{-6}M_\odot\,yr^{-1}$. The values given by our `DUSTY` models agree with this range. They are also consistent with the mass-loss rates predicted by @Neilson_2008_09_0, ranging from $10^{-10}$ to $10^{-7}M_\odot\,yr^{-1}$. These CSEs might have an impact on the Cepheid distance scale through the photometric contribution of the envelopes. While at visible and near-IR wavelengths the CSE flux contribution might be negligible ($< 5$%), this is not the case in the mid-IR domain [see @Kervella_2013_02_0 for a more detailed discussion].
This is particularly critical because near- and mid-IR P-L relations are preferred owing to the diminished impact of dust extinction. Recently, @Majaess_2013_08_0 re-examined the 3.6 and 4.5$\mathrm{\mu m}$ Spitzer observations and observed a nonlinear trend in the period-magnitude diagrams for LMC and SMC Cepheids. They found that longer-period Cepheids are slightly brighter than short-period ones. This trend is compatible with our period-excess relation observed for Galactic Cepheids. @Monson_2012_11_0 derived Galactic P–L relations at 3.6 and 4.5$\mathrm{\mu m}$ and found a strong color variation for Cepheids with $P > 10$days, but they attributed this to enhanced CO absorption at $4.5\,\mathrm{\mu m}$. From their light curves, we estimated the magnitudes expected at our observation phase for X Sgr and T Mon [using the ephemeris from @Samus_2009_01_0] to check the consistency with the values given by our `DUSTY` models (integrated over the filter bandpass). For X Sgr, our models give averaged magnitudes $m_\mathrm{3.6\,\mathrm{\mu m}} = 2.55 \pm 0.06$ and $m_\mathrm{4.5\,\mathrm{\mu m}} = 2.58 \pm 0.05$ (taking into account the 5% flux variations of Sect. \[subsection\_\_flux\_and\_visibility\_fluctuations\_between\_datasets\]), to be compared with $2.54 \pm 0.02$ and $2.52 \pm 0.02$ from @Monson_2012_11_0. For T Mon, we have $m_\mathrm{3.6\,\mathrm{\mu m}} = 2.94 \pm 0.14$ and $m_\mathrm{4.5\,\mathrm{\mu m}} = 2.94 \pm 0.14$ from the models (taking into account the 8% flux variations of Sect. \[subsection\_\_flux\_and\_visibility\_fluctuations\_between\_datasets\] and a 10% flux error for the phase mismatch), to be compared with $3.29 \pm 0.08$ and $3.28 \pm 0.05$ (with the rms between phases 0.12 and 0.33 as uncertainty). Our estimated magnitudes are consistent for X Sgr, while they differ by about $2\sigma$ for T Mon. As we describe below, we suspect a sky background contamination in the MIDI data.
The estimated excesses from the model at 3.6 and $4.5\,\mathrm{\mu m}$ are $6.0 \pm 0.5$% and $6.3 \pm 0.5$% for X Sgr, and $46 \pm 5$% and $58 \pm 6$% for T Mon (errors estimated from the standard deviation of each model). This substantial photometric contribution probably affects the Spitzer/IRAC P–L relations derived by @Monson_2012_11_0 and the calibration of the Hubble constant by @Freedman_2012_10_0. We also compared our models with the Spitzer 5.8 and 8.0$\mathrm{\mu m}$ magnitudes of @Marengo_2010_01_0, which are only available for T Mon. However, their measurements correspond to the pulsation phase 0.65, so we have to take a phase mismatch into account. According to the light curves of @Monson_2012_11_0, the maximum amplitude decreases from 0.42mag at 3.6$\mathrm{\mu m}$ to 0.40mag at 4.5$\mathrm{\mu m}$. As the light curve amplitude decreases with wavelength, we can safely assume a maximum amplitude at 5.8 and 8.0$\mathrm{\mu m}$ of 0.25mag. We take this value as a conservative uncertainty, which we add quadratically to the measurements of @Marengo_2010_01_0, leading to $m_\mathrm{5.8\,\mathrm{\mu m}} = 3.43 \pm 0.25$ and $m_\mathrm{8.0\,\mathrm{\mu m}} = 3.32 \pm 0.25$. Integrating our models over the Spitzer filter profiles, we obtained $m_\mathrm{5.8\,\mathrm{\mu m}} = 2.85 \pm 0.14$ and $m_\mathrm{8.0\,\mathrm{\mu m}} = 2.67 \pm 0.14$, which differ by about $2\sigma = 0.5\,$mag from the empirical values at $8\,\mathrm{\mu m}$. A possible explanation of this discrepancy would be a background contamination in our MIDI measurements. Indeed, owing to its faintness, T Mon is at the sensitivity limit of the instrument, and the sky background can contribute to the measured IR flux (it contributes only to the incoherent flux). Assuming that this $2\sigma$ discrepancy is due to the sky background emission, we can estimate the contribution of the CSE with the following approach.
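Assuming, as detailed next, that the $2\sigma \approx 0.5$mag discrepancy comes entirely from the sky background, the unbiased CSE flux ratio follows from two simple relations between the stellar, envelope, and sky fluxes. A minimal numerical sketch with the numbers from the text ($\alpha = 87.8$% from the `DUSTY` fit):

```python
def cse_flux_ratio(alpha, delta_mag):
    """Solve the pair of relations
        alpha = (f_env + f_sky) / f_star
        delta_mag = -2.5 log10((1 + x) / (1 + alpha))
    for the unbiased envelope-to-star flux ratio x = f_env / f_star."""
    return (1.0 + alpha) * 10.0 ** (-delta_mag / 2.5) - 1.0

# Values from the text: DUSTY flux ratio alpha = 87.8% and the ~0.5 mag
# (2 sigma) discrepancy between the MIDI-constrained model and Spitzer.
x = cse_flux_ratio(0.878, 0.5)   # f_env / f_star, approximately 0.19
```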
The flux measured by @Marengo_2010_01_0 corresponds to $f_\star + f_\mathrm{env}$, that is, the contribution of the star and the CSE, while MIDI measured an additional term corresponding to the background emission, $f_\star + f_\mathrm{env} + f_\mathrm{sky}$. From our derived `DUSTY` flux ratio (Table \[table\_\_fit\_results\_final\]) and the magnitude difference between MIDI and Spitzer, we have the following equations: $$\label{eq__1} \dfrac{f_\mathrm{env} + f_\mathrm{sky}}{f_\star} = \alpha,\ \mathrm{and}$$ $$\label{eq__2} 2\sigma = -2.5\,\log \left( \dfrac{f_\star + f_\mathrm{env}}{f_\star + f_\mathrm{env} + f_\mathrm{sky}} \right),$$ where $2\sigma$ is the magnitude difference between the Spitzer and MIDI observations. Combining Eqs. \[eq\_\_1\] and \[eq\_\_2\], we estimate the real flux ratio to be $f_\mathrm{env}/f_\star \sim 19\,$%. Interestingly, this is also more consistent with the expected period-excess relation plotted in Fig. \[graph\_\_excess\], although in a different filter. We also derived the IR excess in the $K$ band to check the possible impact on the usual P–L relation for these two stars. Our models give a relative excess of $\sim 24.3 \pm 2.7$% for T Mon and $4.3 \pm 0.3$% for X Sgr. However, caution is required for the excess of T Mon, since it might suffer from sky-background contamination. Therefore, we conclude that the bias on the $K$-band P–L relation might be negligible compared with the intrinsic dispersion of the P–L relation itself.

Conclusion {#section__conclusion}
==========

Based on mid-IR observations with the MIDI instrument of the VLTI, we have detected the circumstellar envelopes around the Cepheids X Sgr and T Mon. We used the numerical radiative transfer code `DUSTY` to simultaneously fit the SED and the visibility profile, and thereby determine physical parameters related to the stars and their dust shells.
We confirm the previous IR emission detected by @Gallenne_2011_11_0 for X Sgr, with an excess of 13.3%, and we estimate a $\sim 19\,$% excess for T Mon at $8\,\mathrm{\mu m}$. As the investigation of the dust content and of the geometrical properties of the dust grains is limited by a high level of degeneracy, we restricted ourselves to typical dust compositions for circumstellar environments. We found optically thin envelopes with an internal dust shell radius in the range 15-20mas. The relative CSE excess seems to be significant longward of $8\,\mathrm{\mu m}$ ($> 10$%), depending on the pulsation period, while for shorter wavelengths the photometric contribution might be negligible. Therefore, the impact on the $K$-band P–L relation is low ($\lesssim 5$%), but it is considerable for the mid-IR P–L relations [@Ngeow_2012_09_0; @Monson_2012_11_0], where the bias due to the presence of a CSE can reach more than 30%. Although still not statistically significant, we derived a linear period-excess relation, showing that longer-period Cepheids exhibit a higher IR excess than shorter-period Cepheids. It is now necessary to increase the statistical sample and investigate whether CSEs are a global phenomenon for Cepheids. Interferometric imaging with the second-generation instrument VLTI/MATISSE [@Lopez_2006_07_0] will also be useful for imaging and probing possible asymmetries of these CSEs. The authors thank the ESO-Paranal VLTI team for supporting the MIDI observations. We also thank the referee for the comments that helped to improve the quality of this paper. A.G. acknowledges support from FONDECYT grant 3130361. W.G. gratefully acknowledges financial support for this work from the BASAL Centro de Astrofísica y Tecnologías Afines (CATA) PFB-06/2007. This research received the support of PHASE, the high angular resolution partnership between ONERA, Observatoire de Paris, CNRS, and University Denis Diderot Paris 7.
This work made use of the SIMBAD and VIZIER astrophysical databases from CDS, Strasbourg, France, and of the bibliographic information from the NASA Astrophysics Data System. [^1]: Based on observations made with ESO telescopes at Paranal observatory under program ID 082.D-0066 [^2]: The MIA+EWS software package is available at http://www.strw.leidenuniv.nl/$\sim$nevec/MIDI/index.html.
--- abstract: 'We present here a microscopic analysis of cooperative light scattering on an atomic system consisting of $\Lambda$-type configured atoms with a spin-degenerate ground state. The results are compared with a similar system consisting of standard “two-level” atoms of the Dicke model. We discuss advantages of the considered system in the context of its possible implications for light storage in a macroscopic ensemble of dense and ultracold atoms.' address: | ${}^1$Department of Theoretical Physics, St-Petersburg State Polytechnic University, 195251, St.-Petersburg, Russia\ ${}^2$Department of Physics, St-Petersburg State University, 198504, St-Petersburg, Russia author: - 'A.S. Sheremet${}^1$, A.D. Manukhova${}^2$, N.V. Larionov${}^1$, D.V. Kupriyanov${}^1$' title: | Cooperative light scattering on an atomic system with\ degenerate structure of the ground state ---

Introduction
============

A significant range of studies of ultracold atomic systems has focused on their complex quantum behavior in various interaction processes. Among these, special attention has been paid to the quantum interface between light and matter, and to quantum memory in particular [@PSH; @Simon; @SpecialIssueJPhysB]. Most of the schemes for light storage in atomic ensembles are based on the idea of the $\Lambda$-type conversion of a signal pulse into the long-lived spin coherence of the atomic ground state. The electromagnetically induced transparency (EIT) protocol in a warm atomic ensemble was successfully demonstrated in Ref. [@NGPSLW], and also in Ref. [@CDLK], where a single photon entangled state was stored in two ensembles of cold atoms with an efficiency of 17%. Recent experiments on the conversion of a spin polariton mode into a cavity mode with efficiency close to 90% [@STTV] and on narrow-bandwidth biphoton preparation in a double $\Lambda$-system under EIT conditions [@DKBYH] show promising potential for developing a quantum interface between light and atomic systems.
However, further improvement of atomic memory efficiencies is a challenging and not straightforward experimental task. In the case of warm atomic vapors, any increase of the sample optical depth meets a serious barrier for the EIT effect because of the rather complicated and mainly negative role of atomic motion and Doppler broadening, which manifest in destructive interference among the different hyperfine transitions of alkali-metal atoms [@MSLOFSBKLG]. In the case of an ultracold and dilute atomic gas, which can be prepared in a magneto-optical trap (MOT), optical depths of around a hundred are feasible for some experimental designs [@FGCK], but there are certain challenges in accumulating so many atoms and making such a system controllable. One possible solution requires special arrangements for effective light storage in a MOT in the diffusion regime; see Ref. [@GSOH]. Recent progress in experimental studies of the light localization phenomenon in dense and strongly disordered atomic systems [@Kaiser; @BHSK] encourages us to think that storage protocols for light could be organized more effectively if atoms interacted with the field cooperatively in the dense configuration. If an atomic cloud contains more than one atom in the volume scaled by the radiation wavelength, the essential optical thickness can be attained with a smaller number of atoms than is typically needed in the dilute configuration. In the present paper we address the problem of light scattering by such an atomic system, which has intrinsically cooperative behavior. Although the problems of cooperative, or dependent, light scattering and of the super-radiance phenomenon have been well established in atomic physics and quantum optics for decades (see Refs. [@BEMST; @Akkermns]), a microscopic analysis for atoms with degenerate ground states is still largely missing in the literature [@Grubellier].
The microscopic calculations reported so far have mostly been done for “two-level” atoms and were basically motivated by the problem of the mesoscopic description of light transport through disordered media and by Anderson-type localization, where the transition from weak to strong disorder plays a crucial role; see Refs. [@RMO; @GeroAkkermns; @SKKH]. In this paper we develop a microscopic theory of cooperative light scattering from an atomic system consisting of $\Lambda$-type configured atoms with a spin-degenerate ground state. The results are compared with a similar system of “two-level” atoms of the Dicke model. We discuss advantages of the considered system in the context of its possible implications for the problem of light storage in a macroscopic ensemble of dense and ultracold atoms.

Theoretical framework
=====================

Transition amplitude and the scattering cross section
-----------------------------------------------------

The quantum description of the photon scattering problem is based on the formalism of the $T$ matrix, which is defined by $$\hat{T}(E)=\hat{V}+\hat{V}\frac{1}{E-\hat{H}}\hat{V},% \label{2.1}%$$ where $\hat{H}$ is the total Hamiltonian, consisting of the nonperturbed part $\hat{H}_0$ and an interaction term $\hat{V}$, such that $\hat{H}=\hat{H}_0+\hat{V}$. The energy argument $E$ is an arbitrary complex parameter in Eq. (\[2.1\]).
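As a quick numerical illustration of definition (\[2.1\]), with small random Hermitian matrices standing in for the operators (the matrix sizes, seed, and energy value are arbitrary choices, not from the paper), one can check that it coincides with the solution of the Lippmann-Schwinger equation $\hat{T} = \hat{V} + \hat{V}\hat{G}_0\hat{T}$, where $\hat{G}_0 = (E - \hat{H}_0)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
H0 = np.diag(rng.uniform(0.0, 1.0, n))            # nonperturbed part
V = rng.normal(size=(n, n)); V = (V + V.T) / 2    # Hermitian interaction
H = H0 + V
E = 2.0 + 0.1j                                    # arbitrary complex energy argument

# Direct definition, Eq. (2.1): T(E) = V + V (E - H)^{-1} V
T_direct = V + V @ np.linalg.solve(E * np.eye(n) - H, V)

# Equivalent Lippmann-Schwinger form: T = V + V G0 T  =>  T = (1 - V G0)^{-1} V
G0 = np.linalg.inv(E * np.eye(n) - H0)
T_ls = np.linalg.solve(np.eye(n) - V @ G0, V)
```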
Then the scattering process, evolving from the initial state $|i\rangle$ to the final state $|f\rangle$, is described by the following relation between the differential cross section and the transition amplitude, given by the relevant $T$-matrix element considered as a function of the initial energy $E_i$: $$d\sigma_{i\to f}=\frac{{\cal V}^2}{\hbar^2 c^4}\frac{\omega'^2}{(2\pi)^2}% \left|T_{g'\mathbf{e}'\mathbf{k}',g\,\mathbf{e\,k}}(E_i+i0)\right|^2d\Omega% \label{2.2}%$$ Here the initial state $|i\rangle$ is specified by the incoming photon’s wave vector $\mathbf{k}$, frequency $\omega\equiv\omega_k=c\,k$, and polarization vector $\mathbf{e}$, with the atomic system populating a particular ground state $|g\rangle$. The final state $|f\rangle$ is specified by a similar set of quantum numbers, marked by primes, and the solid angle $\Omega$ is directed along the wave vector of the outgoing photon $\mathbf{k}'$. The quantization volume ${\cal V}$ appears in this expression because of the second quantized structure of the interaction operators; see below. The scattering process conserves the energy of the input and output channels, such that $E_i=E_f$. Our description of the interaction of the electromagnetic field with an atomic system is performed in the dipole approximation. That is, the original Hamiltonian, introduced in the Coulomb gauge and valid for any neutral charge system, has been unitarily transformed to the dipole-type interaction under the assumption that the atomic size is much smaller than a typical wavelength of the field modes actually contributing to the interaction dynamics. Such a long-wavelength dipole approximation (see Ref. [@ChTnDRGr] for derivation details) leads to the following interaction Hamiltonian for an atomic ensemble consisting of $N$ dipole-type scatterers interacting with the quantized electromagnetic field: $$\begin{aligned} \hat{V}&=&-\sum_{a=1}^{N}% \hat{\mathbf{d}}^{(a)}\hat{\mathbf{E}}(\mathbf{r}_a)+\hat{H}_{\mathrm{self}},% \nonumber\\% \hat{H}_{\mathrm{self}}&=&\sum_{a=1}^{N}\frac{2\pi}{{\cal V}}\sum_{s}\left(\mathbf{e}_s\hat{\mathbf{d}}^{(a)}\right)^2% \label{2.3}%\end{aligned}$$ The first and most important term is normally interpreted as the interaction of the $a$th atomic dipole $\hat{\mathbf{d}}^{(a)}$ with the electric field $\hat{\mathbf{E}}(\mathbf{r})$ at the point of the dipole's location. However, strictly speaking, in the dipole gauge the latter quantity is the microscopic displacement field, which can be expressed by a standard expansion in the basis of plane waves $s\equiv{\mathbf{k},\alpha}$ (where $\alpha=1,2$ enumerates the two orthogonal transverse polarization vectors $\mathbf{e}_s\equiv\mathbf{e}_{\mathbf{k}\alpha}$ for each $\mathbf{k}$): $$\begin{aligned} \lefteqn{\hat{\mathbf{E}}(\mathbf{r})\equiv \hat{\mathbf{E}}^{(+)}(\mathbf{r})% +\hat{\mathbf{E}}^{(-)}(\mathbf{r})}% \nonumber\\% &&=\sum_{s}\left(\frac{2\pi\hbar\omega_s}{{\cal V}}\right)^{1/2}% \left[i\mathbf{e}_s a_s\mathrm{e}^{i\mathbf{k}_s\mathbf{r}}% -i\mathbf{e}_s a_s^{\dagger}\mathrm{e}^{-i\mathbf{k}_s\mathbf{r}}\right]% \nonumber\\% &&=\hat{\mathbf{E}}_{\bot}(\mathbf{r})+\sum_{b=1}^{N}\frac{4\pi}{{\cal V}}% \sum_{s}\mathbf{e}_s(\mathbf{e}_s\hat{\mathbf{d}}^{(b)})% \mathrm{e}^{i\mathbf{k}_s(\mathbf{r}-\mathbf{r}_b)}% \label{2.4}%\end{aligned}$$Here $a_s$ and $a_s^{\dagger}$ are the annihilation and creation operators for the $s$th field mode, and the quantization scheme includes the periodic boundary conditions in the quantization volume ${\cal V}$.
The bottom line in Eq. (\[2.4\]) indicates the important difference between the actual transverse electric field, denoted as $\hat{\mathbf{E}}_{\bot}(\mathbf{r})$, and the displacement field. The difference cannot be ignored at distances comparable with either the atomic size or the radiation wavelength, which is the regime considered in the present report. For such a dense configuration the definitions (\[2.3\]) and (\[2.4\]) should be clearly understood, so let us make a few remarks. The second term in Eq. (\[2.3\]) reveals a nonconverging self-energy (self-action) of the dipoles. This term is often omitted in practical calculations, since it does not principally affect the dipoles' dynamics, particularly when the difference between the transverse electric and displacement fields is small. It can also be formally incorporated into the internal Hamiltonian associated with the atomic dipoles. However, as was pointed out in Ref. [@SKKH] by tracing the Heisenberg dynamics of the atomic variables, the self-action term is mostly compensated by the self-contact dipole interaction. The latter manifests itself in the dipoles' dynamics when $\mathbf{r}=\mathbf{r}_a=\mathbf{r}_b$, i.e., for the interaction of a specific $a$th dipole in Hamiltonian (\[2.3\]) with the longitudinal field created by the same dipole in the second term of Eq. (\[2.4\]). Both these nonconverging self-action and self-contact interaction terms can be safely renormalized in the evaluation of the single-particle contribution to the self-energy part of the perturbation-theory expansion for the resolvent operator; see below.

Resolvent operator and $N$-particle Green’s function {#II.B}
----------------------------------------------------

The transition amplitude (\[2.1\]) can be simplified if we substitute in it the interaction operator (\[2.3\]), keeping only the terms with annihilation of the incoming photon in the input state and creation of the outgoing photon in the output state.
Such a simplification is in accordance with the standard rotating wave approximation, which is surely fulfilled for a near-resonance scattering process. As a consequence of this approximation, the transition amplitude is now determined by the complete resolvent operator projected onto the vacuum state for the field subsystem and onto the singly excited state for the atomic subsystem: $$\tilde{\hat{R}}(E)=\hat{P}\,\hat{R}(E)\,\hat{P}\equiv \hat{P}\frac{1}{E-\hat{H}}\hat{P}.% \label{2.5}%$$ Here we have defined the projector $$\begin{aligned} \lefteqn{\hspace{-0.8cm}\hat{P}=\sum_{a=1}^{N}\;\sum_{\{m_j\},j\neq a}\;\sum_{n}% |m_1,\ldots,m_{a-1},n,m_{a+1},\ldots m_N\rangle}% \nonumber\\% &&\hspace{-0.5cm}\langle m_1,\ldots,m_{a-1},n,m_{a+1},\ldots,m_N|\times|0\rangle\langle 0|_{\mathrm{Field}}% \label{2.6}%\end{aligned}$$ which selects, in the atomic Hilbert subspace, the entire set of states in which one specific $a$th atom (with $a$ running from $1$ to $N$) populates a Zeeman sublevel $|n\rangle$ of its excited state, while each of the remaining $N-1$ atoms ($j\neq a$) populates a Zeeman sublevel $|m_j\rangle$ of its ground state. The field subspace is projected onto its vacuum state, and the operator $\tilde{\hat{R}}(E)$ can be further considered as a matrix operator acting only in the atomic subspace.
The elements of the $T$ matrix can be directly expressed through the resolvent operator as follows: $$\begin{aligned} \lefteqn{T_{g'\mathbf{e}'\mathbf{k}',g\,\mathbf{e\,k}}(E)=\frac{2\pi\hbar\sqrt{\omega'\omega}}{{\cal V}}% \sum_{b,a=1}^{N}\;\sum_{n',n}}% \nonumber\\% &&\hspace{1 cm}(\mathbf{d}\mathbf{e}')_{n'm'_b}^{*}(\mathbf{d}\mathbf{e})_{nm_a}% \mathrm{e}^{-i\mathbf{k}'\mathbf{r}_b+i\mathbf{k}\mathbf{r}_a}% \nonumber\\% &&\langle\ldots m'_{b-1},n',m'_{b+1}\ldots |\tilde{\hat{R}}(E)% |\ldots m_{a-1},n,m_{a+1}\ldots \rangle% \nonumber\\% &&\label{2.7}%\end{aligned}$$ This is a generalization of the well-known Kramers-Heisenberg formula [@BerstLifshPitvsk] to the scattering of a photon by a many-particle system of atomic dipoles. The selected matrix element runs over all the possibilities in which the incoming photon is absorbed by any $a$th atom and the outgoing photon is emitted by any $b$th atom of the ensemble, including the coincidence $a=b$. The initial atomic state is given by $|g\rangle\equiv|m_1,\ldots,m_N\rangle$ and the final atomic state by $|g'\rangle\equiv|m'_1,\ldots,m'_N\rangle$. The projected resolvent operator contributing to Eq. (\[2.7\]) is defined in a Hilbert subspace of finite dimension $d_eN\,d_g^{N-1}$, where $d_e$ is the degeneracy of the atomic excited state and $d_g$ is the degeneracy of its ground state.
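As a minimal sketch, the dimension count $d_e N\,d_g^{N-1}$ of this singly excited subspace can be coded directly. The example numbers are illustrative and not tied to a particular atom:

```python
def singly_excited_dimension(N, d_g, d_e):
    """Dimension d_e * N * d_g**(N-1) of the projected subspace: one atom
    (N choices) in one of d_e excited sublevels, the rest in d_g ground sublevels."""
    return d_e * N * d_g ** (N - 1)

# Illustrative: five atoms with threefold-degenerate ground and excited states.
dim = singly_excited_dimension(5, 3, 3)

# Nondegenerate "two-level" atoms (d_g = d_e = 1) give just N states.
dim_two_level = singly_excited_dimension(5, 1, 1)
```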
The matrix elements of the operator $\tilde{\hat{R}}(E)$ can be linked with the $N$-particle causal Green’s function of the atomic subsystem via the following Laplace-type integral transformation: $$\begin{aligned} \lefteqn{\langle\ldots m'_{b-1},n',m'_{b+1}\ldots |\tilde{\hat{R}}(E)% |\ldots m_{a-1},n,m_{a+1}\ldots \rangle}% \nonumber\\% &&\times\delta\left(\mathbf{r}'_1-\mathbf{r}_1\right)\ldots\delta\left(\mathbf{r}'_b-\mathbf{r}_b\right)\ldots% \delta\left(\mathbf{r}'_a-\mathbf{r}_a\right)\ldots\delta\left(\mathbf{r}'_N-\mathbf{r}_N\right)% \nonumber\\% &&=-\frac{i}{\hbar}\int_0^{\infty}dt\,\,\exp\left[+\frac{i}{\hbar}E\,t\right]\,% \nonumber\\% &&G^{(N)}\left(1',t;\ldots ;b',t;\ldots ;N',t|1,0;\ldots ;a,0;\ldots;N,0\right)% \label{2.8}%\end{aligned}$$ where on the right-hand side we denoted $j=m_j,\mathbf{r}_j$ (for $j\neq a$) and $j'=m'_j,\mathbf{r}'_j$ (for $j'\neq b$), and for the specific atoms $a=n,\mathbf{r}_a$ and $b'=n',\mathbf{r}'_b$. Here $\mathbf{r}_j=\mathbf{r}'_j$, for any $j=1,\ldots,N$, is the spatial location of the $j$th atom, which is assumed to be conserved in the scattering process. This circumstance is expressed by the sequence of $\delta$ functions in Eq. (\[2.8\]).
The causal Green’s function is given by the vacuum expectation value of the following chronologically ($T$-)ordered product of atomic second quantized $\Psi$ operators in the Heisenberg representation: $$\begin{aligned} \lefteqn{\hspace{-0.5cm}G^{(N)}\left(1',t'_1;\ldots ;b',t'_b;\ldots ;N',t'_N|1,t_1;\ldots ;a,t_a;\ldots;N,t_N\right)}% \nonumber\\% &&=\langle T \Psi_{m'_1}(\mathbf{r}'_1,t'_1)\ldots\Psi_{n'}(\mathbf{r}'_b,t'_b)\ldots% \Psi_{m'_N}(\mathbf{r}'_N,t'_N)% \nonumber\\% &&\Psi_{m_N}^{\dagger}(\mathbf{r}_N,t_N)\ldots% \Psi_{n}^{\dagger}(\mathbf{r}_a,t_a)\ldots\Psi_{m_1}^{\dagger}(\mathbf{r}_1,t_1)\rangle,% \nonumber\\% \label{2.9}%\end{aligned}$$ where $\Psi_{\ldots}(\ldots)$ and $\Psi_{\ldots}^{\dagger}(\ldots)$ are, respectively, the annihilation and creation operators for an atom in a particular state and at a particular coordinate. All the creation operators in this product enter the transform (\[2.8\]) at the initial time “$0$”, and all the annihilation operators at a later time $t>0$. This allows us to ignore effects of either bosonic or fermionic quantum statistics associated with the atomic subsystem, as long as we neglect any possible overlap in atomic locations and consider the atomic dipoles as classical objects randomly distributed in space. We have ordered the operators in Eq. (\[2.9\]) in such a way that in the fermionic case (under the anticommutation rule) and without interaction it generates the product of independent single-particle Green’s functions associated with each atom, with a positive overall sign. The perturbation-theory expansion of the $N$-particle Green’s function (\[2.9\]) can be visualized by a series of diagrams in accordance with the standard rules of the vacuum diagram technique; see Ref. [@BerstLifshPitvsk].
After rearrangement, the diagram expansion can be transformed into the following generalized Dyson equation: $$\scalebox{1.0}{\includegraphics*{eq2.10.eps}}% \label{2.10}$$ where the long straight lines with arrows correspond to the individual causal single-particle Green’s functions of each atom in the ensemble, so that the first term on the right-hand side is the graph image of the nondisturbed $N$-particle propagator (\[2.9\]). The dashed block edged by short lines with arrows is the complete collective $N$-particle Green’s function dressed by the interaction. In each diagram block of equation (\[2.10\]) we indicated by $a,b,c$ (running from $1$ to $N$) the presence of one specific input as well as output line associated with the single excited state equally shared by all the atoms of the ensemble. The sum of tight diagrams, which cannot be reduced to a product of lower-order contributions linked by nondisturbed atomic propagators, builds the block of the so-called self-energy part $\Sigma$. The diagram equation (\[2.10\]) in its analytical form constitutes an integral equation for $G^{(N)}(\ldots)$. After transformation to the energy representation (\[2.8\]), the integral equation can be recomposed into a set of algebraic equations for the matrix of the projected resolvent operator $\tilde{\hat{R}}(E)$, which can then be solved numerically. The crucial requirement for this is knowledge of the self-energy part (a quasi-energy operator acting in the atomic subspace), which, as we show below, can be approximated by the lower orders of the perturbation-theory expansion.

The self-energy part
--------------------

In the lowest order of perturbation theory the self-energy part consists of two contributions, having single-particle and double-particle structures.
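The reduction of the Dyson equation to a finite set of algebraic equations for $\tilde{\hat{R}}(E)$ can be illustrated with a drastically simplified toy model: $N$ scalar “two-level” dipoles in the pole approximation, with an effective non-Hermitian Hamiltonian built from single-photon exchange. This is only a sketch of the linear-algebra step; the scalar exchange kernel and coupling convention are our assumptions, not the paper's vector, degenerate-level case:

```python
import numpy as np

def resolvent_matrix(positions, k0, gamma, omega0, omega):
    """Projected resolvent for N scalar two-level dipoles in the pole
    approximation: invert (omega - H_eff), with H_eff containing the
    single-atom pole omega0 - i*gamma/2 on the diagonal and a scalar
    photon-exchange kernel off the diagonal (toy convention)."""
    N = len(positions)
    H = np.diag(np.full(N, omega0 - 0.5j * gamma))
    for a in range(N):
        for b in range(N):
            if a != b:
                r = np.linalg.norm(positions[a] - positions[b])
                H[a, b] = -0.5j * gamma * np.exp(1j * k0 * r) / (k0 * r)
    return np.linalg.inv(omega * np.eye(N) - H)

# Single atom: the resolvent reduces to the Lorentzian pole
# 1 / (omega - omega0 + i*gamma/2).
R = resolvent_matrix(np.zeros((1, 3)), k0=1.0, gamma=1.0, omega0=0.0, omega=0.3)
```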
Each specific line in the graph equation (\[2.10\]) associated with excitation of an $a$th atom generates the following irreducible self-energy diagram: $$\begin{aligned} \lefteqn{\scalebox{1.0}{\includegraphics*{eq2.11.eps}}} \nonumber\\% &&\Rightarrow\sum_{m}\int\frac{d\omega}{2\pi} d^{\mu}_{n'm}d^{\nu}_{mn}% iD^{(E)}_{\mu\nu}(\mathbf{0},\omega)% \nonumber\\% &&\times\frac{1}{E-\hbar\omega-E_m+i0}% \equiv\Sigma^{(a)}_{n'n}(E),% \label{2.11}\end{aligned}$$ which is decoded analytically by applying transformation (\[2.8\]) in the energy representation. Here the internal wavy line expresses the causal-type vacuum Green’s function of the chronologically ordered polarization components of the field operators $$iD^{(E)}_{\mu\nu}(\mathbf{R},\tau)=\left\langle T\hat{E}_{\mu}(\mathbf{r}',t')% \hat{E}_{\nu}(\mathbf{r},t)\right\rangle,% \label{2.12}%$$ which depends only on the difference of its arguments $\mathbf{R}=\mathbf{r}'-\mathbf{r}$ and $\tau=t'-t$ and has the following Fourier image: $$\begin{aligned} \lefteqn{D^{(E)}_{\mu\nu}(\mathbf{R},\omega)=\int_{-\infty}^{\infty} d\tau\,\mathrm{e}^{i\omega\tau}% D^{(E)}_{\mu\nu}(\mathbf{R},\tau)}% \nonumber\\% &=&-\hbar\frac{|\omega|^3}{c^3}\left\{i\frac{2}{3}h^{(1)}_0\left(\frac{|\omega|}{c}R\right)\delta_{\mu\nu}\right.% \nonumber\\% &&\left.+\left[\frac{X_{\mu}X_{\nu}}{R^2}-\frac{1}{3}\delta_{\mu\nu}\right]% ih^{(1)}_2\left(\frac{|\omega|}{c}R\right)\right\};% \label{2.13}%\end{aligned}$$ see Ref. [@BerstLifshPitvsk]. Here $h^{(1)}_L(\ldots)$ with $L=0,2$ are the spherical Hankel functions of the first kind. As follows from Eq. (\[2.11\]), the Green’s function (\[2.13\]) enters that expression in a self-interacting form, with spatial argument $\mathbf{R}\to\mathbf{0}$. As a consequence, the expression (\[2.11\]) diverges in the limit $R\to 0$, and the integration over $\omega$ does not converge. Part of the divergent terms should be associated with the longitudinal self-contact interaction. 
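For finite separations the Fourier image (\[2.13\]) is straightforward to evaluate numerically from the spherical Hankel functions. A minimal sketch (our own function names and normalization, working in units where $\hbar=c=1$ by default):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def h1(L, x):
    """Spherical Hankel function of the first kind, h_L^(1)(x) = j_L(x) + i y_L(x)."""
    return spherical_jn(L, x) + 1j * spherical_yn(L, x)

def D_E(R_vec, omega, hbar=1.0, c=1.0):
    """Fourier image D^(E)_{mu nu}(R, omega) of Eq. (2.13) as a 3x3 complex matrix.

    R_vec : 3-component separation vector (must be nonzero; the R -> 0
            limit is exactly the divergent self-interaction discussed above).
    """
    R = np.linalg.norm(R_vec)
    x = abs(omega) * R / c
    delta = np.eye(3)
    XX = np.outer(R_vec, R_vec) / R**2
    return -hbar * abs(omega)**3 / c**3 * (
        1j * (2.0 / 3.0) * h1(0, x) * delta
        + (XX - delta / 3.0) * 1j * h1(2, x))
```

As consistency checks, $h^{(1)}_0(x)=-i\,e^{ix}/x$, and the resulting tensor is symmetric in $\mu\nu$, as required by (\[2.13\]).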
These terms are compensated by the dipolar self-action; see Eq. (\[2.3\]) and the related remark given above. The residual divergence has a radiative nature and reflects the general incorrectness of the Lamb-shift calculation within the assumptions of the long-wavelength dipole approximation. Finally we follow the standard renormalization rule, $$\begin{aligned} \Sigma^{(a)}_{n'n}(E)&=&\Sigma^{(a)}(E)\delta_{n'n},% \nonumber\\% \Sigma^{(a)}(E)&\approx&\Sigma^{(a)}(\hbar\omega_0)=\hbar\Delta_{\mathrm{L}}-i\hbar\frac{\gamma}{2},% \label{2.14}%\end{aligned}$$ where $\Delta_{\mathrm{L}}\to\infty$ is incorporated into the physical energy of the atomic state. To introduce the single-atom natural decay rate $\gamma$ we applied the Wigner-Weisskopf pole approximation and substituted the energy $E=\hbar\omega_k+E_g$ by its near-resonance mean estimate $E\approx E_n$, with the assumption that the atomic ground state is the zero-energy level, such that $E_g=\sum_{j=1}^{N}E_{m_j}=E_m=0$. The energy of the excited state is then given by $E_n=\hbar\omega_0$, where $\omega_0$ is the transition frequency. 
In the lower order of perturbation theory, the double-particle contribution to the self-energy part consists of two complementary diagrams: $$\begin{aligned} \lefteqn{\scalebox{1.0}{\includegraphics*{eq2.15.eps}}} \nonumber\\% &&\hspace{-0.5cm}\Rightarrow\int\frac{d\omega}{2\pi} d^{\mu}_{n'm}d^{\nu}_{m'n}% iD^{(E)}_{\mu\nu}(\mathbf{R}_{ab},\omega)% \nonumber\\% &&\hspace{-0.5cm}\times\frac{1}{E-\hbar\omega-E_m-E_{m'}+i0}% \equiv\Sigma^{(ab+)}_{m'n';nm}(E) \label{2.15}\end{aligned}$$ and $$\begin{aligned} \lefteqn{\scalebox{1.0}{\includegraphics*{eq2.16.eps}}} \nonumber\\% &&\hspace{-0.5cm}\Rightarrow\int\frac{d\omega}{2\pi} d^{\mu}_{n'm}d^{\nu}_{m'n}% iD^{(E)}_{\mu\nu}(\mathbf{R}_{ab},\omega)% \nonumber\\% &&\frac{1}{E+\hbar\omega-E_n-E_{n'}+i0}% \equiv\Sigma^{(ab-)}_{m'n';nm}(E), \label{2.16}\end{aligned}$$ which are responsible for the excitation transfer from atom $a$ to atom $b$ separated by a distance $R_{ab}$. The vector components of the dipole matrix elements $d^{\nu}_{m'n}$ and $d^{\mu}_{n'm}$ are associated with atoms $a$ and $b$, respectively. In the pole approximation $E\approx E_n=\hbar\omega_0$ the $\delta$-function features dominate in the spectral integrals (\[2.15\]) and (\[2.16\]), and the sum of the two terms gives $$\begin{aligned} \Sigma^{(ab)}_{m'n';nm}(E)&\approx&\Sigma^{(ab+)}_{m'n';nm}(\hbar\omega_0)+% \Sigma^{(ab-)}_{m'n';nm}(\hbar\omega_0)% \nonumber\\% &=&\frac{1}{\hbar}\,d^{\mu}_{n'm}d^{\nu}_{m'n}\,D^{(E)}_{\mu\nu}(\mathbf{R}_{ab},\omega_0).% \label{2.17}%\end{aligned}$$ The derived expression has a clear physical meaning. For closely located atoms the real component of the double-particle contribution to the self-energy part reproduces the static interaction between two atomic dipoles. Its imaginary component is responsible for the formation of the cooperative dynamics of the excitation decay in the entire radiation process. 
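The roles of the real and imaginary parts of (\[2.17\]) can be illustrated with a scalar toy model, which is not the full Zeeman-resolved calculation of this paper: for two identical two-level atoms sharing one excitation, the effective non-Hermitian Hamiltonian carries the renormalized single-atom pole (\[2.14\]) (with the Lamb shift absorbed) on its diagonal and $\Sigma^{(ab)}/\hbar$ off the diagonal; its symmetric and antisymmetric eigenmodes are shifted by $\pm\mathrm{Re}\,\Sigma^{(ab)}/\hbar$ and have super- and subradiant widths. The coupling value below is purely illustrative:

```python
import numpy as np

gamma = 1.0                    # single-atom decay rate (our unit of frequency)
sigma_ab = 0.8 - 0.45j         # illustrative Sigma^(ab)/hbar: Re -> dipole-dipole
                               # shift, Im -> cooperative decay correction

# Effective Hamiltonian in the single-excitation basis {|e,g>, |g,e>}
# (detuning set to zero, Lamb shift absorbed into the resonance)
H_eff = np.array([[-0.5j * gamma, sigma_ab],
                  [sigma_ab, -0.5j * gamma]])
evals = np.linalg.eigvals(H_eff)
shifts = evals.real            # positions of the two resonance lines
widths = -2.0 * evals.imag     # super- and subradiant line widths
```

With these numbers the two modes acquire widths $1.9\gamma$ (superradiant) and $0.1\gamma$ (subradiant), i.e., $\gamma\pm\gamma_{12}$, mirroring the Dicke-type splitting discussed below for the V-type atoms.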
For long distances, when the atomic dipoles are separated by the radiation zone, this term describes radiation interference between any pair of distant atoms, which decreases only slowly with the interatomic separation. For short distances, or in a dense sample, the cooperative effects become extremely important and the scattering process becomes strongly dependent on the particular atomic configuration. It is a challenging problem to further improve the self-energy part by taking into consideration the higher orders of the perturbation-theory expansion. Here we only substantiate the validity and sufficiency of the lower-order approximation for the considered configuration. The main physical reason for this is the weakness of the interaction. This justifies ignoring any deviation from the free dynamics of the atomic variables on the short time scale associated with the light retardation over distances of a few wavelengths. Thus the main cooperation in the radiative dynamics occurs among neighboring dipoles, which can effectively interact via the static longitudinal electric field. The diagram (\[2.16\]), in contrast to (\[2.15\]), is most important for evaluation of the static interaction, since in this graph the field propagator preferably links points with coincident times on the atomic lines. As a consequence, the presence of such diagram fragments as parts of any irreducible diagram in higher orders would make the overall contribution negligible, because the static dipole-dipole interaction only weakly affects the dipoles’ dynamics during the short retardation time, which can be roughly estimated by the wave period $2\pi/\omega_0$. For the same reason we can ignore any vertex-type corrections to the diagram (\[2.14\]). Another part of the self-energy diagrams in higher orders can be associated with correction of the static interaction itself. 
If the atomic system were so dense that the atoms were separated by distances comparable with the atomic size (much shorter than the radiation wavelength), then the description of the static interaction in the simplest dipole model would be inconsistent and insufficient. This correction is evidently ignorable for an atomic ensemble with a density of a few atoms in a volume scaled by the cubic radiation wavelength. In this case the higher-order static corrections are negligible, as long as the dipole-dipole interaction is much smaller than the internal transition energy. As we can finally see, for the considered atomic systems the self-energy part is correctly reproduced by the introduced lower-order contributions. Results and discussion ====================== Cooperative scattering from the system of two atoms --------------------------------------------------- Let us apply the developed theory to the calculation of the total cross section for the process of light scattering from a system consisting of two atoms. We consider two complementary examples where the scattering atoms have different but similar Zeeman state structure. In the first example we consider V-type atoms, which have $F_0=0$ total angular momentum in the ground state and $F=1$ total angular momentum in the excited state. Such atoms are the standard objects for discussion of the Dicke problem (see Ref. [@BEMST]), and each atom represents a “two-level” energy system sensitive to the vector properties of light. In an alternative example we consider the $\Lambda$-type atoms, which can also be understood as an overturned “two-level” system, with $F_0=1$ total angular momentum in the ground state and $F=0$ total angular momentum in the excited state. For the latter example, in the scattering scenario we assume the initial population by atoms of a particular Zeeman sublevel of the ground state, which has the highest projection of the angular momentum. 
Both the excitation schemes and transition diagrams in the laboratory reference frame are displayed in Fig. \[fig1\]. ![(Color online) The excitation diagram of a “two-level” V-type atom (left) and an overturned “two-level” $\Lambda$-type atom (right). In both configurations the light scattering is considered for the left-handed $\sigma_{-}$ polarization mode. The $\Lambda$-atom populates the Zeeman sublevel with the highest angular momentum projection.[]{data-label="fig1"}](figure1.eps) In Figs. \[fig2\]-\[fig4\] we reproduce the spectral dependencies of the total cross section for a photon scattering from the system consisting of two atoms separated by different distances $R$ and for different spatial orientations. The variation of the interatomic separation from $R=10\lambdabar$ (independent scatterers) to $R=0.5\lambdabar$ (strongly dependent scatterers) transforms the scattering process from independent to cooperative dynamics. In the plotted graphs the frequency spectra, reproduced as a function of the frequency detuning $\Delta=\omega-\omega_0$ of the probe frequency $\omega$ from the undisturbed atomic resonance $\omega_0$, are scaled by the natural radiation decay rate of a single atom $\gamma$, which is significantly different for the $\Lambda$- and V-type energy configurations, such that $\gamma(\Lambda)=3\gamma(V)$. As a consequence, the near-field effects responsible for the resonance structure of the resolvent operator and the cross section manifest themselves more perceptibly for the V-type atoms, which are traditionally considered in many discussions of the Dicke system in the literature. In the symmetric collinear excitation geometry, when the internal reference frame coincides with the laboratory frame (see Fig. \[fig2\]), the left-handed $\sigma_{-}$ excitation channel shown in Fig. \[fig1\] is the only allowed channel for both the V- and $\Lambda$-type transition schemes. 
In such a symmetric configuration the interatomic interaction via the longitudinal as well as the radiative transverse fields splits the excitation transition into two resonance lines. For the case of the V-type excitation the observed resonances demonstrate either a superradiant or a subradiant nature. This is an evident indicator of the well-known Dicke effect of either cooperative or anticooperative contribution of the atomic dipoles to the entire radiation and scattering processes; see Refs. [@ChTnDRGr; @BEMST]. For the system of two $\Lambda$-type atoms separated by the same distances the atomic line also splits into two resonances, but they are less resolved and have relatively comparable line widths. The spectral widths indicate a slight cooperative modification, which is a much weaker effect than in the case of V-type atoms. The physical reason is the contribution of the Raman scattering channels, which are insensitive to the effects of dependent scattering. ![(Color online) Spectral dependencies of the total cross section for a photon scattering from the system of two “two-level” V-type atoms (upper panel) and $\Lambda$-type atoms (lower panel) in the collinear excitation geometry; see inset. In the case of V-type atoms, in accordance with predictions of the Dicke model [@ChTnDRGr; @BEMST], the observed resonances demonstrate either super- or subradiant behavior when the interatomic separation $R$ becomes shorter. In the case of $\Lambda$-type atoms the resonances are less resolved and both have a line width comparable with the atomic natural decay rate.[]{data-label="fig2"}](figure2.eps) If both atoms are located in the wavefront plane of the driving field, as shown in Fig. \[fig3\], the spectral dependence of the cross section is also described by two resonance features. Referring to the excitation scheme defined in the laboratory frame of Fig. 
\[fig1\], in the specific planar geometry the double-particle self-energy part (\[2.17\]) can couple only the states $|1,\pm 1\rangle$ related to either the upper (V-type) or the lower ($\Lambda$-type) atomic levels. As a consequence the resolvent operator $\tilde{\hat{R}}(E)$ has a block structure, and only its 4 $\times$ 4 block, built in the subspace $|0,0\rangle_1|1,\pm 1\rangle_2,\, |1,\pm 1\rangle_1|0,0\rangle_2$, can actually contribute to the scattering process. We label the states by the atomic number $a,b=1,2$. The eigenstates of this matrix have different parities, $g$ (even) and $u$ (odd), reflecting their symmetry or antisymmetry under transposition of the atomic states; see Ref. [@LaLfIII].[^1] The observed resonances can be associated with two even-parity states symmetrically sharing the single excitation in the system of two atoms. This selection rule is a consequence of the evident symmetry of the configuration, shown in the inset of Fig. \[fig3\], under rotation by any angle around the $z$ axis, such that the allowed transition amplitude should be insensitive to the atoms’ positions. In contrast to the collinear geometry, in the planar geometry both resonances have identical shapes and line widths. It is also interesting that for this specific excitation geometry both atomic systems, of either V- or $\Lambda$-type, demonstrate similar spectral behavior. ![(Color online) Same as in Fig. \[fig2\] but for the planar excitation geometry. In both excitation schemes, for either V- or $\Lambda$-type atoms, there is a symmetric resonance structure; see the text.[]{data-label="fig3"}](figure3.eps) In general, for a random orientation of the diatomic system shown in Fig. \[fig4\], there are four resonances. These resonances can be naturally specified in the internal reference frame, where the quantization axis is directed along the internuclear axis, following the standard definitions of diatomic molecular terms; see Ref. [@LaLfIII]. 
There are two $\Sigma_g$ and $\Sigma_u$ terms of different parity and two doubly degenerate $\Pi_g$ and $\Pi_u$ terms, which also have different parities. Here the defined terms are associated with the symmetry of the self-energy part and are specified by the transition type in the internal frame, such that the transition dipole moment can have either $0$ projection ($\Sigma$ term) or $\pm 1$ projection ($\Pi$ term). For a random orientation all these resonances can be excited, and in the case of the V-type atoms the odd-parity resonances have a subradiant nature while the even-parity ones are superradiant. In contrast, in the case of the $\Lambda$-type atoms the observed resonances are less resolved and have comparable widths; two of them have rather small amplitudes (see the lower panel of Fig. \[fig4\]). The previous configurations with the collinear and planar excitation geometries respectively correspond to the excitation of the $\Pi_g$ and $\Pi_u$, and $\Sigma_g$ and $\Pi_g$, resonance pairs. Summarizing the results, we can point out that all the plotted dependencies demonstrate a significant difference in the cooperative scattering dynamics resulting from the similar quantum systems shown in Fig. \[fig1\]. ![(Color online) Same as in Fig. \[fig2\] but for a random excitation geometry. For V-type atoms there are two superradiant and two subradiant resonances. For $\Lambda$-type atoms the four resonances are less resolved and have line widths comparable with the single-atom natural decay rate.[]{data-label="fig4"}](figure4.eps) Cooperative scattering from a collection of $\Lambda$-type atoms randomly distributed in space ---------------------------------------------------------------------------------------------- Evaluation of the resolvent operator for a many-particle system is a challenging task, and its solution depends on the type of transition driven in the atomic system. 
For V-type atoms the problem can be solved even for a macroscopic atomic ensemble, since the number of equations rises linearly with the number of atoms; see the relevant estimate given in Sec. \[II.B\]. In Ref. [@SKKH] the transformation of light scattering from a macroscopic atomic ensemble consisting of V-type atoms was analyzed as a function of the sample density. In particular, the authors demonstrated how the smooth spectral dependence of the cross section, observed in the limit of a dilute and weakly disordered distribution of atomic scatterers, transforms into a random speckle resonance structure in the case of a strongly disordered and dense distribution containing more than one atom per cubic wavelength. The presence of narrow subradiant Dicke-type resonance modes revealed a microcavity structure built up in an environment of randomly distributed atomic scatterers, which can be regarded as a certain analog of Anderson-type localization of light. Our analysis in the previous section indicates that in the example of the $\Lambda$-type atoms the subradiant modes do not manifest themselves, and such a system would not be suitable for observation of the localization effects. For coherent mechanisms of quantum memory, which we keep in mind as the most interesting implication, the existence of the localization regime would be useful but not a crucially important feature of the light propagation process. However, the spectral profile of the scattering cross section, its dependence on the atomic density, and its sensitivity to the level of disorder are very important, for example, for further consideration of an EIT-based memory scheme. Below we consider an example of an atomic system consisting of five $\Lambda$-type atoms, which is described by the $405\times 405$ square matrix of the resolvent operator $\tilde{\hat{R}}(E)$. With evident provisos, the system can, at least qualitatively, be considered a many-particle one and can show a tendency toward macroscopic behavior. 
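The quoted matrix dimension follows from counting the single-excitation basis states: with $d_e=1$ excited Zeeman sublevel ($F=0$) and $d_g=3$ ground sublevels ($F_0=1$), one of the $N=5$ atoms carries the excitation while the remaining $N-1$ atoms occupy arbitrary ground sublevels, giving $N\,d_e\,d_g^{N-1}$ states, consistent with the scaling estimate quoted later in the text. A one-line check (the function name is ours):

```python
# Single-excitation basis size for N Lambda-type atoms with d_g ground
# and d_e excited Zeeman sublevels: one atom excited, the other N-1 atoms
# anywhere in the ground-state manifold.
def basis_size(N, d_g, d_e):
    return N * d_e * d_g ** (N - 1)

dim = basis_size(5, 3, 1)   # five atoms, F0 = 1 ground state, F = 0 excited state
```

This exponential growth in $N$ (through $d_g^{N-1}$) is exactly what makes the $\Lambda$-type many-atom problem so much harder than the V-type one, for which the count is linear in $N$.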
We show how the scattering process is modified when the configuration is made denser and how this corresponds with the description of the problem in terms of macroscopic Maxwell theory. In the macroscopic description the atomic system can be approximated by a homogeneous dielectric sphere of small radius, which scatters light in accordance with the Rayleigh mechanism; see Ref. [@BornWolf]. We fix the parameters of the dielectric sphere by the same density of atoms as in the compared microscopic random distribution. The calculation of the dielectric susceptibility was performed similarly to that done earlier in Ref. [@SKKH], and we will publish the calculation details elsewhere. The key point of our numerical analysis is to verify the presence of the Zeeman structure, which manifests itself via the Raman scattering channels, in the observed total scattering cross section. In Fig. \[fig5\] we show how the scattering cross section is modified with varying atomic density $n_0$, scaled by the reduced light wavelength $\lambdabar$, from $n_0\lambdabar^3=0.1$ (dilute configuration) to $n_0\lambdabar^3=1$ (dense configuration). There are two reference dependencies shown in these plots, indicated by dashed and solid black curves. The dashed curve is the spectral profile of the single-atom cross section $\sigma_0=\sigma_0(\Delta)$ multiplied by the number of atomic scatterers $N=5$. The solid black curve is evaluated via the self-consistent macroscopic Maxwell description and reproduces the scattering cross section for a Rayleigh particle represented by a small dielectric sphere. The other dependencies subsequently show the results of microscopic calculations of the scattering cross section: (green \[dashed light gray\]) for a particular random configuration (visualized in the insets) and (red \[dash-dotted dark gray\]) the microscopic spectral profiles averaged over many random configurations. The upper panel of Fig. 
\[fig5\] relates to the low-density (i.e., dilute configuration or weak disorder) regime, which is insensitive to any specific location of the atomic scatterers in space. Indeed, the exact result evaluated with the microscopic model is perfectly reproduced by the simplest approximation of the cross section as the sum of the partial contributions of all five atoms considered as independent scatterers. This confirms the traditional vision of light propagation through a multiparticle atomic ensemble as through a system of independent scatterers, which underlies many practical scenarios of interaction of atomic systems with external fields. The Raman channel manifests itself in the scattering process as a direct consequence of the Zeeman degeneracy of the atomic ground state. In contrast, the central and bottom panels of Fig. \[fig5\] show how the scattering process is modified in the situation of high density and strong disorder, when the near-field effects become manifest. The system evidently demonstrates cooperative behavior, and the scattering mechanism becomes extremely sensitive to any specific distribution of the scatterers in space. The spectral profile is described by several resonances whose locations, amplitudes, and widths are unique for each specific configuration. However, there is a certain tendency for the microscopically calculated scattering profile to approach the rough macroscopic prediction. The latter keeps only the Rayleigh channel, the only one observable in the self-consistent macroscopic model of the scattering process. It is interesting that for any configuration created randomly in the spatial distribution of the atomic scatterers, one of the observed resonances is preferentially located in the vicinity of zero detuning, $\Delta\sim 0$. As a consequence, after the configuration averaging, the system demonstrates scattering characteristics qualitatively similar to those reproduced by the macroscopic model. 
Application to atomic memory problem ------------------------------------ The considered system of $\Lambda$-type atoms has a certain potential for light-assisted coherent redistribution of atoms in the ground state, initiated by the simultaneous action of strong control and weak signal modes, that is, for the realization of an atomic memory protocol. Let us discuss the applicability and advantages of such a dense configuration of atoms for the realization of light storage in atomic memories. At present most of the experiments and the supporting theoretical discussions operate with dilute configurations of atoms either confined in a MOT at low temperature or existing in a warm vapor phase; see Refs. [@PSH; @Simon; @SpecialIssueJPhysB]. For such systems the standard conditions for realization of either EIT- or Raman-based storage schemes require an optical depth of the order of hundreds, such that the macroscopic ensemble would typically consist of billions of atoms. The optimization of the memory protocol with respect to the parameters of optical depth, pulse shape, etc., has been the subject of many discussions in the literature; see Ref. [@NGPSLW] and references therein. There would be an evident advantage in developing a memory unit with fewer atoms but with the same optical depth of the sample. This immediately readdresses the basic problem of cooperative light scattering by a dense system of $\Lambda$-type configured atoms. The presented microscopic analysis of the scattering process in such systems shows that in the strong-disorder regime the spectral profile of the cross section is generally described by a rather complicated and randomized resonance structure, contributed by both the longitudinal and transverse self-energy interaction parts of the resolvent operator. This spectrum is unique for each particular configuration of the atomic scatterers and has only a slight signature of the original undisturbed atomic spectrum. 
This circumstance is a direct consequence of the complicated cooperative dynamics, which reflects the microcavity nature of light interaction with a strongly disordered atomic ensemble. To determine possible implications of our results for the problem of atomic memories, we should extend the presented calculations toward ensembles consisting of a macroscopic number of atoms. Such an extension is not so straightforward, since the number of contributing equations rises exponentially with the number of atoms, and certain simplifying approximations are evidently needed. In this sense our calculations of the scattering cross section, performed for a small collection of atoms, can be considered a precursor to the calculation of the transmittance coefficient, which would be a key characteristic in the macroscopic description of the problem. Our calculations indicate a preferable contribution of the Rayleigh mechanism to the overall cooperative scattering process for a density and disorder level near the Ioffe-Regel bound $n_0\lambdabar^3\sim 1$. It is important that in this case one of the absorption resonances is located in the spectral domain near zero detuning for any atomic configuration, and this provides the desirable conditions for further observation of the EIT phenomenon. The presence of the control mode, tuned to this predictable resonance point and applied in an “empty” arm of the $\Lambda$ scheme (see Fig. \[fig1\]), would make the atomic sample transparent for a signal pulse. Due to the controllable spectral dispersion the signal pulse could be delayed and effectively converted into a long-lived spin coherence. Realization of this scheme requires essentially fewer atoms than for dilute ensembles prepared in warm vapors or in MOT experiments. 
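The saving in atom number can be put in rough quantitative terms using the estimate $N > b_0^2/(n_0\lambdabar^3)$ for a fixed resonant optical depth $b_0$; the numerical values below are purely illustrative, not taken from any experiment:

```python
def atoms_required(b0, n0_lambdabar3):
    """Lower bound N > b0^2 / (n0*lambdabar^3) on the number of atoms needed
    to reach resonant optical depth b0 at dimensionless density n0*lambdabar^3,
    allowing for diffraction losses (rough estimate, valid for n0*lambdabar^3 << 1)."""
    return b0 ** 2 / n0_lambdabar3

b0 = 10.0  # illustrative target optical depth
for density in (0.01, 0.1, 1.0):
    print(f"n0*lambdabar^3 = {density:5.2f} -> N > {atoms_required(b0, density):.0f}")
```

Going from a dilute sample ($n_0\lambdabar^3 = 0.01$) toward the dense regime ($n_0\lambdabar^3 \sim 1$) lowers the required atom number by two orders of magnitude, which is the motivation stated in the text.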
Roughly, for a fixed optical depth $b_0\sim n_0\lambdabar^2L$, where $L$ is the sample length, and for $n_0\lambdabar^3\ll 1$, the required number of atoms, allowing for diffraction losses, should be more than $b_0^2/n_0\lambdabar^3$. This number can be minimized if we approach the dense configuration $n_0\lambdabar^3\sim 1$ and make the near-field effects manifest. We are currently working on a self-consistent modification of the presented calculation scheme to make it applicable to a multiatomic ensemble and then to describe the problem in the macroscopic limit. This can be done if we take into consideration the near-field effects only for the neighboring atoms separated by a distance of a wavelength. For the intermediate densities with $n_0\lambdabar^3\sim 1$ we can soften our original estimate, given in Sec. \[II.B\], for the number of equations to be solved, and can expect that the actual number would scale as $d_eN\,d_g^{n-1}$. Here $n-1\sim n_0\lambdabar^3$ is a varying parameter denoting the number of neighboring atoms whose near fields interfere with a selected specific atom. Our preliminary analysis shows that such a calculation algorithm should demonstrate a rapidly converging series with increasing $n$ and would allow us to include the control mode in the entire calculation procedure. Such a modification of the performed calculation scheme would be practically important and generally interesting for a better understanding of the microscopic nature of $\Lambda$-type optical interaction in macroscopic atomic systems existing in a strong-disorder regime. Summary ======= In this paper we have studied the problem of light scattering from a collection of atoms with a degenerate structure of the ground state, which cooperatively interact with the scattered light. We have discussed the difference in the scattering process between such a system of atoms and the well-known object of the Dicke problem, an ensemble of two-level V-type atoms. 
The investigation is specifically focused on understanding the principal aspects of the scattering processes that can occur and how they change as the atomic density is varied from low values to levels where the mean separation between atoms is of the order of the radiation wavelength. For both the $\Lambda$- and V-type systems the spectral profile of the scattering cross section strongly depends on the particular atomic spatial configuration. However, in the case of the degenerate ground state, the presence of Raman scattering channels washes out the visible signature of the super- and subradiant excitation modes in the resolvent spectrum, which are normally resolved in a system consisting of two-level atoms. We have discussed the advantages of the considered system in the context of its possible implications for the problem of light storage in a macroscopic ensemble of dense and ultracold atoms, and we point out that the quantum memory protocol can be effectively organized with essentially fewer atoms than in the dilute configuration regime. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== We thank Elisabeth Giacobino, Igor Sokolov, Ivan Sokolov, and Julien Laurat for fruitful discussions. The work was supported by RFBR 10-02-00103, by the CNRS-RFBR collaboration (CNRS 6054 and RFBR 12-02-91056), and by the Federal Program “Scientific and Scientific-Pedagogical Personnel of Innovative Russia 2009-2013” (Contract No. 14.740.11.1173). [10]{} K. Hammerer, A. S[ø]{}rensen, and E. Polzik, Rev. Mod. Phys. **82**, 1041 (2010). C. Simon *et al.*, Eur. Phys. J. D **58**, 1 (2010). J. Phys. B: At. Mol. Opt. Phys. **45** \#12 (2012), special issue on quantum memory. I. Novikova, A.V. Gorshkov, D.F. Phillips, A.S. S[ø]{}rensen, M.D. Lukin, and R.L. Walsworth, Phys. Rev. Lett. **98** 243602 (2007); I. Novikova, N.B. Phillips, and A.V. Gorshkov, Phys. Rev. A **78** 021802(R) (2008). K.S. Choi, H. Deng, J. Laurat, and H.J. Kimble, Nature (London) **452** 67 (2008). J. Simon, H. 
Tanji, J.K. Thompson, and V. Vuletić, Phys. Rev. Lett. **98** 183601 (2007). S. Du, P. Kolchin, C. Belthangady, G. Y. Yin, and S. E. Harris, Phys. Rev. Lett. **100** 183603 (2008). M. Scherman, O.S. Mishina, P. Lombardi, J. Laurat, and E. Giacobino, Optics Express **20**, 4346 (2012); O.S. Mishina, M. Scherman, P. Lombardi, J. Ortalo, D. Felinto, A.S. Sheremet, A. Bramati, D.V. Kupriyanov, J. Laurat, and E. Giacobino, Phys. Rev. A **83** 053809 (2011). L.S. Froufe-P[é]{}rez, W. Guerin, R. Carminati, and R. Kaiser, Phys. Rev. Lett. **102** 173903 (2009); W. Guerin, N. Mercadier, F. Michaud, D. Brivio, L.S. Froufe-P[é]{}rez, R. Carminati, V. Eremeev, A. Goetschy, S.I. Skipetrov, and R. Kaiser, J. Opt. **12** 024002 (2010). L.V. Gerasimov, I.M. Sokolov, R.G. Olave, and M.D. Havey, J. Opt. Soc. Am. B **28** 1459 (2011); L.V. Gerasimov, I.M. Sokolov, R.G. Olave, and M.D. Havey, J. Phys. B: At. Mol. Opt. Phys. **45** 124012 (2012). R. Kaiser, J. Mod. Opt. **56** 2082 (2009). S. Balik, M.D. Havey, I.M. Sokolov, and D.V. Kupriyanov, Phys. Rev. A **79** 033418 (2009); I.M. Sokolov, D.V. Kupriyanov, R.G. Olave, and M.D. Havey, J. Mod. Opt. **57** 1833 (2010). M.G. Benedict, A.M. Ermolaev, V.A. Malyshev, I.V. Sokolov, and E.D. Trifonov, *Super-radiance: Multiatomic Coherent Emission* (Institute of Physics Publishing, Bristol, 1996). E. Akkermans and G. Montambaux, *Mesoscopic Physics of Electrons and Photons* (Cambridge University Press, Cambridge, 2007). A. Grubellier, Phys. Rev. A **15**, 2430 (1977). M. Rusek, J. Mostowski, and A. Orlowski, Phys. Rev. A **61** 022704 (2000); F.A. Pinheiro, M. Rusek, A. Orlowski, and B.A. van Tiggelen, Phys. Rev. E **69** 026605 (2004). A. Gero and E. Akkermans, Phys. Rev. A **75** 053413 (2007); E. Akkermans, A. Gero, and R. Kaiser, Phys. Rev. Lett. **101** 103602 (2008). I.M. Sokolov, M.D. Kupriyanova, D.V. Kupriyanov, and M.D. Havey, Phys. Rev. A **79** 053405 (2009). C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, *Atom-Photon Interactions. 
Basic Processes and Applications* (John Wiley, New York, 1992). V.B. Berestetskii, E.M. Lifshits, and L.P. Pitaevskii, *Course of Theoretical Physics: Quantum Electrodynamics* (Pergamon Press, Oxford, 1981). L.D. Landau and E.M. Lifshits, *Course of Theoretical Physics: Quantum Mechanics* (Pergamon Press, Oxford, 1981). M. Born and E. Wolf, *Principles of Optics* (Pergamon Press, Oxford, 1964). [^1]: By “parity” we mean the symmetry of the self-energy part under transposition of the atoms. This is similar to the parity definition for homonuclear diatomic molecules in chemistry; see Ref. [@LaLfIII].
--- abstract: 'In this paper, we recall our renormalized quantum Q-system associated with representations of the Lie algebra $A_r$, and show that it can be viewed as a quotient of the quantum current algebra $U_q(\n[u,u^{-1}])\subset U_q(\widehat{\sl}_2)$ in the Drinfeld presentation. Moreover, we find the interpretation of the conserved quantities in terms of Cartan currents at level 0, and the rest of the current algebra, in a non-standard polarization in terms of generators in the quantum cluster algebra.' address: - 'PDF: Department of Mathematics, University of Illinois MC-382, Urbana, IL 61821, U.S.A. e-mail: philippe@illinois.edu' - 'RK: Department of Mathematics, University of Illinois MC-382, Urbana, IL 61821, U.S.A. e-mail: rinat@illinois.edu' author: - Philippe Di Francesco - Rinat Kedem bibliography: - 'refs.bib' title: 'Quantum Q systems: From cluster algebras to quantum current algebras' --- Introduction ============ An extended quantum Q system ============================ Proofs {#proofsec} ====== The quantum affine algebra ========================== Discussion/Conclusion =====================
--- abstract: | From formal and practical analysis, we identify new challenges that self-adaptive systems pose to the process of quality assurance. When tackling these, the effort spent on various tasks in the process of software engineering is naturally re-distributed. We claim that all steps related to testing need to become self-adaptive to match the capabilities of the self-adaptive system-under-test. Otherwise, the adaptive system’s behavior might elude traditional variants of quality assurance. We thus propose the paradigm of scenario coevolution, which describes a pool of test cases and other constraints on system behavior that evolves in parallel to the (in part autonomous) development of behavior in the system-under-test. Scenario coevolution offers a simple structure for the organization of adaptive testing that allows for both human-controlled and autonomous intervention, supporting software engineering for adaptive systems on a procedural as well as technical level. self-adaptive systems, software engineering, quality assurance, software evolution author: - Thomas Gabor$^1$ - Marie Kiermeier$^1$ - Andreas Sedlmeier$^1$ - Bernhard Kempter$^2$ - Cornel Klein$^2$ - Horst Sauer$^2$ - Reiner Schmid$^2$ - Jan Wieghardt$^2$ bibliography: - 'references.bib' title: | Adapting Quality Assurance\ to Adaptive Systems:\ The Scenario Coevolution Paradigm --- Introduction ============ Until recently, the discipline of software engineering has mainly tackled the process through which humans develop software systems. In the last few years, breakthroughs in the fields of artificial intelligence and machine learning have enabled new possibilities that have previously been considered infeasible or just too complex to tap into with “manual” coding: Complex image recognition, natural language processing, or decision making as it is used in complex games are prime examples. The resulting applications are pushing towards a broad audience of users.
However, as of now, they are mostly focused on non-critical areas of use, at least when implemented without further human supervision. Software artifacts generated via machine learning are hard to analyze, causing a lack of trustworthiness for many important application areas. We claim that in order to reinstate levels of trustworthiness comparable to well-known classical approaches, we need not essentially reproduce the principles of classical software test but need to develop a new approach towards software testing. We suggest developing a system and its test suite in a competitive setting where each sub-system tries to outwit the other. We call this approach *scenario coevolution* and attempt to show the necessity of such an approach. We hope that trust in that dynamic (or similar ones) can help to build a new process for quality assurance, even for hardly predictable systems. Following a top-down approach to the issue, we start in Section \[sec:formal\] by introducing a formal framework for the description of systems. We augment it to also include the process of software and system development. Section \[sec:related-work\] provides a short overview of related work. From literature review and practical experience, we introduce four core concepts for the engineering of adaptive systems in Section \[sec:concepts\]. In order to integrate these with our formal framework, Section \[sec:scenarios\] introduces our notion of scenarios and their application to an incremental software testing process. In Section \[sec:applications\] we discuss what effect scenario coevolution has on a selection of practical software engineering tasks and how it helps implement the core concepts. Finally, Section \[sec:conclusion\] provides a short conclusion.
We first build upon the framework described in [@holzl2011towards] to define adaptive systems and then proceed to reason about the influence of their inherent structure on software architecture. Describing Adaptive Systems --------------------------- We roughly adopt the formal definitions of our vocabulary related to the description of systems from [@holzl2011towards]: We describe a system as an arbitrary relation over a set of variables. \[def:system\] Let $I$ be a (finite or infinite) set, and let $\mathcal{V} = (V_i)_{i \in I}$ be a family of sets. A *system* of type $\mathcal{V}$ is a relation $S$ of type $\mathcal{V}$. Given a system $S$, an element $s \in S$ is called the state of the system. For practical purposes, we usually want to discern various parts of a system’s state space. For this reason, parts of the system relation of type $\mathcal{V}$ given by an index set $J \subseteq I$, i.e., $(V_j)_{j \in J}$, may be considered *inputs* and other parts given by a different index set may be considered *outputs* [@holzl2011towards]. Formally, this makes no difference to the system. Semantically, we usually compute the output parts of the system using the input parts. We introduce two more designated sub-spaces of the system relation: *situation* and *behavior*. These notions correspond roughly to the intended meaning of inputs and outputs mentioned before. The situation is the part of the system state space that fully encapsulates all information the system has about its state. This may include parts that the system does have full control over (which we would consider counter-intuitive when using the notion of “input”). The behavior encapsulates the parts of the system that can only be computed by applying the system relation. Likewise, this does *not* imply that the system has full control over the values. Furthermore, a system may have an *internal state*, which comprises the parts of the state space that are neither included in the situation nor in the behavior.
When we are not interested in the internal state, we can regard a system as a mapping from situations to behavior, written $S = X \stackrel{Z}{\leadsto} Y$ for situations $X$ and behaviors $Y$, where $Z$ is the internal state of the system $S$. Using these notions, we can more aptly define some properties on systems. Further following the line of thought presented in [@holzl2011towards], we want to build systems out of other systems. At the core of software engineering, there is the principle of re-use of components, which we want to mirror in our formalism. Let $S_1$ and $S_2$ be systems of types $\mathcal{V}_1 = (V_{1,i})_{i \in I_1}$ and $\mathcal{V}_2 = (V_{2,i})_{i \in I_2}$, respectively. Let $\mathcal{R}(\mathcal{V})$ be the domain of all relations over $\mathcal{V}$. A *combination operator* $\otimes$ is a function such that $S_1 \otimes S_2 \in \mathcal{R}(\mathcal{V})$ for some family of sets $\mathcal{V}$ with $V_{1,1}, ..., V_{1,m}, V_{2,1}, ..., V_{2,n} \in \mathcal{V}$.[^1] The application of a combination operator is called *composition*. The arguments to a combination operator are called *components*. Composition is not only important to model software architecture within our formalism, but it also defines the formal framework for interaction: Two systems interact when they are combined using a combination operator $\otimes$ that ensures that the behavior of (at least) one system is recognized within the situation of (at least) another system. Let $S = S_1 \otimes S_2$ be a composition of type $\mathcal{V}$ of systems $S_1$ and $S_2$ of type $\mathcal{V}_1$ and $\mathcal{V}_2$, respectively, using a combination operator $\otimes$. If there exist a $V_1 \in \mathcal{V}_1$ and a $V_2 \in \mathcal{V}_2$ and a relation $R \subseteq V_1 \times V_2$ so that for all states $s \in S$, $(proj(s, V_1), proj(s, V_2)) \in R$, then the components $S_1$ and $S_2$ interact with respect to $R$.
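These notions can be made concrete with a small sketch (all names and data below are illustrative, not taken from the paper): finite systems are encoded as lists of states over named variables, one possible combination operator joins states that agree on shared variables, and the interaction check projects each combined state onto a chosen pair of variables.

```python
def combine(s1, s2):
    """One possible combination operator: pair up states of the two
    component systems that agree on all shared variable names."""
    combined = []
    for a in s1:
        for b in s2:
            shared = set(a) & set(b)
            if all(a[v] == b[v] for v in shared):
                combined.append({**a, **b})
    return combined

def interact(system, v1, v2, relation):
    """S1 and S2 interact w.r.t. R iff every state of the composition
    projects into R on the chosen variables V1 and V2."""
    return all((s[v1], s[v2]) in relation for s in system)

# A sensor exposing a temperature and a controller reacting to it:
sensor = [{"temp": 10}, {"temp": 30}]
controller = [{"temp": 10, "heat": True}, {"temp": 30, "heat": False}]
closed = combine(sensor, controller)
# The heater is on exactly in the cold situation:
assert interact(closed, "temp", "heat", {(10, True), (30, False)})
```

Here the join-style `combine` plays the role of $\otimes$, and the set `{(10, True), (30, False)}` the role of the relation $R$.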
We can model an open system $S$ as a combination $S = C \otimes E$ of a core system $C$ and its environment $E$, both being modeled as systems again. Hiding some of the complexity described in [@holzl2011towards], we assume we have a logic $\mathfrak{L}$ in which we can express a system goal $\gamma$. We can always decide if $\gamma$ holds for a given system, in which case we write $S \models \gamma$ for $\gamma(S) = \top$. Based on [@holzl2011towards], we can use this concept to define an adaptation domain: \[def:adaptation-domain\] Let $S$ be a system. Let $\mathcal{E}$ be a set of environments that can be combined with $S$ using a combination operator $\otimes$. Let $\Gamma$ be a set of goals. An *adaptation domain* $\mathcal{A}$ is a set $\mathcal{A} \subseteq \mathcal{E} \times \Gamma$. $S$ can adapt to $\mathcal{A}$, written $S \Vdash \mathcal{A}$ iff for all $(E, \gamma) \in \mathcal{A}$ it holds that $S \otimes E \models \gamma$. \[def:adaptation-space\] Let $\mathcal{E}$ be a set of environments that can be combined with $S$ using a combination operator $\otimes$. Let $\Gamma$ be a set of goals. An *adaptation space* $\mathfrak{A}$ is a set $\mathfrak{A} \subseteq \mathfrak{P}(\mathcal{E} \times \Gamma)$. We can now use the notion of an adaptation space to define a preorder on the adaptivity of any two systems. \[def:adaptation\] Given two systems $S$ and $S'$, $S'$ is at least as adaptive as $S$, written $S \sqsubseteq S'$ iff for all adaptation domains $\mathcal{A} \in \mathfrak{A}$ it holds that $S \Vdash \mathcal{A} \Longrightarrow S' \Vdash \mathcal{A}$. Both Definitions \[def:adaptation-domain\] and \[def:adaptation-space\] can be augmented to include soft constraints or optimization goals. This means that in addition to checking against boolean goal satisfaction, we can also assign each system $S$ interacting with an environment $E$ a *fitness* $\phi(S \otimes E) \in F$, where $F$ is the type of fitness values.
We assume that there exists a preorder $\preceq$ on $F$, which we can use to compare two fitness values. We can then generalize Definition \[def:adaptation-domain\] and \[def:adaptation-space\] to respect these optimization goals. \[def:adaptation-domain-opt\] Let $S$ be a system. Let $\mathcal{E}$ be a set of environments that can be combined with $S$ using a combination operator $\otimes$. Let $\Gamma$ be a set of Boolean goals. Let $F$ be a set of fitness values and $\preceq$ be a preorder on $F$. Let $\Phi$ be a set of fitness functions with codomain $F$. An *adaptation domain* $\mathcal{A}$ is a set $\mathcal{A} \subseteq \mathcal{E} \times \Gamma \times \Phi$. $S$ can adapt to $\mathcal{A}$, written $S \Vdash \mathcal{A}$ iff for all $(E, \gamma, \phi) \in \mathcal{A}$ it holds that $S \otimes E \models \gamma$. Note that in Definition \[def:adaptation-domain-opt\] we only augmented the data structure for adaptation domains but did not actually alter the condition to check for the fulfillment of an adaptation domain. This means that for an adaptation domain $\mathcal{A}$, a system needs to fulfill all goals in $\mathcal{A}$ but is not actually tested on the fitness defined by $\phi$. We could define a fitness threshold $f$ we require a system $S$ to surpass in order to adapt to $\mathcal{A}$ in the formalism. But such a check, written $f \preceq \phi(S \otimes E)$, could already be included in the Boolean goals if we use a logic that is expressive enough. Instead, we want to use the fitness function as a soft constraint: We expect the system to perform as well as possible on this metric, but we do not (always) require a minimum level of performance.
However, we can use fitness to define a fitness preorder on systems: \[def:optimization\] Given two systems $S$ and $S'$ as well as an adaptation domain $\mathcal{A}$, $S'$ is at least as optimal as $S$, written $S \preceq_\mathcal{A} S'$, iff for all $(E, \gamma, \phi) \in \mathcal{A}$ it holds that $\phi(S \otimes E) \preceq \phi(S' \otimes E)$. \[def:adaptation-opt\] Given two systems $S$ and $S'$, $S'$ is at least as adaptive as $S$ with respect to optimization, written $S \sqsubseteq^* S'$ iff for all adaptation domains $\mathcal{A} \in \mathfrak{A}$ it holds that $S \Vdash \mathcal{A} \Longrightarrow S' \Vdash \mathcal{A}$ and $S \preceq_\mathcal{A} S'$. Note that so far our notions of adaptivity and optimization are purely extensional, which originates from the black box perspective on adaptation assumed in [@holzl2011towards]. Constructing Adaptive Systems ----------------------------- We now shift the focus of our analysis a bit away from the question “When is a system adaptive?” towards the question “How is a system adaptive?”. This refers to both questions of software architecture (i.e., which components should we use to make an adaptive system?) and questions of software engineering (i.e., which development processes should we use to develop an adaptive system?). We will see that with the increasing usage of methods of artificial intelligence, design-time engineering and run-time adaptation increasingly overlap [@wirsing2015software].
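Before constructing adaptive systems, the adaptation definitions above can be summarized in a small executable sketch. The encoding is hypothetical and ours, not the paper’s formalism: systems as functions from environments to outcomes, goals as predicates, fitness functions as real-valued maps, and a domain as a list of $(E, \gamma, \phi)$ triples.

```python
def adapts_to(system, domain):
    """S ⊩ A: every goal in the adaptation domain holds for S ⊗ E.
    The fitness φ is carried along but, being a soft constraint, not checked."""
    return all(goal(system(env)) for env, goal, fitness in domain)

def at_least_as_adaptive_opt(s, s_prime, adaptation_space):
    """S ⊑* S': for every domain A, S ⊩ A implies S' ⊩ A, and S ⪯_A S'."""
    for domain in adaptation_space:
        if adapts_to(s, domain) and not adapts_to(s_prime, domain):
            return False
        if any(fit(s(env)) > fit(s_prime(env)) for env, goal, fit in domain):
            return False
    return True

weak = lambda env: env          # identity behavior
strong = lambda env: env + 1    # same goals met, strictly better fitness
domain = [(e, lambda out: out >= 0, lambda out: out) for e in (0, 1, 2)]
assert adapts_to(weak, domain)
assert at_least_as_adaptive_opt(weak, strong, [domain])
```

Note that, mirroring Definition \[def:adaptation-domain-opt\], `adapts_to` only tests the Boolean goals; fitness enters solely through the preorder check in `at_least_as_adaptive_opt`.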
\[def:adaptation-sequence\] A series of $|I|$ systems $\mathcal{S} = (S_i)_{i\in I}$ with index set $I$ with a preorder $\leq$ on the elements of $I$ is called an *adaptation sequence* iff for all $i, j \in I$ it holds that $i \leq j \Longrightarrow S_i \sqsubseteq^* S_j$. Note that we used adaptation with optimization in Definition \[def:adaptation-sequence\] so that a sequence of systems $(S_i)_{i\in I}$ that each fulfill the same hard constraints ($\gamma$ within a singleton adaptation space $\mathfrak{A} = \{\{(E, \gamma, \phi)\}\}$) can form an adaptation sequence iff for all $i, j \in I$ it holds that $i \leq j \Longrightarrow \phi(S_i \otimes E) \preceq \phi(S_j \otimes E)$. This is the purest formulation of an optimization process within our formal framework.[^2] Such an adaptation sequence can be generated by continuously improving a starting system $S_0$ and adding each improvement to the sequence. Such a task can be performed either by a team of human developers or by standard optimization algorithms as they are used in artificial intelligence. Only in the latter case do we want to consider that improvement as happening within our system boundaries. Unlike the previously performed black-box analysis of systems, the presence of an optimization algorithm within the system itself does have implications for the system’s internal structure. We will thus switch to a more “grey box” analysis in the spirit of [@bruni2012conceptual]. \[def:self-adaptation\] A system $S_0$ is called *self-adaptive* iff the sequence $(S_i)_{i \in \mathbb{N}, i < n}$ for some $n \in \mathbb{N}$ with $S_i = S_0 \otimes S_{i-1}$ for $0 < i < n$ and some combination operator $\otimes$ is an adaptation sequence. Note that we could define the property of self-adaptation more generally by again constructing an index set on the sequence $(S_i)$ instead of using $\mathbb{N}$, but chose not to do so to not further clutter the notation.
For most practical purposes, the adaptation is going to happen in discrete time steps anyway. It is also important to remember that despite its notation, the combination operator $\otimes$ does not need to be symmetric and likely will not be in this case, because when constructing $S_0 \otimes S_{i-1}$ we usually want to pass the previous instance $S_{i-1}$ to the general optimization algorithm encoded in $S_0$.[^3] Furthermore, it is important to note that the constant sequence $(S)_{i \in \mathbb{N}}$ is an adaptation sequence according to our previous definition and thus every system is self-adaptive with respect to a combination operator $X \otimes Y =_\text{def} X$. However, we can construct non-trivial adaptation sequences using partial orders $\sqsubset$ and $\prec$ instead of $\sqsubseteq$ and $\preceq$. As these can easily be constructed, we do not further discuss their definitions in this paper. In [@holzl2011towards] a corresponding definition was already introduced for $\sqsubset$. The formulation of the adaptation sequence used to prove self-adaptivity naturally implies some kind of temporal structure. So basing said structure around $\mathbb{N}$ implies a very simple, linear and discrete model of time. More complex temporal evolution of systems is also already touched upon in [@holzl2011towards]. As noted, there may be several ways to define such a temporal structure on systems. We refer to related and future work for a more intricate discussion on this matter. So, non-trivial self-adaptation does imply some structure for any self-adaptive system $S$ of type $\mathcal{V} = (V_i)_{i \in I}$: Mainly, there needs to be a subset of the type $\mathcal{V}' \subseteq \mathcal{V}$ that is used to encode the whole relation behind $S$ so that the already improved instances can sufficiently be passed on to the general adaptation mechanism.
For a general adaptation mechanism (as we previously assumed to be part of a system) to be able to improve a system’s adaptivity, it needs to be able to access some representation of its goals and its fitness function. This provides a grey-box view of the system. We remember that we assumed we could split a system $S$ into situation $X$, internal state $Z$ and behavior $Y$, written $S = X \stackrel{Z}{\leadsto} Y$. If $S$ is self-adaptive, it can form a non-trivial adaptation sequence by improving on its goals or its fitness. In the former case, we can now assume that there exists some relation $G \subseteq X \cup Z$ so that $S \models \gamma \iff G \models \gamma$ for a fixed $\gamma$ in a singleton-space adaptation sequence. In the latter case, we can assume that there exists some relation $F \subseteq X \cup Z$ so that $\phi(S) = \phi(F)$ for a fixed $\phi$ in a singleton-space adaptation sequence. Obviously, when we want to construct larger self-adaptive systems using self-adaptive components, the combination operator needs to be able to combine said sub-systems $G$ and/or $F$ as well. In the case where the components’ goals and fitnesses match completely, the combination operator can just use the same sub-system twice. However, including the global goals or fitnesses within each local component of a system does not align with common principles in software architecture (such as encapsulation) and does not seem to be practical for large or open systems (where no process may ensure such a unification). Thus, constructing a component-based self-adaptive system requires a combination operator that can handle potentially conflicting goals and fitnesses. We again define such a system for a singleton adaptation space $\mathfrak{A} = \{\{(E, \gamma, \phi)\}\}$ and leave the generalization to all adaptation spaces out of the scope of this paper. \[def:mas\] Given a system $S = S_1 \otimes ... \otimes S_n$ that adapts to $\mathcal{A} = \{(E, \gamma, \phi)\}$. 
Iff for each $1 \leq i \leq n$ with $i, n \in \mathbb{N}, n > 1$ there is an adaptation domain $\mathcal{A}_i = \{(E_i, \gamma_i, \phi_i)\}$ so that (1) $E_i = E \otimes S_1 \otimes ... \otimes S_{i-1} \otimes S_{i+1} \otimes ... \otimes S_n$ and (2) $\gamma_i \neq \gamma$ or $\phi_i \neq \phi$ and (3) $S_i$ adapts to $\mathcal{A}_i$, then $S$ is a *multi-agent system* with agents $S_1, ..., S_n$. For practical purposes, we usually want to use the notion of multi-agent systems in a transitive way, i.e., we can call a system a multi-agent system as soon as any part of it is a multi-agent system according to Definition \[def:mas\]. Formally, $S$ is a multi-agent system if there are system components $S', R$ so that $S = S' \otimes R$ and $S'$ is a multi-agent system. We argue that this transitivity is not only justified but a crucial point for systems development of adaptive systems: Agents tend to utilize their environment to fulfill their own goals and can thus “leak” their goals into other system components. Note that Condition (2) of Definition \[def:mas\] ensures that not every system constructed by composition is regarded as a multi-agent system; it is necessary to feature agents with (at least slightly) differing adaptation properties. For the remainder of this paper, we will apply Definition \[def:mas\] “backwards”: Whenever we look at a self-adaptive system $S$, whose goals or fitnesses can be split into several sub-goals or sub-fitnesses, we can regard $S$ as a multi-agent system. Using this knowledge, we can apply design patterns from multi-agent systems to all self-adaptive systems without loss of generality. Furthermore, we need to be aware that especially if we do not explicitly design multi-agent coordination between different sub-goals, such a coordination will be done implicitly. Essentially, there is no way around generalizing software engineering approaches for self-adaptive systems to potentially adversarial components.
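To close this section, a toy sketch of Definition \[def:self-adaptation\] may help: a generic hill-climbing improver plays the role of $S_0$, composition is simplified to plain function application (an assumption of ours, not part of the formalism), and iterating the improver yields an adaptation sequence.

```python
def improve(previous, fitness, neighbors):
    """The generic optimizer S_0; composing it with S_{i-1} (here: plain
    function application) yields the next, never-worse instance."""
    return max([previous, *neighbors(previous)], key=fitness)

def adaptation_sequence(s0, fitness, neighbors, steps):
    seq = [s0]
    for _ in range(steps):
        seq.append(improve(seq[-1], fitness, neighbors))
    return seq

# Toy instance: a "system" reduced to one tunable parameter.
fit = lambda x: -(x - 5) ** 2
seq = adaptation_sequence(0, fit, lambda x: [x - 1, x + 1], steps=6)
# Defining property of an adaptation sequence: fitness never decreases.
assert all(fit(a) <= fit(b) for a, b in zip(seq, seq[1:]))
```

The constant tail of the sequence once the optimum is reached also illustrates why the trivial constant sequence is always an adaptation sequence, as noted above.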
Related Work {#sec:related-work} ============ Many researchers and practitioners in recent years have already been concerned about the changes necessary to allow for solid and reliable software engineering processes for (self-)adaptive systems. Central challenges were collected in [@salehie2009self], where issues of quality assurance are already mentioned but the focus is more on bringing about complex adaptive behavior in the first place. The later research roadmap of [@de2013software] puts a strong focus on interaction patterns of already adaptive systems (both between each other and with human developers) and already dedicates a section to verification and validation issues, being close in spirit to the perspective of this work. We fall in line with the roadmap further specified in [@bures2015software; @belzner2016software; @bures2017software]. While this work largely builds upon [@holzl2011towards], there have been other approaches to formalize the notion of adaptivity: [@oreizy1999architecture] discusses high-level architectural patterns that form multiple inter-connected adaptation loops. In [@arcaini2015modeling] such feedback loops are based on the MAPE-K model [@kephart2003vision]. While these approaches largely focus on the formal construction of adaptive systems, there have also been approaches that assume a (more human-centric or at least tool-centric) software engineering perspective [@elkhodary2010fusion; @andersson2013software; @gabor2016simulation; @weyns2017software]. We want to discuss two of those in greater detail: In the results of the *ASCENS* (Autonomous Service Component ENSembles) project [@wirsing2015software], the interplay between human developers and autonomous adaptation has been formalized in a life-cycle model featuring separate states for the development progress of each respective feedback cycle.
Classical software development tasks and self-adaptation (as well as self-monitoring and self-awareness) are regarded as equally powerful contributing mechanisms for the production of software. Both can be employed in conjunction to steer the development process. In addition, ASCENS built upon a (in parts) similar formal notion of adaptivity [@bruni2012conceptual; @nicola2014formal] and sketched a connection between adaptivity in complex distributed systems and multi-goal multi-agent learning [@holzl2015reasoning]. *ADELFE* (Atelier de Développement de Logiciels à Fonctionnalité Emergente) is a toolkit designed to augment current development processes to account for complex adaptive systems [@bernon2003tools; @bernon2005engineering]. For this purpose, ADELFE is based on the Rational Unified Process (RUP) [@kruchten2004rational] and comes with tools for various tasks of software design. From a more scientific point of view, ADELFE is also based on the theory of adaptive multi-agent systems. For ADELFE, multi-agent systems are used to derive a set of stereotypes for components, which ease modeling for the corresponding types of systems. It thus imposes stronger restrictions on system design than our approach intends to. Besides the field of software engineering, the field of artificial intelligence research is currently (re-)discovering a lot of the same issues the discipline of engineering for complex adaptive systems faced: The highly complex and opaque nature of machine learning algorithms and the resulting data structures often forces black-box testing and makes any possible guarantees weak. When online learning is employed, the algorithm’s behavior is subject to great variance and testing usually needs to work online as well. The seminal paper [@amodei2016concrete] provides a good overview of the issues.
When applying artificial intelligence to a large variety of products, rigorous engineering for this kind of software seems to be one of the major necessities lacking at the moment. Core Concepts of Future Software Engineering {#sec:concepts} ============================================ Literature makes it clear that one of the main issues of the development of self-adapting systems lies with *trustworthiness*. Established models for checking systems (i.e., verification and validation) do not really fit the notion of a constantly changing system. However, these established models represent all the reason we have at the moment to trust the systems we developed. Allowing the system more degrees of freedom thus hinders the developers’ ability to estimate the degree of maturity of the system they design, which poses a severe difficulty for the engineering process, when the desired premises or the expected effects of classical engineering tasks on the system-under-development are hard to formulate. To help us control the development/adaptation progress of the system, we define a set of *principles*, which are basically patterns for process models. They describe the changes to be made in the engineering process for complex, adaptive systems in relation to more classical models for software and systems engineering. \[con:parallelism\] The system and its test suite should develop in parallel from the start with controlled moments of interchange of information. Eventually, the test system is to be deployed alongside the main system so that even during runtime, on-going online tests are possible [@calinescu2012self]. This argument has been made for more classical systems as well, and thus classical software testing, too, is no longer restricted to a specific phase of software development.
However, in the case of self-learning systems, it is important to focus on the evolution of test cases: The capabilities of the system might not grow in the way experienced test designers would expect from systems entirely realized by human engineering effort. Thus, it is important to conceive and formalize how tests in various phases relate to each other. \[con:antagonism\] Any adaptive system must be subject to an equally adaptive test. Overfitting is a known issue for many machine learning techniques. In software development for complex adaptive systems, it can happen on a larger scale: Any limited test suite (we expect our applications to be too complex to run a complete, exhaustive test) might induce certain unwanted biases. Ideally, once we know about the cases our system has a hard time with, we can train it specifically for these situations. For the so-hardened system the search mechanism that gave us the hard test cases needs to come up with even harder ones to still beat the system-under-test. Employing autonomous adaptation at this stage is expected to make that arms race more immediate and faster than it is usually achieved with human developers and testers alone. \[con:automated\] Since the realization of tasks concerning adaptive components usually means the application of a standard machine learning process, a lot of the development effort regarding certain tasks tends to shift to an earlier phase in the process model. When applying machine learning techniques, for example, most developer time tends to be spent on gathering information about the problem to solve and on finding the right setup of parameters to use; the training of the learning agent then usually follows one of a few standard procedures and can run rather automatically. However, preparing and testing the component’s adaptive abilities might take a lot of effort, which might occur in the design and test phase instead of the deployment phase of the system life-cycle.
\[con:general\] To provide room for and exploit the system’s ability to self-adapt, many artifacts produced by the engineering process tend to become more general in nature, i.e., they tend to feature more open parameters or degrees of freedom in their description. In effect, in the place of single artifacts in a classical development process, we tend to find families of artifacts or processes generating artifacts when developing a complex adaptive system. As we assume that the previously only static artifact is still included in the set of artifacts available in its place now, we call this shift “generalization” of artifacts. Following this change, many of the activities performed during development shift their targets from concrete implementations to more general artifacts, e.g., building a test suite no longer yields a series of runnable test cases but instead produces a test case generator. When this principle is broadly applied, the development activities shift towards “meta development”. The developers are concerned with setting up a process able to find good solutions autonomously instead of finding the good solutions directly. Scenarios {#sec:scenarios} ========= We now want to include the issue of testing adaptive systems in our formal framework. We recognize that any development process for systems following the principles described in Section \[sec:formal\] produces two central types of artifacts: The first one is a system $S = X \stackrel{Z}{\leadsto} Y$ with a specific desired behavior $Y$ so that it manages to adapt to a given adaptation space. The second is a set of situations, test cases, constraints, and checked properties that this system’s behavior has been validated against. We call artifacts of the second type by the group name of *[scenarios]{}*. \[def:scenario\] Let $S = X \stackrel{Z}{\leadsto} Y$ be a system and $\mathcal{A} = \{(E, \gamma, \phi)\}$ a singleton adaptation domain.
A tuple $c = (X, Y, g, f), g \in \{\top, \bot \}, f \in \text{cod}(\phi)$ with $g = \top \iff S \otimes E \models \gamma$ and $f = \phi(S \otimes E)$ is called *scenario*.[^4] Semantically, scenarios represent the experience gained about the system’s behavior during development, including both successful ($S \vDash \gamma$) and unsuccessful ($S \nvDash \gamma$) test runs. As stated above, since we expect to operate in test spaces we cannot cover exhaustively, the knowledge about the areas we did cover is an important asset and likewise a result of the systems engineering process. Effectively, as we construct and evolve a system $S$ we want to construct and augment a set of scenarios $C = \{c_1, ..., c_n\}$ alongside with it. $C$ is also called a *scenario suite* and can be seen as a toolbox to test $S$’s adaptation abilities with respect to a fixed adaptation domain $\mathcal{A}$. While formally abiding to Definition \[def:scenario\], scenarios can be encoded in various ways in practical software development, such as: #### Sets of data points of expected or observed behavior. Given a system $S' = X' \leadsto Y'$ whose behavior is desirable (for example a trained predecessor of our system or a watchdog component), we can create scenarios $(X', Y', g', f')$ with $g' = \top \iff S' \otimes E_i \models \gamma_i$ and $f' = \phi_i(S' \otimes E_i)$ for an arbitrary amount of elements $(E_i, \gamma_i, \phi_i)$ of an adaptation domain $\mathcal{A} = \{(E_1, \gamma_1, \phi_1), ..., (E_n, \gamma_n, \phi_n)\}$. #### Test cases the system mastered. In some cases, adaptive systems may produce innovative behavior before we actively seek it out. In these cases, it is helpful to formalize the produced results once they have been found so that we can ensure that the system’s gained abilities are not lost during further development or adaptation. Formally, this case matches the case for “observed behavior” described above.
However, here the test case $(X, Y, g, f)$ already existed as a scenario, so we just need to update $g$ and $f$ (with the new and better values) and possibly $Y$ (if we want to fix the observed behavior). #### Logical formulae and constraints. Commonly, constraints can be directly expressed in the adaptation domain. Suppose we build a system against an adaptation domain $\mathcal{A} = \{(E_1, \gamma_1, \phi_1), ..., (E_n, \gamma_n, \phi_n)\}$. We can impose a hard constraint $\zeta$ on the system in this domain by constructing a constrained adaptation domain $\mathcal{A'} = \{(E_1, \gamma_1 \land \zeta, \phi_1), ..., (E_n, \gamma_n \land \zeta, \phi_n)\}$ given that the logic of $\gamma_1, ..., \gamma_n, \zeta$ meaningfully supports an operation like the logical “and” $\land$. Likewise a soft constraint $\psi$ can be imposed via $\mathcal{A'} = \{(E_1, \gamma_1, \max(\phi_1, \psi)), ..., \allowbreak(E_n, \gamma_n, \max(\phi_n, \psi))\}$ given the definition of the operator $\max$ that trivially follows from using the relation $\preceq$ on fitness values. Scenarios $(X', Y', g', f')$ can then be generated against the new adaptation domain $\mathcal{A'}$ by taking pre-existing scenarios $(X, Y, g, f)$ and setting $X' = X, Y' = Y, g' = \top, f' = \psi((X \leadsto Y) \otimes E)$. #### Requirements and use case descriptions (including the system’s degree of fulfilling them). If properly formalized, a requirement or use case description contains all the information necessary to construct an adaptation domain and can thus be treated as the logical formulae in the paragraph above. However, in practical development, use cases are more prone to be incomplete views on the adaptation domain. We thus may want to stress the point that we do not need to update all elements of an adaptation domain when applying a constraint, i.e., when including a use case. We can also just add the additional hard constraint $\zeta$ or soft constraint $\psi$ to some elements of $\mathcal{A}$.
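The two constraint constructions above can be sketched as follows. The encoding is an assumption of this sketch: adaptation domains are lists of triples `(E, gamma, phi)`, goals are boolean predicates over a combined run, and Python’s natural order on numbers stands in for the relation $\preceq$ on fitness values.

```python
def add_hard_constraint(domain, zeta):
    """Constrain an adaptation domain by conjoining zeta onto every
    goal gamma_i -- the text only requires that the goal logic
    support a logical 'and', which 'and' provides here."""
    def conjoin(gamma):
        return lambda run: gamma(run) and zeta(run)
    return [(E, conjoin(gamma), phi) for E, gamma, phi in domain]

def add_soft_constraint(domain, psi):
    """Impose a soft constraint by replacing each fitness function
    phi_i with max(phi_i, psi)."""
    def raised(phi):
        return lambda run: max(phi(run), psi(run))
    return [(E, gamma, raised(phi)) for E, gamma, phi in domain]
```

Note that both helpers leave the environments $E_i$ untouched, matching the construction of $\mathcal{A'}$ in the text; applying a constraint to only *some* elements of $\mathcal{A}$, as suggested for use cases, amounts to calling them on a sublist.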
#### Predictive models of system properties. For the most general case, assume that we have a prediction function $p$ so that $p(X) \approx Y$, i.e., the function can roughly return the behavior $S = X \leadsto Y$ will or should show given $X$. We can thus construct the predicted system $S' = X \leadsto p(X)$ and construct a scenario $(X, p(X), g, f)$ with $g = \top \iff S' \otimes E \models \gamma$ and $f = \phi(S' \otimes E)$. #### All of these types of artifacts will be subsumed under the notion of [[scenarios]{}]{}. We can use them to further train and improve the system and to estimate its likely behavior as well as to perform tests (and ultimately verification and validation activities). *[[Scenario]{}]{} coevolution* describes the process of developing a set of scenarios to test a system during the system-under-test’s development. Consequently, it needs to be designed and controlled as carefully as the evolution of system behavior [@arcuri2007coevolving; @fraser2013whole]. Let $c_1 = (X_1, Y_1, g_1, f_1)$ and $c_2 = (X_2, Y_2,\allowbreak g_2, f_2)$ be scenarios for a system $S$ and an adaptation domain $\mathcal{A}$. Scenario $c_2$ is *at least as hard* as $c_1$, written $c_1 \leq c_2$, iff $g_1 = \top \implies g_2 = \top$ and $f_1 \leq f_2$. Let $C = \{c_1, ..., c_m\}$ and $C' = \{c_1', ..., c_n'\}$ be sets of scenarios, also called scenario suites. Scenario suite $C'$ is *at least as hard* as $C$, written $C \sqsubseteq C'$, iff for all scenarios $c \in C$ there exists a scenario $c'\in C'$ so that $c \leq c'$. \[def:scenario-sequence\] Let $\mathcal{S} = (S_i)_{i\in I}, I = \{1, ..., n\}$ be an adaptation sequence for a singleton adaptation space $\mathfrak{A} = \{\mathcal{A}\}$. A series of sets $\mathcal{C} = (C_i)_{i \in I}$ is called a scenario sequence iff for all $i \in I, i < n$ it holds that $C_i$ is a scenario suite for $S_i$ and $\mathcal{A}$ and $C_i \sqsubseteq C_{i+1}$.
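The hardness relations $c_1 \leq c_2$ and $C \sqsubseteq C'$, and the chain condition defining a scenario sequence, translate directly into code. Scenarios are encoded here as plain `(X, Y, g, f)` tuples with `g` a boolean, an assumed encoding consistent with the sketch above; the adaptation-sequence side of Definition \[def:scenario-sequence\] is not modeled.

```python
def at_least_as_hard(c1, c2):
    """c1 <= c2 iff (g1 = top implies g2 = top) and f1 <= f2."""
    _, _, g1, f1 = c1
    _, _, g2, f2 = c2
    return ((not g1) or g2) and f1 <= f2

def suite_at_least_as_hard(C, C_prime):
    """C is dominated by C' iff every c in C has some c' in C'
    with c <= c'."""
    return all(any(at_least_as_hard(c, cp) for cp in C_prime) for c in C)

def is_scenario_sequence(suites):
    """Check the chain condition C_i subsumed-by C_{i+1} over a
    list of scenario suites."""
    return all(suite_at_least_as_hard(Ci, Cj)
               for Ci, Cj in zip(suites, suites[1:]))
```

This makes the monotonicity of a scenario sequence mechanically checkable: a suite in which some scenario loses its dominating counterpart fails `is_scenario_sequence`.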
We expect each phase of development to further alter the set of [[scenarios]{}]{} just as it does alter the system behavior. The [[scenarios]{}]{} produced and used at a certain phase in development must match the current state of progress. Valid [[scenarios]{}]{} from previous phases should be kept and checked against the further specialized system. When we do not delete any [[scenarios]{}]{} entirely, the continued addition of [[scenarios]{}]{} will ideally narrow down allowed system behavior to the desired possibilities. Eventually, we expect all activities of system test to be expressible as the generation or evaluation of scenarios. New scenarios may simply be thought up by system developers or be generated automatically. Finding the right [[scenarios]{}]{} to generate is another optimization problem to be solved during the development of any complex adaptive system. [[Scenario]{}]{} evolution represents a cross-cutting concern for all phases of system development. Treating [[scenarios]{}]{} as first-class citizens among the artifacts produced by system development thus yields changes in tasks throughout the whole process model. Applications of Scenario Coevolution {#sec:applications} ==================================== Having both introduced a formal framework for adaptation and the testing of adaptive systems using scenarios, we show in this section how these frameworks can be applied to aid the trustworthiness of complex adaptive systems for practical use. Criticality Focus ----------------- It is very important to start the scenario evolution process alongside the system evolution, so that at each stage there exists a set of scenarios available to test the system’s functionality and degree of progress (see Concept \[con:parallelism\]). This approach mimics the concept of agile development, where between each sprint there exists a fully functional (however incomplete) version of the system.
The concept of [[scenario]{}]{} evolution integrates seamlessly with agile process models. In the early phases of development, the common artifacts of requirements engineering, i.e., formalized requirements, serve as the basis for the scenario evolution process. As long as the adaptation space $\mathfrak{A}$ remains constant (and with it the system goals), system development should form an adaptation sequence. Consequently, scenario evolution should then form a scenario sequence for that adaptation sequence. This means (according to Definition \[def:scenario-sequence\]), the scenario suite is augmented with newly generated scenarios (for new system goals or just more specialized subgoals) or with scenarios with increased requirements on fitness.[^5] Ideally, the scenario evolution process should lead the learning components on the right path towards the desired solution. The ability to re-assign fitness priorities allows for an arms race between adaptive system and scenario suite (see Concept \[con:antagonism\]). #### Augmenting Requirements. Beyond requirements engineering, it is necessary to include knowledge that will be generated during training and learning by the adaptive components. Mainly, recognized scenarios that work well with early versions of the adaptive system should be used as checks and tests when the system becomes more complex. This approach imitates the optimization technique of importance sampling on a systems engineering level. There are two central issues that need to be answered in this early phase of the development process: - Behavior Observation: How can system behavior be generated in a realistic manner? Are the formal specifications powerful enough? Can we employ human-labeled experience? - Behavior Assessment: How can the quality of observed behavior be adequately assessed? Can we define a model for the users’ intent? Can we employ human-labeled review? #### Breaking Down Requirements.
A central task of successful requirements engineering is to split up the use cases into atomic units that ideally describe singular features. In the dynamic world, we want to leave more room for adaptive system behavior. Thus, the requirements we formulate tend to be more general in notion. It is thus even more important to split them up in meaningful ways in order to derive new sets of scenarios. The following design axes (without any claim to completeness) may be found useful to break down requirements of adaptive systems: - Scope and Locality: Can the goal be applied/checked locally or does it involve multiple components? Which components fall into the scope of the goal? Is emergent system behavior desirable or considered harmful? - Decomposition and Smoothness: Can internal (possibly more specific) requirements be developed? Can the overall goal be composed from a clear set of subgoals? Can the goal function be smoothened, for example by providing intermediate goals? Can subgoal decomposition change dynamically via adaptation or is it structurally static? - Uncertainty and Interaction: Are all goals given with full certainty? Is it possible to reason about the relative importance of goal fulfillment for specific goals a priori? Which dynamic goals have an interface with human users or other systems? Adaptation Cooldown ------------------- We call the problem domain available to us during system design the *off-site domain*. It contains all [[scenarios]{}]{} we think the system might end up in and may thus even contain contradicting [[scenarios]{}]{}. In all but the rarest cases, the situations one single instance of our system will face in its operating time will be just a fraction of the size of the covered areas of the off-site domain. Nonetheless, it is also common for the system’s real-world experience to include [[scenarios]{}]{} not occurring in the off-site domain at all; this mainly happens when we were wrong about some detail in the real world.
Thus, the implementation of an adaptation technique faces a problem not unlike the *exploration/exploitation dilemma* [@vcrepinvsek2013exploration], but on a larger scale: We need to decide whether we opt for a system fully adapted to the exact off-site domain or for a less specialized system that leaves more room for later adaptation at the customer’s site. The point at which we stop adaptation happening on off-site [[scenarios]{}]{} is called the off-site adaptation border and is a key artifact of the development process for adaptive systems. In many cases, we may want the system we build to be able to evolve beyond the exact use cases we knew about during design time. The system thus needs to have components capable of *run-time* or *online adaptation*. In the wording of this work, we also talk about *on-site adaptation*, stressing that in this case we focus on adaptation processes that take place at the customer’s location in a comparatively specific domain instead of the broader setting in a system development lab. Usually, we expect the training and optimization performed on-site (if any) to be not as drastic as training done during development. (Otherwise, we would probably have not specified our problem domain in an appropriate way.) As the system becomes more efficient in its behavior, we want to gradually reduce the amount of change we allow. In the long run, adaptation should usually work at a level that prohibits sudden, unexpected changes but still manages to handle any changes in the environment within a certain margin. The recognized need for more drastic change should usually trigger human supervision first. \[def:adaptation-sequence-spaces\] Let $S$ be a system.
A series of $|I|$ adaptation spaces $\mathbb{A} = (\mathfrak{A}_i)_{i\in I}$ with index set $I$ equipped with a preorder $\leq$ on the elements of $I$ is called an *adaptation domain sequence* iff for all $i, j \in I, i \leq j$ it holds that: $S$ adapts to $\mathfrak{A}_j$ implies that $S$ adapts to $\mathfrak{A}_i$. System development constructs an adaptation space sequence (c.f. Concept \[con:general\]), i.e., a sequence of increasingly specific adaptation domains. Each of those can be used to run an adaptation sequence (c.f. Definition \[def:adaptation-sequence\]) and a scenario sequence (c.f. Definition \[def:scenario-sequence\], Concept \[con:antagonism\]) to test it. For the gradual reduction of the allowed amount of adaptation for the system, we use the metaphor of a “cool-down” process: The adaptation performed on-site should allow for less change than off-site adaptation. And the adaptation allowed during run-time should be less than what we allowed during deployment. This ensures that decisions that have once been deemed right by the developers are hard to change later by accident or by the autonomous adaptation process. Eternal Deployment ------------------ For high trustworthiness, development of the test cases used for the final system test should be as decoupled from the on-going scenario evolution as possible, i.e., the data used in both processes should overlap as little as possible. Of course, following this guideline completely results in the duplication of a lot of processes and artifacts. Still, it is important to accurately keep track of the influences on the respective sets of [[scenarios]{}]{}. A clear definition of the off-site adaptation border provides a starting point for when to branch off a [[scenario]{}]{} evolution process that is independent of possible [[scenario]{}]{}-specific adaptations on the system-under-test’s side. Running multiple independent system tests (cf.
ensemble methods [@dietterich2000ensemble; @hart2017constructing]) is advisable as well. However, the space of available independently generated data is usually very limited. For the deployment phase, it is thus of key importance to carry over as much information as possible about the genesis of the system we deploy into the run-time, where it can be used to look up the traces of observed decisions. The reason to do this now is that we usually expect the responsibility for the system to change at this point: Whereas previously, any system behavior was overseen by the developers who could potentially backtrack any phenomenon to all previous steps in the system development process, now we expect on-site maintenance to be able to handle any potential problem with the system in the real world, requiring more intricate preparation for maintenance tasks (c.f. Concept \[con:automated\]). We thus need to endow these new people with the ability to properly understand what the system does and why. Our approach follows the vision of *eternal system design* [@nierstrasz2008change], which is a fundamental change in the way to treat deployment: We no longer ship a single artifact as the result of a complex development process, but we ship an image of the process itself (cf. Concept \[con:general\]). As a natural consequence, we can only ever add to an eternal system but hardly ever remove changes, or any trace of them, entirely. Using an adequate combination operator, this meta-design pattern is already implemented in the way we construct adaptation sequences (c.f. Definition \[def:adaptation-sequence\]): For example, given a system $S_i$ we could construct $S_{i+1} = X \stackrel{Z}{\leadsto} Y$ in such a way that $S_i$ is included in $S_{i+1}$’s internal state $Z$. As of now, however, the design of eternal systems still raises many unanswered questions in system design.
We thus resort to the notion of [[scenarios]{}]{} only as a sufficient system description to provide explanatory power at run-time and recommend applying standard “destructive updates” to all other system artifacts. Conclusion {#sec:conclusion} ========== We have introduced a new formal model for adaptation and test processes using our notion of scenarios. We connected this model to concrete challenges and arising concepts in software engineering to show that our approach of scenario coevolution is fit to tackle a first few of the problems that arise when doing quality assurance for complex adaptive systems. As already noted throughout the text, a few challenges still persist. Perhaps most importantly, we require an adequate data structure both for the coding of systems and for the encoding of test suites and need to prove the practical feasibility of an optimization process governing the software development life-cycle. For performance reasons, we expect that some restrictions on the general formal framework will be necessary. In this work, we also deliberately left out the issue of meta-processes: The software development life-cycle can itself be regarded as a system according to Definition \[def:system\]. While this may complicate things at first, we also see potential in not only developing a process of establishing quality and trustworthiness but also a generator for such processes (akin to Concept \[con:general\]). Systems with a high degree of adaptivity and, among those, systems employing techniques of artificial intelligence and machine learning will become ubiquitous. If we want to trust them as we trust engineered systems today, the methods of quality assurance need to rise to the challenge: Quality assurance needs to adapt to adaptive systems! [^1]: In [@holzl2011towards], there is a stricter definition of how the combination operator needs to handle the designated inputs and outputs of its given systems. Here, we opt for a more general definition.
[^2]: Strictly speaking, an optimization *process* would further assume that there exists an optimization relation $o$ from systems to systems so that for all $i, j \in I$ it holds that $i \leq j \Longrightarrow o(S_i, S_j)$. But for simplicity, we consider the sequence of outputs of the optimization process a sufficient representation of the whole process. [^3]: Constructing a sequence $S_i := S_{i-1} \otimes S_{i-1}$ might be a viable formulation as well, but it is not further explored in this work. [^4]: If we are only interested in the system’s performance and not *how* it was achieved, we can redefine a scenario to leave out $Y$. [^5]: Note that every change in $\mathfrak{A}$ starts new sequences.
--- abstract: | FPGAs have found increasing adoption in data center applications since a new generation of high-level tools has become available which noticeably reduce development time for FPGA accelerators and still provide high-quality results. There is, however, no high-level benchmark suite available that specifically enables a comparison of FPGA architectures, programming tools, and libraries for HPC applications. To fill this gap, we have developed an OpenCL-based open-source implementation of the HPCC benchmark suite for Xilinx and Intel FPGAs. This benchmark can serve to analyze the current capabilities of FPGA devices, cards, and development tool flows, track progress over time, and point out specific difficulties for FPGA acceleration in the HPC domain. Additionally, the benchmark documents proven performance optimization patterns. We will continue optimizing and porting the benchmark for new generations of FPGAs and design tools and encourage active participation to create a valuable tool for the community. author: - bibliography: - 'bibliography/meyer20\_sc.bib' - 'bibliography/IEEEabrv.bib' title: Evaluating FPGA Accelerator Performance with a Parameterized OpenCL Adaptation of the HPCChallenge Benchmark Suite --- FPGA, OpenCL, High-Level Synthesis, HPC benchmarking Introduction ============ In HPC, benchmarks are an important tool for performance comparison across systems. They are designed to stress important system properties or generate workloads that are similar to relevant applications for the user. Especially in acquisition planning, they can be used to define the desired performance of the acquired system before it is built. Since it is a challenging task to select a set of benchmarks to cover all relevant device properties, benchmark suites can help by providing a pre-defined mix of applications and inputs, for example SPEC CPU [@SPEC-CPU] and HPCC [@HPCCIntroduction].
There is an ongoing trend towards heterogeneity in HPC, complementing CPUs by accelerators, as indicated by the Top 500 list [@top500]. From the top 10 systems in the list, seven are equipped with different types of accelerators. Nevertheless, to get the best matching accelerator for a new system, a tool is needed to measure and compare the performance across accelerators. For well-established accelerator architectures like GPUs, there are already standardized benchmarks like SPEC ACCEL [@SPECACCEL]. For FPGAs, which are just emerging as an accelerator architecture for data centers and HPC, existing benchmarks do not focus on FPGAs and fail to measure highly relevant device properties. Similar to the compiler for CPU applications, the high-level synthesis framework takes a very important role in achieving performance on an FPGA. The framework translates the accelerator code (denoted as *kernel*), most commonly from OpenCL, to intermediate languages, organizes the communication with the underlying hardware, performs optimizations and synthesizes the code to create executable configurations (bitstreams). Hence, the framework has a big impact on the used resources and the maximum kernel frequency, which might vary depending on the kernel design. A benchmark suite for FPGAs should capture this impact and, for comparisons, must not be limited to a single framework. One of the core aspects of HPC is communication. Some FPGA cards offer a new approach for scaling with their support for direct communication to other cards without involving the host CPU. Such technology is already used in first applications [@MLNetwork; @Sano-Multi-FPGA-Stencil] and research has started to explore the best abstractions and programming models for inter-FPGA communication [@SMI; @FPGAEthernet]. Thus, communication between FPGAs out of an OpenCL framework is another essential characteristic that a benchmark suite targeting HPC should consider. In this paper, we propose *HPCC FPGA*, an OpenCL benchmark suite for FPGAs using the applications of the HPCC benchmark suite.
The motivation for choosing HPCC is that it is well-established for CPUs and covers a small set of applications that evaluate important memory access and computing patterns that are frequently used in HPC applications. Further, the benchmark also characterizes the HPC system’s network bandwidth, allowing one to extrapolate to the performance of parallel applications. Specifically, we make the following contributions in this paper: 1. We provide FPGA-adapted kernel implementations along with corresponding host code for setup and measurements for all benchmark applications. 2. We provide configuration options for the kernels that allow adjustments to the resources and architecture of the target FPGA and board without the need to change the code manually. 3. We evaluate the execution of these benchmarks on different FPGA families and boards with Intel and Xilinx FPGAs and show that the benchmarks can capture relevant device properties. 4. We make all benchmarks and the build system available as open-source on GitHub to encourage community contributions. The remainder of this paper is organized as follows: In Section \[sec:related-work\], we give an overview of existing benchmark suites. In Section \[sec:hpcc-benchmark\], we introduce the benchmarks in more detail and briefly discuss the contained benchmarks and the configuration options provided for the base runs. In Section \[sec:evaluation\], we build the benchmarks for different architectures and evaluate the results to show the potential of the proposed configurable base runs. In Section \[sec:discussion\], we evaluate the global memory system of the boards in more detail and give insights into experienced problems and the potential of the benchmarks to describe the performance of FPGA boards and the associated frameworks. Finally, in Section \[sec:conclusion\], we draw conclusions and outline future work. Related Work {#sec:related-work} ============ There already exist several benchmark suites for FPGAs and their frameworks.
Most benchmark suites like Rodinia [@Rodinia], OpenDwarfs [@OpenDwarfs-first] or SHOC [@SHOC] are originally designed with GPUs in mind. Although both GPUs and FPGAs can be programmed using OpenCL, the design of the compute kernels has to be changed and optimized specifically for FPGAs to achieve good performance. In the case of Rodinia this was done [@RodiniaFPGA] for a subset of the benchmark suite with a focus on different optimization patterns for the Intel FPGA (then Altera) SDK for OpenCL. In contrast, to port OpenDwarfs to FPGAs, Feng et al. [@OpenDwarfs-first] employed a research OpenCL synthesis tool that instantiates GPU-like architectures on FPGAs. With Rosetta [@Rosetta], there also exists a benchmark suite that was designed targeting FPGAs using the Xilinx HLS tools from the start. It focuses on typical FPGA streaming applications from the video processing and machine learning domains. The CHO [@CHO] benchmark targets more fundamental FPGA functionality and includes kernels from media processing and cryptography and the low-level generation of floating-point arithmetic through OpenCL, using the Altera SDK for OpenCL. The mentioned benchmarks often lack possibilities to easily adjust the benchmarks to the target architecture. Modifications have to be done manually in the kernel code, sometimes many different kernel variants are proposed, or the kernels are not optimized at all, making it difficult to compare results for different FPGAs. A benchmark suite that takes a different approach is Spector [@Spector]. It makes use of several optimization parameters for every benchmark, which allows modification and optimization of the kernels for a target FPGA architecture. The kernel code does not have to be manually changed, and optimization options are restricted by the defined parameters. Nevertheless, the focus is more on the research of the design space than on performance characterization.
To our best knowledge, there exists no benchmark suite for FPGAs with a focus on HPC characteristics at the time of writing. All of the mentioned benchmark suites lack a way to measure the inter-FPGA communication capability of recent high-end FPGAs. In some of the benchmarks, the investigated input sizes are small enough to fit into the local memory resources of a single FPGA. Since actual HPC applications are highly parallel and require effective communication, an HPC-focused benchmark must also evaluate the characteristics of the communication network. HPC Challenge Benchmarks for FPGA {#sec:hpcc-benchmark} ================================= Benchmark Execution and Evaluation {#sec:evaluation} ================================== Further Findings and Investigations {#sec:discussion} =================================== Conclusion {#sec:conclusion} ========== In this paper, we proposed *HPCC FPGA*, a novel benchmark suite for FPGAs. To this end, we provide configurable base implementations and host codes for all benchmarks of the well-established HPCC benchmark suite. We showed that the configuration options allow the generation of efficient benchmark kernels for Xilinx and Intel FPGAs using the same source code without manual modification. We executed the benchmarks on up to three FPGAs with four different memory setups and compared the results with simple performance models. Most benchmarks showed a high performance efficiency when compared to the models. Nevertheless, the evaluation showed that the base implementations are often unable to fully utilize the available resources on an FPGA board. Hence, it is important to discuss the base implementations and configuration options with the community to create a valuable and widely accepted performance characterization tool for FPGAs. We made the code open-source and publicly available to simplify and encourage contributions to future versions of the benchmark suite.
Acknowledgements {#acknowledgements .unnumbered} ================ The authors gratefully acknowledge the support of this project by computing time provided by the Paderborn Center for Parallel Computing (PC2). We also thank Xilinx for the donation of an Alveo U280 card, Intel for providing a PAC D5005 loaner board and access to the reference design BSP with SVM support, and the Systems Group at ETH Zurich as well as the Xilinx Adaptive Compute Clusters (XACC) program for access to their Xilinx FPGA evaluation system.
--- abstract: | This paper is dedicated to the study of the interaction between dynamical systems and percolation models, with views towards the study of viral infections whose viruses mutate with time. Recall that $r$-bootstrap percolation describes a deterministic process where vertices of a graph are infected once $r$ of their neighbors are infected. We generalize this by introducing [*$F(t)$-bootstrap percolation*]{}, a time-dependent process where the number of neighbouring vertices which need to be infected for a disease to be transmitted is determined by a percolation function $F(t)$ at each time $t$. After studying some of the basic properties of the model, we consider smallest percolating sets and construct a polynomial-time algorithm to find one smallest minimal percolating set on finite trees for certain $F(t)$-bootstrap percolation models.\ author: - 'Yuyuan Luo$^{a}$ and Laura P. Schaposnik$^{b,c}$' bibliography: - 'Schaposnik\_Percolation.bib' title: Minimal percolating sets for mutating infectious diseases --- Introduction ============ The study of infectious diseases through mathematical models dates back to 1766, when Bernoulli developed a model to examine the mortality due to smallpox in England [@modeling]. Moreover, the germ theory that describes the spreading of infectious diseases was first established in 1840 by Henle and was further developed in the late 19th and early 20th centuries. This laid the groundwork for mathematical models as it explained the way that infectious diseases spread, which led to the rise of compartmental models. These models divide populations into compartments, where individuals in each compartment have the same characteristics; Ross first established one such model in 1911 in [@ross] to study malaria, and later on, basic compartmental models to study infectious diseases were established in a sequence of three papers by Kermack and McKendrick [@kermack1927contribution] (see also [@epidemiology] and references therein).
In these notes we are interested in the interaction between dynamical systems and percolation models, with views towards the study of infections which mutate with time. The use of stochastic models to study infectious diseases dates back to 1978 in the work of J.A.J. Metz [@epidemiology]. There are many ways to mathematically model infections, including statistical-based models such as regression models (e.g. [@imai2015time]), cumulative sum charts (e.g. [@chowell2018spatial]), hidden Markov models (e.g. [@watkins2009disease]), and spatial models (e.g. [@chowell2018spatial]), as well as mechanistic state-space models such as continuum models with differential equations (e.g. [@greenhalgh2015disease]), stochastic models (e.g. [@pipatsart2017stochastic]), complex network models (e.g. [@ahmad2018analyzing]), and agent-based simulations (e.g. [@hunter2019correction] – see also [@modeling] and references therein). Difficulties when modeling infections include incorporating the dynamics of behavior in models, as it may be difficult to assess the extent to which behaviors should be modeled explicitly, quantify changes in reporting behavior, as well as identify the role of movement and travel [@challenges]. When using data from multiple sources, difficulties may arise when determining how the evidence should be weighted and when handling dependence between datasets [@challenges2]. In what follows we shall introduce a novel type of dynamical percolation which we call [*$F(t)$-bootstrap percolation*]{}, obtained through a generalization of classical bootstrap percolation. This approach allows one to model mutating infections, and thus we dedicate this paper to the study of some of its main features. After recalling classical $r$-bootstrap percolation in Section \[intro\], we introduce a percolating function $F(t)$ through which we add a dynamical aspect to the percolation model, as described in Definition \[fperco\].
[**Definition.**]{} Given a function $F(t): \mathbb{N}\rightarrow \mathbb{N}$, we define an [*$F(t)$-bootstrap percolation model*]{} on a graph $G$ with vertices $V$ and initially infected set $A_0$ as the process which at time $t+1$ has infected set given by $$\begin{aligned} A_{t+1} = A_{t} \cup \{v \in V : |N(v) \cap A_t| \geq F(t)\}, \end{aligned}$$ where $N(v)$ denotes the set of neighbouring vertices to $v$, and we let $A_\infty$ be the final set of infected vertices once the percolation process has finished. In Section \[time\] we study some basic properties of this model, describe certain (recurrent) functions which ensure that the model percolates, and study the critical probability $p_c$. Since our motivation comes partially from the study of effective vaccination programs which would allow one to contain an epidemic, we are interested both in the percolating time of the model and in minimal percolating sets. We study the former in Section \[time2\], where by considering equivalent functions to $F(t)$ we obtain bounds on the percolating time in Proposition \[propo8\]. Finally, in Section \[minimal\] and Section \[minimal2\] we introduce and study smallest minimal percolating sets for $F(t)$-bootstrap percolation on (non-regular) trees. This leads to one of our main results in Theorem \[teo1\], where we describe an algorithm for finding smallest minimal percolating sets. Lastly, we conclude the paper with a comparison in Section \[final\] of our model and algorithm to the model and algorithm considered in [@percset] for classical bootstrap percolation, and analyse the effect of taking different functions within our dynamical percolation.
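The update rule in this definition can be sketched in a few lines of Python; the adjacency-dictionary representation and the function names are our own choices, not notation from the paper:

```python
def percolate(neighbors, A0, F, t_max=100):
    """F(t)-bootstrap percolation: a vertex joins the infected set at
    time t+1 once at least F(t) of its neighbours are infected."""
    infected = set(A0)
    for t in range(t_max):
        newly = {v for v in neighbors
                 if v not in infected
                 and len(neighbors[v] & infected) >= F(t)}
        if not newly:
            break   # early exit is only sound when F is non-decreasing
        infected |= newly
    return infected

# a star with centre 0: the centre gets two infected neighbours,
# but each leaf has a single neighbour and can never reach F(t) = 2
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(percolate(star, {1, 2}, lambda t: 2))
```

For the constant function $F(t)=2$ this reproduces classical $2$-bootstrap percolation.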
Background: bootstrap percolation and SIR models {#intro} ================================================ Bootstrap percolation was introduced in 1979 in the context of solid state physics in order to analyze diluted magnetic systems in which strong competition exists between exchange and crystal-field interactions [@density]. It has seen applications in the study of fluid flow in porous media, the orientational ordering process of magnetic alloys, as well as the failure of units in a structured collection of computer memory [@applications]. Bootstrap percolation has long been studied mathematically on finite and infinite rooted trees, including Galton-Watson trees (e.g. see [@MR3164766]). Bootstrap percolation better simulates the effects of individual behavior and the spatial aspects of epidemic spreading, and better accounts for the effects of mixing patterns of individuals. Hence, communicable diseases in which these factors have significant effects are better understood when analyzed with cellular automata models such as bootstrap percolation [@automata], which is defined as follows. For $n\in \mathbb{Z}^+$, we define an [*$n$-bootstrap percolation model*]{} on a graph $G$ with vertices $V$ and initially infected set $A_0$ as the process which at time $t+1$ has infected set given by $$\begin{aligned} A_{t+1} = A_{t} \cup \{v \in V : |N(v) \cap A_t| \geq n\}. \end{aligned}$$ Here, as before, we denote by $N(v)$ the set of neighbouring vertices to $v$. In contrast, an [*SIR model*]{} relates at each time $t$ the number of susceptible individuals $S(t)$ with the number of infected individuals $I(t)$ and the number of recovered individuals $R(t)$, by a system of differential equations – an example of an SIR model used to simulate the spread of the dengue fever disease appears in [@dengue]. SIR models are very useful for simulating infectious diseases; however, compared to bootstrap percolation, they do not account for individual behaviors and characteristics.
In these models, a fixed parameter $\beta$ denotes the average number of transmissions from an infected node in a time period. In what follows we shall present a dynamical generalization of the above model, for which it will be useful to have an example to establish the comparisons. ![Depiction of $2$-bootstrap percolation, where shaded vertices indicate infected nodes. []{data-label="first"}](Fig1.png) Consider the (irregular) tree with three infected nodes at time $t=0$, given by $A_0=\{2,4,5\}$ as shown in Figure \[first\]. Then, through $2$-bootstrap percolation, at time $t=1$ node $3$ becomes infected because its neighbors $4$ and $5$ are infected at time $t=0$. At time $t=2$, node $1$ becomes infected since its neighbors $2$ and $3$ are infected at time $t=1$. Finally, note that nodes $6,7,8$ cannot become infected because they each have only $1$ neighbor, yet two or more infected neighbors are required to become infected. Time-dependent Percolation {#time} =========================== The motivation for time-dependent percolation models comes from the fact that the rate of spread of diseases may change over time. In the SIR models mentioned before, since $\beta$ is the average number of transmissions from an infected node in a time period, $1/\beta$ is the time it takes to infect a node. If we “divide the work” among several neighbors, then $1/\beta$ is also the number of infected neighbors needed to infect the current node. Consider now an infection which evolves with time. That is, instead of fixing the same number of neighbours as in $r$-bootstrap percolation, consider a percolation model where the number of neighbours required to be infected for the disease to propagate changes with time, following the behaviour of a function $F(t)$, which can be set in terms of a one-parameter family $\beta(t)$ to be $F(t) := \ceil[bigg]{\frac{1}{\beta(t)}}$.
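This example is small enough to replay in code. The attachment of the degree-one leaves $6,7,8$ below (to nodes $2,4,5$ respectively) is our guess from the figure, but the outcome is the same for any attachment that leaves them with a single neighbour:

```python
# 2-bootstrap percolation on (a reconstruction of) the tree of Figure 1
tree = {1: {2, 3}, 2: {1, 6}, 3: {1, 4, 5},
        4: {3, 7}, 5: {3, 8}, 6: {2}, 7: {4}, 8: {5}}
infected = {2, 4, 5}                      # A_0
while True:
    newly = {v for v in tree
             if v not in infected
             and len(tree[v] & infected) >= 2}
    if not newly:
        break
    infected |= newly
# node 3 becomes infected at t=1, node 1 at t=2; 6, 7, 8 never do
print(sorted(infected))
```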
We shall say that a function is a [*percolation function*]{} if it is a function $F: I \rightarrow \mathbb{Z}^+$, where $I$ is an initial segment of $\mathbb{N}$, that we use in a time-dependent percolation process, and which specifies the number of neighbors required to percolate to a node at time $t$. \[fperco\]Given a function $F(t): \mathbb{N}\rightarrow \mathbb{N}$, we define an [*$F(t)$-bootstrap percolation model*]{} on a graph $G$ with vertices $V$ and initially infected set $A_0$ as the process which at time $t+1$ has infected set given by $$\begin{aligned} A_{t+1} = A_{t} \cup \{v \in V : |N(v) \cap A_t| \geq F(t)\}. \end{aligned}$$ Here, as before, we denote by $N(v)$ the set of neighbouring vertices to $v$, and we let $A_\infty$ be the final set of infected vertices once the percolation process has finished. One should note that $r$-bootstrap percolation can be recovered from $F(t)$-bootstrap percolation by setting the percolation function to be the constant $F(t) = r$. It should be noted that, unless otherwise stated, the initial set $A_0$ is chosen in the same way as in $r$-bootstrap percolation: by randomly selecting a set of initially infected vertices with probability $p$, for some fixed value of $p$ which is called the [*probability of infection*]{}. If there are multiple percolation functions and initially infected sets in question, we may use the notation $A^{F }_{t}$ to denote the set of infected nodes at time $t$ percolating under the function $F(t)$ with $A_0$ as the initially infected set. In particular, this would be the case when implementing the above dynamical model for a multi-type bootstrap percolation such as the one introduced in [@gossip]. In order to understand some basic properties of $F(t)$-bootstrap percolation, we shall first focus on a single update function $F(t)$, and consider the critical probability $p_c$ of infection for which the probability of percolation is $\frac{1}{2}$.
\[propo1\] If $F(t)$ equals its minimum for infinitely many times $t$, then the critical probability of infection $p_c$ for which the probability of percolation is 1/2 is given by the value of the critical probability in $m$-bootstrap percolation, for $m:=\min_t F(t)$. When considering classical bootstrap percolation, note that the resulting set $A_\infty^r$ of $r$-bootstrap percolation is always contained in the resulting set $A_\infty^n$ of $n$-bootstrap percolation provided $n\leq r$. Hence, setting the value $m:=\min_t F(t)$, the resulting set $A_\infty^F$ of $F(t)$-bootstrap percolation will be contained in $A_\infty^m$. Moreover, since any vertex that enters the infected set of $m$-bootstrap percolation also enters the infected set of $F(t)$-bootstrap percolation by the next time for which $F(t)=m$, and since there are infinitely many times $t$ such that $F(t)=m$, we know that the final resulting set $A_\infty^m$ of $m$-bootstrap percolation is contained in the final resulting set $A_\infty^F$ of $F(t)$-bootstrap percolation. Then the resulting sets of $m$-bootstrap percolation and $F(t)$-bootstrap percolation must be identical, and hence the critical probability for $F(t)$-bootstrap percolation is that of $m$-bootstrap percolation. As we shall see later, different choices of the one-parameter family $\beta(t)$ defining $F(t)$ will lead to very different dynamical models. A particular set-up arises from [@viral], which provides data on the time-dependent rate of a specific virus spread, and through which one finds that an interesting family of parameters appears by setting $$\beta(t) = \left(b_0-b_f\right)\cdot\left(1-k\right)^t+b_f,$$ where $b_0$ is the initial rate of spread, $b_f$ is the final rate of spread, and $0<k<1$.
Then at time $t$, the number of infected neighbors it takes to infect a node is $$F(t):=\ceil[Bigg]{\frac{1}{\left(b_0-b_f\right)\cdot\left(1-k\right)^t+b_f}}.$$ In this case, since $\beta(t)$ tends to $b_f$, and $\frac{1}{\beta}$ tends to $\frac{1}{b_f}$, one can see that there will be infinitely many times $t$ such that $F(t) = \ceil[Bigg]{\frac{1}{b_f}}$. Hence, in this setting, from Proposition \[propo1\] the critical probability will be the same as that of $r$-bootstrap percolation with $r=\ceil[Bigg]{\frac{1}{b_f}}$. Percolation Time {#time2} ================ Informally, [*percolation time*]{} is the time it takes for the percolation process to terminate, with regard to a specific initially infected set of a graph. In terms of limits, recall that the final percolating set is defined as $$\begin{aligned} A_\infty:=\lim_{t\rightarrow \infty} A_t,\label{mas}\end{aligned}$$ and thus one may think of the percolation time as the smallest time $t$ for which $A_t=A_\infty$. By considering different initial probabilities of infection $p$, which determine the initially infected set $A_0$, and different percolation functions $F(t)$, one can see that the percolation time of a model can vary drastically. To illustrate this, in Figure \[second\] we have plotted the percentage of nodes infected with two different initial probabilities and four different percolation functions. The model was run $10^3$ times for each combination on random graphs with $10^2$ nodes and $300$ edges. ![ Percentage of nodes infected at time $t$ for $F(t)$-bootstrap percolation with initial probability $p$, on graphs with $100$ nodes and $300$ edges.[]{data-label="second"}](chart2.png) In the above settings of Figure \[second\], one can see that all the models stabilize by time $10$, implying that the percolation time is less than or equal to $10$. Generally, understanding the percolation time is useful in determining when the disease spreading has stabilized.
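The family $\beta(t)$ above is straightforward to tabulate. The numerical parameters below ($b_0=1$, $b_f=0.25$, $k=0.5$) are illustrative choices of ours, not values taken from [@viral]:

```python
import math

def F(t, b0=1.0, bf=0.25, k=0.5):
    """Threshold at time t for beta(t) = (b0 - bf)*(1 - k)**t + bf."""
    beta = (b0 - bf) * (1 - k) ** t + bf
    return math.ceil(1 / beta)

# beta(t) decays towards bf, so F(t) climbs and then stabilises at
# ceil(1/bf) = 4, a value attained for infinitely many t as in
# Proposition [propo1]
print([F(t) for t in range(8)])
```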
In what follows, we find a method to generate an upper bound on the percolation time given a specific graph and function. Formally, we define the [*percolation time*]{} $t_*$ as the minimum $$t_*:=\min_t \{~t~|~A_{t+1} = A_t~\}.$$ Expanding on the notation of \[mas\], we shall denote by $A_\infty^\gamma$ the set of nodes infected by percolating the set $A_0$ on the graph with percolation function $\gamma(t)$, and we shall simply write $A_\infty$ when the percolation function $\gamma(t)$ is clear from context or irrelevant. Moreover, we shall say that two percolation functions $F_1: I_1 \rightarrow \mathbb{Z}^+$ and $F_2: I_2 \rightarrow \mathbb{Z}^+$ are [*equivalent*]{} for the graph $G$ if for all initially infected sets $A_0$ one has that $$A^{F_1}_\infty=A^{F_2}_\infty.$$ This equivalence relation can be understood through the lemma below, which uses an additional function $\gamma(t)$ to relate two percolation functions $F$ and $F'$ when $F'$ can intuitively be “generated” by removing some values of $F$; the removal procedure is specified in the lemma. Given two subsets $I_1$ and $I_2$ of $\mathbb{N}$, we say that a function $\gamma: I_1 \rightarrow I_2 \cup \{-1\}$ is a [*nice function*]{} if it is surjective and - it is injective on $\gamma^{-1}(I_2)$; - it is increasing on $\gamma^{-1}(I_2)$; - it satisfies $\gamma(a) \leq a$ or $\gamma(a)=-1$. Given $I_1,I_2\subset \mathbb{N}$, let $F(t)$ be any percolation function with domain $I_1$, and define the percolation function $F'(t)$ with domain $I_2$ as $F'(t) := F(\gamma^{-1}(t))$ for $\gamma(t)$ a nice function. Then, for any fixed initially infected set $A_0$ and $t \in I_2$, one has that $$\begin{aligned} A^{F'}_{t} \subseteq A^{F}_{\gamma^{-1}(t)}.\label{mas11}\end{aligned}$$ We first show that $F'(t)$ is well-defined. Since the domain of $F'(t)$ is $I_2$, we have that $t\in I_2$ and thus $\gamma^{-1}(t)$ is a valid expression.
Moreover, $\gamma^{-1}(t)$ exists because $\gamma$ is surjective, and it is unique since $I_2$ is an initial segment of $\mathbb{N}$ and hence $t \neq -1$; furthermore, for any $a,b \in I_1$, if $\gamma(a) = \gamma(b) \neq -1$, then $a=b$. Since the domain of $\gamma$ is $I_1$, we have $\gamma^{-1}(t) \in I_1$. This means that $\gamma^{-1}(t)$ is in the domain of $F(t)$, and thus $F'(t)$ is defined for all $t\in I_2$. We shall now prove the result in the lemma by induction. Since $\gamma^{-1}(0)=0$ and the initially infected sets for the models with $F(t)$ and $F'(t)$ are the same, it must be true that $A^{F' }_{0} \subseteq A^{F }_{0}$, and in particular, $A^{F' }_{0} = A^{F }_{0} = A_0.$ In order to perform the inductive step, suppose that for some $t \in I_2$ with $t+1 \in I_2$, one has $A^{F' }_{t} \subseteq A^{F }_{\gamma^{-1}(t)}$. Moreover, suppose there is a node $n$ such that $n \in A^{F' }_{t+1}$ but $n \notin A^{F }_{\gamma^{-1}(t+1)}$. Then there must exist a neighbor $n'$ of $n$ such that $n' \in A^{F' }_{t}$ but $n' \notin A^{F }_{\gamma^{-1}(t+1)-1}$: otherwise, the sets of neighbors of $n$ infected prior to the specified times would be the same for both models, and since $F'(t+1) = F(\gamma^{-1}(t+1))$, the node $n$ would be infected in both models or in neither. From the above, since $t < t+1$ one must have $\gamma^{-1}(t) < \gamma^{-1}(t+1)$, and thus $$\gamma^{-1}(t) \leq \gamma^{-1}(t+1)-1.$$ Moreover, since $n' \notin A^{F }_{\gamma^{-1}(t+1)-1}$, then $n' \notin A^{F }_{\gamma^{-1}(t)}$. However, we assumed $n' \in A^{F' }_{t}$, which contradicts the inductive hypothesis $A^{F' }_{t} \subseteq A^{F }_{\gamma^{-1}(t)}$, so it must be true that the sets satisfy $A^{F' }_{t+1} \subseteq A^{F }_{\gamma^{-1}(t+1)}$. Thus we have proven that for any initially infected set $A_0$, the inclusion \[mas11\] is satisfied for all $t\in I_2$.
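On small graphs the notion of equivalent percolation functions can be checked exhaustively. The sketch below (the helper names are our own) compares the final infected sets of two functions over every possible seed set of a 4-vertex path:

```python
from itertools import combinations

def final_set(neighbors, A0, F, horizon=20):
    """Infected set after running F(t)-bootstrap percolation."""
    infected = set(A0)
    for t in range(horizon):
        infected |= {v for v in neighbors
                     if len(neighbors[v] & infected) >= F(t)}
    return infected

def equivalent(neighbors, F1, F2):
    """True iff A_inf^{F1} == A_inf^{F2} for every initial set A_0."""
    V = list(neighbors)
    return all(final_set(neighbors, A0, F1) == final_set(neighbors, A0, F2)
               for r in range(len(V) + 1)
               for A0 in combinations(V, r))

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
# dropping the useless value F1(0) = 5 (every degree is at most 2)
# only delays each infection step by one unit of time
F1 = lambda t: 5 if t == 0 else 2
F2 = lambda t: 2
print(equivalent(path, F1, F2))
```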
Through the above lemma we can further understand when an $F(t)$-percolation process finishes, in the following manner. Given a percolation function $F(t)$ and a fixed time $t \in \mathbb{N}$, let $t_p<t$ be such that $F(t_p) < F(t)$, and suppose there does not exist another time $t_i \in \mathbb{N}$ with $t_p < t_i <t$ such that $F(t_i) < F(t)$. Suppose further that we use this percolation function on a graph with $\ell$ vertices. Then, if $|\{t_i~|~F(t_i)=F(t)\}|>\ell$, no node becomes infected at time $t$. Suppose some node $n$ is infected at time $t$. We claim that this implies that all nodes are infected before time $t$. We show this by contradiction: suppose there exist $m$ nodes $n_i$ that are not infected by time $t$. Then we know that there exist at least $m$ times $t_j \in \mathbb{N}$ with $t_p < t_j < t$ for which $F(t_j) = F(t)$ and no node is infected at $t_j$. Match each $n_i$ with some such $t_j$, and let $t_k \in \mathbb{N}$ with $t_j < t_k \leq t$ be such that some node is infected at $t_k$ and $F(t_k) = F(t)$, chosen so that there is no $t_x \in \mathbb{N}$ with $t_j < t_x < t_k$ such that some node is infected at $t_x$ and $F(t_x) = F(t)$. We know such a $t_k$ exists because there is a node infected at time $t$. From the above, for each $n_i$ there are two cases: either the set of nodes infected by time $t_j$ is the same as the set of nodes infected by time $t_k$, or there exists a node $p$ infected by time $t_k$ but not infected by time $t_j$. The first case yields a contradiction: since the set of infected nodes would be the same at times $t_j$ and $t_k$, and $F(t_j)=F(t_k)$, a node would have to be infected at time $t_j$, which is impossible; so the first case cannot hold. Hence the second case must hold for all $m$ of the $n_i$’s. But then the second case implies that there is a node infected between each $t_j$ and the corresponding $t_k$.
This means that at least $m$ additional nodes are infected; adding these to the at least $\ell-m$ nodes infected at times $t_i$ such that $F(t_i) = F(t)$ and some node is infected at $t_i$, we have at least $\ell-m+m=\ell$ nodes infected before $t$. But if all $\ell$ nodes are infected before $t$, there are no nodes left to infect at time $t$, so $n$ does not exist. Intuitively, the above lemma tells us that, given a fixed time $t_0$ and some $t>t_0$, if $F(t)$ is the smallest value the function takes on after the time $t_0$, and $F$ has already taken on that value more than $\ell$ times, for $\ell$ the number of nodes in the graph, then no node will be infected at that time and the value is safe to be “removed”. The removal process is clarified in the next proposition, where we define an upper bound on the percolation time for a specified tree and function $F(t)$. \[propo8\]Let $G$ be a regular tree of degree $d$ with $\ell$ vertices. Given a percolation function $F(t)$, define the functions $F'(t)$ and $\gamma: \mathbb{N} \rightarrow \mathbb{N} \cup \{-1\}$ by setting: - $F'(0) := F(0)$, and $\gamma(0) := 0$. - Suppose the least time at which we have not yet considered $F$ is $a$, and let $b$ be the least time at which $F'(b)$ has not yet been defined. If $F(a)$ has not yet appeared $\ell$ times since the last time $t$ such that $F(t) < F(a)$, and $F(a) \leq d$, then set $F'(b) := F(a)$, and let $\gamma(a)=b$. Otherwise, let $\gamma(a)=-1$. The function $F'(t)$ is equivalent to $F(t)$. \[P1\] Intuitively, the function $\gamma$ constructed above maps the index associated to $F(t)$ to the index associated to $F'(t)$; if an index is omitted, then it is mapped to $-1$ by $\gamma$. To prove the proposition, we will prove that $A_\infty^{F} = A_\infty^{F'}$ for any initially infected set. Suppose we have a node $n$ in $A_\infty^{F}$, and that it is infected at time $t_0$.
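Under our reading of the construction ("appeared $\ell$ times since the last strictly smaller value"), the reduced function $F'$ and the map $\gamma$ can be computed as follows for a finite list of thresholds; this is a sketch, and in particular it applies the counting rule at time $0$ as well:

```python
def reduce_function(F, d, ell):
    """Build F' and gamma as in the proposition: drop a time a when
    F(a) > d or when F(a) has already occurred ell times since the
    last strictly smaller value; gamma[a] == -1 marks a dropped time."""
    Fp, gamma = [], []
    for a, val in enumerate(F):
        # last time a strictly smaller value appeared before a
        last = max((j for j in range(a) if F[j] < val), default=-1)
        count = sum(1 for j in range(last + 1, a) if F[j] == val)
        if count < ell and val <= d:
            gamma.append(len(Fp))   # kept: next free index of F'
            Fp.append(val)
        else:
            gamma.append(-1)        # dropped
    return Fp, gamma

# with ell = 2, the third and fourth consecutive 2's are dropped;
# after the value 1 occurs, the count for 2 restarts
print(reduce_function([2, 2, 2, 2, 1, 2], d=3, ell=2))
```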
Suppose $F(t_0) = a$ for some $a \in \mathbb{Z}^+$, and let $t_{prev}$ be the largest integer $t_{prev} < t_0$ such that $F(t_{prev}) < a$. Suppose further that $t_0$ is the $m$th instance such that $F(t) = a$ for some $t$. If $m > \ell$, there cannot be any node infected at time $t_0$ under $F(t)$, and thus it follows that $m \leq \ell$. But if $m \leq \ell$, then $\gamma(t_0) \neq -1$, and therefore all nodes that are infected under $F(t)$ became infected at some time $t$ where $\gamma(t) \neq -1$. Recall that $A_0^{F} = A_0^{F'}$, and suppose that for some $n$ such that $\gamma(n)\neq -1$, one has $A_n^{F} = A_{\gamma(n)}^{F'}$. We know that $\gamma(t) = -1$ for any $n < t < \gamma^{-1}(\gamma(n)+1)$, so no node becomes infected under $F(t)$ after time $n$ but before $\gamma^{-1}(\gamma(n)+1)$. This means that the set of previously infected nodes at time $\gamma^{-1}(\gamma(n)+1)-1$ is the same as the set of nodes infected by time $n$, leading to $$A_n^{F} = A_{\gamma^{-1}(\gamma(n)+1)-1}^{F}.$$ Then, since $F(\gamma^{-1}(\gamma(n)+1)) = F'(\gamma(n)+1)$ and the set of previously infected nodes for both is $A_n^{F}$, we know that $A_{\gamma^{-1}(\gamma(n)+1)}^{F} = A_{\gamma(n)+1}^{F'}$. Thus, for any time $n'$ in the domain of $F'(t)$, there exists a corresponding time $n$ for percolation under $F(t)$ such that the infected set at time $n$ under $F(t)$ and the infected set at time $n'$ under $F'(t)$ are the same, and thus $A_\infty^{F} = A_\infty^{F'}$. From the above Proposition \[P1\] we can see two things: an upper bound on the percolation time is given by the largest $t$ such that $F'(t)$ is defined, and we can use this function in an algorithm to find the smallest minimal percolating set, since $F(t)$ and $F'(t)$ are equivalent.
Moreover, an upper bound on the percolation time cannot be obtained without regard to the percolation function: suppose we had such an upper bound $b$ on some connected graph of degree $d$ with $1$ node initially infected and more than $1$ node not initially infected. Then, if we take a percolation function $F(t)$ such that $F(t) = d+1$ for all $t \leq b$ and $F(t)=1$ otherwise, we see that there will be nodes infected at time $b+1$, leading to a contradiction. Suppose the degree of a graph is $d$. Define a sequence $a$ where $a_1 = d$ and $a_{n+1} = (a_n+1)d$. Then the size of the domain of $F'(t)$ in Proposition \[P1\] is at most $\Sigma^{d}_{i=1}a_i$. \[ll\] Suppose each value appears exactly $d$ times after the last value smaller than it appears. To count how large the domain can be, we start with the possible times $t$ such that $F'(t)=1$; there are $d$ of them, as $1$ can maximally appear $d$ times. Note that this is equal to $a_1$. Now, suppose we have already counted all the possible times $t$ with $F'(t) < n+1$, for $1 \leq n < d$, which amount to $a_{n}$. These $a_n$ times partition the timeline into $a_{n}+1$ gaps (between consecutive such times, as well as before and after all of them), and in each gap the value $n+1$ can appear maximally $d$ times; thus there are maximally $(a_{n}+1)d$ elements $t$ in the domain such that $F'(t) = n+1$. Summing all of them yields $\Sigma^{d}_{i=1}a_i$, the total number of elements in the domain. From Proposition \[P1\], for some $F(t)$, $A_0$ and $n$, one has $A^{F}_{\gamma^{-1}(n)} = A^{F'}_{n}$. Then, if $A_\infty^{F'}$ is reached by time $\Sigma^{d}_{i=1}a_i$, the set must be infected by time $\gamma^{-1}(\Sigma^{d}_{i=1}a_i)$. Hence, in this setting, an upper bound for $F(t)$ percolating on a graph of degree $d$ can be found by taking $\gamma^{-1}(\Sigma^{d}_{i=1}a_i)$, as defined in Lemma \[ll\].
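The recursion $a_1=d$, $a_{n+1}=(a_n+1)d$ and the resulting bound can be evaluated directly; the function name is ours:

```python
def domain_bound(d):
    """Sum a_1 + ... + a_d with a_1 = d and a_{n+1} = (a_n + 1)*d,
    the bound on the domain size of F'(t) from the lemma above."""
    a, total = d, 0
    for _ in range(d):
        total += a
        a = (a + 1) * d
    return total

print(domain_bound(2))   # a_1 = 2, a_2 = 6, so the bound is 8
```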
Minimal Percolating Sets {#minimal} ======================== When considering percolations within a graph, it is of much interest to understand which subsets of vertices, when infected, would lead to the infection reaching the whole graph. A [*percolating set*]{} of a graph $G$ with percolation function $F(t)$ is a set $A_0$ for which $A_\infty^F$ is the whole vertex set of $G$, reached at a finite time. A [*minimal percolating set*]{} is a percolating set $A$ such that if any node is removed from $A$, it is no longer a percolating set. A natural motivation for studying minimal percolating sets is that as long as we keep the number of individuals infected to less than the size of the minimal percolating set, we know that the entire population will not be decimated. Bounds on minimal percolating sets on grids and other less regular graphs have been studied extensively. For instance, it has been shown in [@Morris] that for a grid $[n]^2$, there exists a minimal percolating set of size $4n^2/33 + o(n^2)$, but there does not exist one larger than $(n + 2)^2/6$. In the case of trees, [@percset] gives an algorithm that finds the largest and smallest minimal percolating sets on trees. However, the results in the above papers cannot be easily extended to the dynamical model, because they make several assumptions, such as $F(t) \neq 1$, that do not necessarily hold in the dynamical model. \[ex2\]An example of a minimal percolating set with $F(t)=t$ can be seen in Figure \[ex1\] (a). In this case, the minimal percolating set has size 3. Indeed, we see that if we take away any of the red nodes, the remaining initially infected red nodes would not percolate to the whole tree, and thus they form a minimal percolating set; further, there exist no percolating sets of size 1 or 2, thus this is a smallest minimal percolating set. It should be noted that minimal percolating sets can have different sizes: for example, another minimal percolating set, with $5$ vertices, appears in Figure \[ex1\] (b).
![(a) In this tree, having nodes $2,4,5$ infected (shaded in red) initially is sufficient to ensure that the whole tree is infected. (b) This minimal percolating set, shaded in red, is of size $5$.[]{data-label="ex1"}](Fig8.jpg) In what follows we shall work with general finite trees $T(V,E)$ with set of vertices $V$ and set of edges $E$. In particular, we shall consider the smallest minimal percolating sets in the following section. Algorithms for Finding Smallest Minimal Percolating Set {#minimal2} ======================================================= Consider $F(t)$-bootstrap percolation on a tree $T(V,E)$ with initially infected set $A_0\subset V$. As before, we shall denote by $A_t$ the set of nodes infected at time $t$. For simplicity, we shall use here the word “filled” synonymously with “infected”. In order to build an algorithm to find smallest percolating sets, we first need to introduce a few definitions that will simplify the notation at later stages. We shall denote by $L(a)$ the largest time $t$ such that $a \leq F(t)$, and if there does not exist such a time $t$, then we set $L(a)=\infty$. Similarly, we define $B(a)$ as the smallest time $t$ such that $a \leq F(t)$, and if such a time $t$ does not exist, we set $B(a)=\infty$. Given $a,b\in \mathbb{N}$, if $a<b$ then $L(a) \geq L(b)$. Indeed, this holds because any time $t$ satisfying $b \leq F(t)$ also satisfies $a \leq F(t)$. Note that, in general, a smallest percolating set $A_0$ must be a minimal percolating set. To see this, suppose not. Then there exists some $v$ in $A_0$ such that $A_0 -\{v\}$ percolates the graph. That means that $A_0 -\{v\}$, a smaller set than $A_0$, is a percolating set. However, since $A_0$ is a smallest percolating set, we have a contradiction. Hence, showing that a percolating set $A_0$ is the smallest implies that $A_0$ is a minimal percolating set. The first algorithm that comes to mind is to try every case.
There are $2^n$ possible sets $A_0$, and for each set we must percolate $A_0$ on $T$ to find the smallest percolating set. This amounts to an algorithm of complexity $O(t2^n)$, where $t$ is the upper bound on the percolation time. In what follows we shall describe a polynomial-time algorithm to find the smallest minimal percolating set on $T(V,E)$, presented in Theorem \[teorema\]. For this, we shall introduce two particular times associated to each vertex in the graph, and formally define what isolated vertices are. For each node $v$ in the graph, we let $t_a(v)$ be the time when it is infected, and $t_*(v)$ the time when it is last allowed to be infected; moreover, when building our algorithm, each vertex will be allocated a truth value recording whether it needs to be further considered. A node $v$ is said to be [*isolated*]{} with regards to $A_0$ if there is no vertex $w\in V$ such that $v$ becomes infected when considering $F(t)$-bootstrap percolation with initial set $A_0 \cup \{w\}$. From the above definition, a node is isolated with regards to a set if it is impossible to infect it by adding to that set any single other node that is not itself. Building towards the percolating algorithm, we shall consider a few lemmas first. If a node cannot be infected by including a neighbor in the initial set, it is isolated. \[L1\] From Remark \[L1\], by filling a neighbor in the initial set, we either increase the number of infected neighbors to a sufficient amount, or we expand the time allowed to percolate with fewer neighbors so that percolation is possible. We explore these more precisely in the next lemma, which gives a quick test of whether a vertex is isolated. \[L3\] Let $v$ be an uninfected node such that not all of its $n$ neighbors are in the set $A_0$. Define the function $$\begin{aligned} N:\{0,1,...,n\} \rightarrow \mathbb{Z}\label{NN}\end{aligned}$$ where $N(i)$ is the smallest time at which $i$ of the neighbors of node $v$ are infected, and set $N(0)=0$.
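The exhaustive $O(t2^n)$ search just described can be written down directly; it is only feasible for very small trees, and the helper names are ours:

```python
from itertools import combinations

def percolate(neighbors, A0, F, horizon=20):
    infected = set(A0)
    for t in range(horizon):
        infected |= {v for v in neighbors
                     if len(neighbors[v] & infected) >= F(t)}
    return infected

def smallest_percolating_set(neighbors, F):
    """Try every subset A_0 in increasing size: O(t * 2^n)."""
    V = list(neighbors)
    for r in range(len(V) + 1):
        for A0 in combinations(V, r):
            if percolate(neighbors, A0, F) == set(V):
                return set(A0)

# with a constant threshold of 2, every degree-one leaf must be seeded
tree = {1: {2, 3}, 2: {1}, 3: {1, 4, 5}, 4: {3}, 5: {3}}
print(smallest_percolating_set(tree, lambda t: 2))
```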
Then, a vertex $v$ is isolated if and only if there exists no $i$ such that $$F(t) \leq i+1~ {\rm for~ some~} t \in (N(i),t_*].$$ Suppose $s\in N(v)\setminus A_0$. Then, if there exists $i$ such that $F(t) \leq i+1$ for some $t \in (N(i),t_*]$, using $A_0 \cup \{s\}$ as the initially infected set allows percolation to happen at time $t$, since there would be $i+1$ neighbors infected at each time $N(i)$. Thus, by the contrapositive, the forward direction is proven. Conversely, let $v$ be not isolated, so that $v$ becomes infected when percolating the initial set $A_0 \cup \{s\}$ for some neighbor $s$ of $v$. Then there would be $i+1$ neighbors infected at each time $N(i)$. Moreover, for $v$ to be infected, the $i+1$ neighbors must be able to fill $v$ in the allowed time, $(N(i),t_*]$. Thus there exists $N(i)$ such that $F(t) \leq i+1$ for some $t \in (N(i),t_*]$. By the contrapositive, the backward direction is proven. Note that if a vertex $v$ is uninfected and $N(v)\subset A_0$, then the vertex must be isolated. In what follows we shall study the effect of having different initially infected sets when studying $F(t)$-bootstrap percolation. \[L2\] Let $Q$ be an initial set for which a fixed vertex $v$ with $n$ neighbours is isolated. Denoting the neighbors of $v$ by $s_1, s_2,...,s_n$, we let the times at which they are infected be $t_1^Q, t_2^Q,\ldots,t_n^Q$. Here, if for some $1\leq i \leq n$ the vertex $s_i$ is not infected, then we set $t_i^Q$ to be some arbitrarily large number. Moreover, consider another initial set $P$ such that the times at which $s_1, s_2,..., s_n$ are infected are $t_1^P, t_2^P,\ldots,t_n^P$, satisfying $$\begin{aligned} t_i^Q=&t_i^P&~{\rm for }~ i\neq j;\nonumber\\ t_j^Q \leq& t_j^P&~{\rm for }~ i= j,\nonumber \end{aligned}$$ for some $1 \leq j \leq n$. If $v \notin P$, then the vertex $v$ must be isolated with regards to $P$ as well. Consider $N_Q(i)$ as defined in \[NN\] for the set $Q$, and $N_P(i)$ the corresponding function for the set $P$.
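The criterion of the lemma can be phrased as a small test. Here `neighbor_times` lists, for the neighbours of $v$ that ever get infected, their infection times (so $N(i)$ is the $i$-th smallest entry); the names and the finite time range are our own simplifications:

```python
def is_isolated(neighbor_times, F, t_star):
    """Return True iff there is no i with F(t) <= i+1 for some
    t in (N(i), t_star], where N(i) is the time at which i
    neighbours of v are infected (and N(0) = 0)."""
    times = sorted(neighbor_times)
    for i in range(len(times) + 1):
        Ni = 0 if i == 0 else times[i - 1]
        if any(F(t) <= i + 1 for t in range(Ni + 1, t_star + 1)):
            return False    # adding one extra neighbour can infect v
    return True

# one neighbour infected at time 1: a constant threshold of 2 can be
# met by adding a second neighbour, but a threshold of 3 cannot
print(is_isolated([1], lambda t: 2, t_star=5),
      is_isolated([1], lambda t: 3, t_star=5))
```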
Then it must be true that for all $k \in \{0,1,...,n\}$ one has $N_Q(k) \leq N_P(k)$. Indeed, this is because with the set $P$, each neighbor of $v$ is infected at or after the time at which they are with the set $Q$. Then, from Lemma \[L3\], since $v$ is isolated with regards to $Q$, there is no $m$ such that $$F(t) \leq m+1{~\rm~ for~ some~ }~t \in (N_Q(m),t_*].$$ However, since $$N_Q(k) \leq N_P(k){~\rm~ for~ all~ }~k \in \{0,1,...,n\},$$ we can say that there is no $m$ such that $$F(t) \leq m+1{~\rm~ for~ some~ }~t \in (N_P(m),t_*]$$ as $(N_P(m),t_*] \subseteq (N_Q(m),t_*].$ Thus we know that $v$ must also be isolated with regards to $P$. \[D2\] Given a vertex $v$ which is not isolated, we define $t_p(v)\in (0,t_*]$ to be the largest integer such that there exists $N(i)$ with $F(t_p) \leq i+1$. Note that in order to fill a node $v$, one can either fill one of its neighbors by time $t_p(v)$ (when $v$ is not isolated), or just add the vertex itself to the initial set. Hence, one needs to fill a node $v_n$ which is either the parent ${\rm par}(v)$, a child ${\rm chi}(v)$, or $v$ itself. Let $v\notin A_0$ be an isolated node. To achieve percolation, it is always better (faster) to include $v$ in $A_0$ than attempting to make $v$ unisolated. It is possible to make $v$ unisolated by including only descendants of $v$ in $A_0$, since we must include fewer than $deg(v)$ neighbors. But we know that, given the choice to include a descendant or $v$ itself in the initial set, choosing $v$ is always advantageous, because the upwards percolation achieved by $v$ being infected at some positive time is a subset of the upwards percolation achieved by filling it at time $0$. Thus, including $v$ in the initial set is superior. The above set-up can be used further to find which vertex needs to be chosen as $v_n$. Consider a vertex $v\notin A_0$.
Then, in finding a node $u$ to add to $A_0$ so that $v \in A_\infty$ for the initial set $A_0 \cup \{u\}$ and such that $A_\infty$ is maximized, the vertex $v_n$ must be the parent ${\rm par}(v)$ of $v$. Filling $v$ by time $t_*(v)$ already ensures that all descendants of $v$ will be infected, and all percolation upwards must go through the parent ${\rm par}(v)$ of $v$. This means that by filling any child of $v$ in order to fill $v$ (that is, by including some descendant of $v$ in $A_0$), we obtain a subset of the percolation obtained by including the parent ${\rm par}(v)$ of $v$ in $A_0$. Therefore, the parent ${\rm par}(v)$ of $v$ or a further ancestor needs to be included in $A_0$, which means $v_n$ needs to be the parent ${\rm par}(v)$ of $v$. Note that given a node $v\notin A_0$, if we fill its parent ${\rm par}(v)$ before $t_p(v)$, then the vertex will be infected. We are now ready for our main result, which improves the naive $O(t2^n)$ bound for finding minimal percolating sets to $O(tn)$, as discussed further in the last section. \[teorema\]\[teo1\] To obtain one smallest minimal percolating set of a tree $T(V,E)$ with percolation function $F(t)$, proceed as follows: - Step 1. initialize the tree: for each node $v$, set $t_*(v)$ to be some arbitrarily large number, and mark it as needing to be considered. - Step 2. percolate using the current $A_0$. Save the times $t_a$ at which the nodes were infected. Stop the algorithm if the set of infected nodes equals the set $V$. - Step 3. consider a node $v$, not yet considered, that is furthest away from the root, and if there are multiple such nodes, choose one that is isolated, if it exists. - if $v$ is isolated or is the root, add $v$ to $A_0$. - otherwise, set $t_*({\rm par}(v))=t_p(v)-1$ (as in Definition \[D2\]) if it is smaller than the current $t_*({\rm par}(v))$ of the parent. Set $v$ as considered. - Step 4. go to step 2. After the process has finished, the resulting set $A_0$ is one of the smallest minimal percolating sets. 
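Step 2 of the theorem can be made concrete in code. The sketch below is an illustration under our own naming conventions, not the authors' implementation: it simulates $F(t)$-bootstrap percolation on a tree stored as an adjacency dictionary, assuming the update rule that an uninfected vertex becomes infected at time $t$ once at least $F(t)$ of its neighbours are already infected, and it returns the infection time of every infected vertex, which is exactly the data that Step 2 saves.

```python
def percolate(adj, A0, F, t_star):
    """Run F(t)-bootstrap percolation up to time t_star.

    adj:    dict mapping each vertex to a list of its neighbours.
    A0:     initially infected set (infected at time 0).
    F:      percolation function; F(t) is the threshold at time t.
    Returns a dict {vertex: infection time} over infected vertices.
    """
    time_of = {v: 0 for v in A0}
    for t in range(1, t_star + 1):
        # Vertices meeting the threshold F(t); collected first so that
        # all infections at time t happen simultaneously.
        newly = [v for v in adj
                 if v not in time_of
                 and sum(1 for u in adj[v] if u in time_of) >= F(t)]
        for v in newly:
            time_of[v] = t
    return time_of

# Path 0-1-2-3 with F(t) = 1: the infection spreads one edge per step.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
times = percolate(adj, {0}, lambda t: 1, t_star=5)  # {0: 0, 1: 1, 2: 2, 3: 3}
```

With a threshold of $2$ and $A_0=\{0,2\}$ on the same path, vertex $1$ is infected at time $1$ while vertex $3$, having a single neighbour, never is; this is the kind of uninfected vertex the algorithm then inspects in Step 3.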
The proof of the theorem, describing the algorithm through which one can find a smallest percolating set, is organized as follows: we will first show that the set $A_0$ constructed through the steps of the theorem is a minimal percolating set, and then show that it is the smallest such set. In order to see that $A_0$ is a minimal percolating set, we first need to show that $A_0$ percolates. In step 3, we have included in $A_0$ all isolated nodes, as well as the root if it wasn’t infected already, and we have ensured that all other nodes will be filled by guaranteeing that their parents are infected by their time $t_p$. Showing that $A_0$ is a minimal percolating set is equivalent to showing that if we remove any node from $A_0$, the resulting set will not percolate to the whole tree. Note that in the process, we have only included isolated nodes in $A_0$ other than the root. This means that if any node $v_0$ is removed from $A_0$, the remaining set will not percolate to $v_0$: we only fill nodes higher than $v_0$ after considering $v_0$, and since making a node unisolated requires filling at least one node higher than it and one descendant of $v_0$, the vertex $v_0$ cannot be infected after removing it from $A_0$. Moreover, if the root is in $A_0$, since we considered the root last, it follows that the rest of $A_0$ does not percolate to the root. Thus, $A_0$ is a minimal percolating set. Now we show, by contradiction using Lemma \[L2\], that the set $A_0$ constructed through the algorithm is of the smallest percolating size. For this, suppose there is some other minimal percolating set $B$ for which $|B| < |A_0|$. Then, we can build an injection from $A_0$ to $B$ in the following manner: iteratively consider the node $a \in A_0$ that is furthest from the root and hasn’t been considered, and map it to a vertex $b \in B$ which is either $a$ itself or one of its descendants. We know that such a $b$ must exist by induction. We first consider the case where $a$ has no descendant in $A$. 
Then, if there is a vertex $b\in B$ which is a descendant of $a$, we map $a$ to $b$. Now suppose there is no node $b \in B$ that is a descendant of $a$. Then, $a \in B$, because otherwise $a$ would be isolated with respect to $B$ as well, by Lemma \[L2\]. This means that we can map $a$ to $a$ in this case. Now we can consider the case where every descendant $d$ of $a$ such that $d \in A:=A_0$ has been mapped to a node $b_d\in B$, where $b_d$ is $d$ or a descendant of $d$. If there is a $b\in B$ which is a descendant of $a$ and distinct from all the $b_d$, then no nodes in $A$ have been matched to $b$ yet, allowing us to map $a$ to $b$. Now suppose there is no such $b\in B$. Then, every node in $B$ that is a descendant of $a$ is either some descendant of $a$ lying in $A$, or a descendant of such a node. This means that, when percolating $B$, the children of $a$ will all be infected at later times than when percolating $A$, and by Lemma \[L2\] one has that $a \in B$, because otherwise $a$ would be isolated with respect to $B$. So in this case, we can map $a$ to $a$. The map constructed above is injective because each element of $B$ is mapped to at most once. Since we constructed an injective function from the set $A_0$ generated by the algorithm to a smaller minimal percolating set $B$, we have a contradiction, since the existence of the injection forces $|A_0| \leq |B|$, so that $B$ cannot be strictly smaller. Thus, the set generated by the algorithm must be a smallest minimal percolating set. From Theorem \[teo1\] one can find the smallest minimal percolating set of any finite tree. Moreover, it gives an intuition for how to think of the vertices of the graph: in particular, the property of being “isolated” is not an absolute property, but a property relative to the set of nodes that have been infected before it. This isolatedness is easy to define and work with in trees, since each node has at most one parent. 
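The isolation test of Lemma \[L3\], which the argument above relies on at every step, is easy to state in code. The following is a minimal sketch (an illustration, not the authors' code): writing $N(i)$ for the time at which the $(i+1)$-th neighbour of $v$ becomes infected, the vertex $v$ is isolated iff no $i$ admits a $t \in (N(i),t_*]$ with $F(t) \leq i+1$.

```python
def is_isolated(neighbor_times, F, t_star):
    """Lemma [L3] test: v is isolated iff there is no i with
    F(t) <= i + 1 for some t in (N(i), t_star]."""
    times = sorted(neighbor_times)   # N(i) = times[i]
    for i, Ni in enumerate(times):
        for t in range(Ni + 1, t_star + 1):
            if F(t) <= i + 1:
                return False         # v can still be infected
    return True

# With threshold F(t) = 2, a vertex whose two neighbours are infected
# at times 1 and 3 can itself be infected at t = 4 or 5:
print(is_isolated([1, 3], lambda t: 2, t_star=5))   # -> False
# Shrinking the time horizon to t_star = 3 leaves no admissible t:
print(is_isolated([1, 3], lambda t: 2, t_star=3))   # -> True
```

Delaying any neighbour's infection time can only shrink the intervals $(N(i),t_*]$, which is precisely the monotonicity used in Lemma \[L2\].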
Moreover, a similar property may be considered in more general graphs, and we hope to explore this in future work. Below we shall demonstrate the algorithm of Theorem \[teo1\] with an example. We will perform the algorithm on the tree in Example \[ex2\], with percolating function $F(t)=t$. We first initialize all the nodes, setting their time $t_*$ to some arbitrarily large number, represented as $\infty$ in Figure \[inf1\] below. ![(a)-(c) show the first three updates through the algorithm in Theorem \[teo1\], where the vertices considered at each time are shaded and each vertex is assigned the value of $t_*$. []{data-label="inf1"}](Fig4.png) Percolating the empty set $A_0$, the resulting infected set is empty, as shown in Figure \[inf1\] (a). We then consider the furthest node from the root. None of them are isolated, so we can consider any; we begin by considering node $6$ in the labelling of Figure \[ex1\] of Example \[ex2\]. It is not isolated, so we set the $t_*$ of the parent to $t_p-1=0$, as can be seen in Figure \[inf1\] (b). Then we consider another node furthest from the root, and through the algorithm set the $t_*$ of the parent to $t_p-1=0$, as can be seen in Figure \[inf1\] (c). The following steps of the algorithm are depicted in Figure \[inf2\] below. ![ (a)-(b) show the updates 4-5 through the algorithm. (c) shows the set $A_0$ in red, and the infected vertices in blue. []{data-label="inf2"}](Fig5.png) As done in the first three steps of Figure \[inf1\], we consider the next furthest node $v$ from the root, and by the same reasoning as for node $6$, set the $t_*$ of its parent to $t_*({\rm par}(v))=1$, as can be seen in Figure \[inf2\] (a). Now we consider node $4$: since it is isolated, we fill it in as in Figure \[inf2\] (b). The set of nodes infected can be seen in Figure \[inf2\] (c). We then consider node $5$, the furthest node from the root not considered yet. 
Since it is not isolated, we change the $t_*$ of its parent to $t_p(v)-1=0$, as in Figure \[inf3\] (a). ![(a)-(c) show the updates through the algorithm in Theorem \[teo1\] after setting $A_0$ to be as in Figure \[inf2\].[]{data-label="inf3"}](Fig6.png) Then we consider node $3$, which is isolated, so we include it in $A_0$. The infected nodes resulting from percolation by this $A_0$ are shown as red vertices in Figure \[inf3\] (c). In order to finish the process, we consider the vertex $v=2$, since it is the furthest away non-considered node. It is not isolated, so we change the $t_*$ of its parent to $t_p(v)-1=0$, as shown in Figure \[inf4\] (a). Finally, we consider the root: since it is isolated, we include it in our $A_0$, as seen in Figure \[inf4\] (b). Percolating this $A_0$ then results in all nodes being infected, as shown in Figure \[inf4\] (c), and thus we stop our algorithm. ![Final steps of the algorithm.[]{data-label="inf4"}](Fig7.png) Through the above algorithm, we have constructed a smallest minimal percolating set, shown as red vertices in Figure \[inf4\] (c), which is of size $3$. Comparing it with Example \[ex2\], we see that the minimal percolating set in that example is indeed the smallest, also with $3$ elements. Finally, it should be noted that in general the times $t_p$ of different nodes may differ and are not the same object. From the above example, and its comparison with Example \[ex2\], one can see that a graph can have multiple different smallest minimal percolating sets, and the algorithm finds just one. In the algorithm of Theorem \[teo1\], one minimizes the size of a minimal percolating set, relying on the fact that as long as a node is not isolated, one can engineer its parent to become infected so as to infect the node in question. 
The motivation for the definition of isolated stems from trying to find a variable that describes whether a node can still become infected by infecting its parent. Because the algorithm is on trees, we could define isolation as the inability to be infected if we add only one node. Concluding remarks {#final} ================== In order to show the relevance of our work, we shall conclude this note with a short comparison of our model with those existing in the literature.\ [**Complexity.**]{} Firstly, we shall consider the complexity of the algorithm in Theorem \[teo1\] for finding the smallest minimal percolating set on a graph with $n$ vertices. To calculate this, suppose $t$ is the upper bound on the percolation time; we have presented a way to find such an upper bound in the previous sections. In the algorithm, we first initialize the tree, which takes linear time. Steps $2$ and $3$ are run at most $n$ times, as there can only be a total of $n$ unconsidered nodes. The upper bound on time is $t$, so step 2 takes $O(t)$ time to run. Determining whether a node is isolated takes linear time, so determining the isolatedness of all nodes on the same level takes quadratic time, and the remaining specifics of step 3 take constant time. Thus the algorithm is $O(n+n(t+n^2)) = O(tn + n^3)$, which is $O(tn)$ whenever $t$ dominates $n^2$, and in any case much better than the $O(t2^n)$ complexity of the naive algorithm.\ [**Comparison on perfect trees.**]{} Finally, we shall compare our algorithm with classical $r$-bootstrap percolation. For this, in Figure \[comp\] we show a comparison of the sizes of the smallest minimal percolating sets on perfect trees of height $4$, varying the degree of the tree. Two different functions were compared: one constant and the other quadratic. 
We see that the time-dependent bootstrap percolation model can be superior in modelling diseases with a time-variant speed of spread, in that if each individual has around $10$ social connections, the smallest number of individuals needed to be infected in order to percolate the whole population differs by around $10^3$ between the two models. [**Comparison on random trees.**]{} We shall conclude this work by comparing the smallest minimal percolating sets found through our algorithm with those constructed by Riedl in [@percset]. In order to understand the difference between the two models, we shall first consider in Figure \[comp1\] three percolating functions $F(t)$ on random trees of different sizes, where each random tree has been formed by beginning with one node and then, for each new node $i$ added, using a random number from $1$ to $i-1$ to determine where to attach this node. In the above picture, the size of the smallest minimal percolating set can be obtained by multiplying the plotted relative size by the corresponding value of $n$. In particular, one can see how the exponential function requires an increasingly larger minimal percolating set in comparison with polynomial percolating functions. [**Comparison with [@percset].**]{} To compare with the work of [@percset], we shall run the algorithm with $F(t)=2$ (leading to 2-bootstrap percolation as considered in [@percset]) as well as with a linear function on the following graph: With our algorithm, we see that nodes $2$, $3$ and $5$ are isolated, and when we add them to the initial set, all nodes become infected. Thus the smallest minimal percolating set found with our algorithm has size $3$. Riedl provided an algorithm in [@percset] for the smallest minimal percolating sets in trees for $r$-bootstrap percolation that runs in linear time. We shall describe his algorithm generally to clarify the comparisons we will make. 
Riedl defined a trailing star or trailing pseudo-star as a subtree with each vertex being of distance at most $1$ or $2$, respectively, from a certain center vertex that is connected to the rest of the tree by only one edge. Then, the first step of Riedl’s algorithm is a reduction procedure that ensures every non-leaf has degree at least $r$: intuitively, one repeatedly finds a vertex with degree less than $r$, includes it in the minimal percolating set, removes it and all the edges attached to it, and, for each of the resulting connected components, adds a new node with degree $1$ connected to the node that was a neighbor of the removed node. Then, the algorithm identifies a trailing star or pseudo-star, whose center shall be denoted by $v$ and its set of leaves by $L$. Letting the original tree be $T$, if the number of leaves on $v$ is less than $r$, then set $T'=T \setminus (v \cup L)$; otherwise, set $T'=T\setminus L$. Recursively set $A'$ as the smallest minimal percolating set of $T'$ under $r$-bootstrap percolation. Then, the smallest minimal percolating set for $T$ is $A' \cup L$ if $|L|<r$ and $A' \cup L \setminus v$ otherwise. Using Riedl’s algorithm, we first note that there is a trailing star centered at $3$ with $2$ leaves. Removing the leaf, there is a trailing star at $1$ with $1$ leaf. Removing $1$ and $2$, we have one node left, which is in our $A'$. Adding the leaves back and removing $3$, we have an $A_0$ consisting of $2$, $3$ and $5$, a smallest minimal percolating set. Thus the smallest minimal percolating set obtained with Riedl’s algorithm also has size $3$, as expected. We shall now compare our algorithm to that of Riedl. A key step in Riedl’s algorithm, namely including the leaves of stars and pseudo-stars in the final minimal percolating set, assumes that these leaves cannot be infected, since it is assumed that $r > 1$. However, in our algorithm we consider functions that may take the value $1$ somewhere, so we cannot make that assumption. 
Further, in $r$-bootstrap percolation the time of infection of each vertex does not need to be taken into account when calculating the conditions for a node to be infected, since $r$ is constant, whereas in the time-dependent case it is necessary: suppose a node has $n$ neighbors and there is only one $t$ such that $F(t) \leq n$; then all neighbors must be infected by time $t$ in order for the node to become infected.\ [**Concluding remarks.**]{} The problem our algorithm solves is a generalization of Riedl’s, in that it finds one smallest minimal percolating set for functions including constant ones. It has higher computational complexity, since without accounting for time limits it is not guaranteed that an unisolated node will be infected once one more of its neighbors is infected. Finally, we should mention that the work presented in the previous sections could be generalized in several directions; in particular, we hope to develop a similar algorithm for the largest minimal percolating set, and to study the size of the largest and smallest minimal percolating sets in lattices.  \ [**Acknowledgements.**]{} The authors are thankful to MIT PRIMES-USA for the opportunity to conduct this research together, and in particular to Tanya Khovanova for her continued support, to Eric Riedl and Yongyi Chen for comments on a draft of the paper, and to Rinni Bhansali and Fidel I. Schaposnik for useful advice regarding our code. The work of Laura Schaposnik is partially supported through the NSF grants DMS-1509693 and CAREER DMS-1749013, and she is thankful to the Simons Center for Geometry and Physics for the hospitality during part of the preparation of the manuscript. This material is also based upon work supported by the National Science Foundation under Grant No. DMS-1440140 while Laura Schaposnik was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2019 semester.
--- author: - 'Daisuke Kadoh,' - Katsumasa Nakayama bibliography: - 'Refs.bib' title: Direct computational approach to lattice supersymmetric quantum mechanics --- We would like to thank Yoshinobu Kuramashi, Yoshifumi Nakamura, Shinji Takeda, Yuya Shimizu, Yusuke Yoshimura, Hikaru Kawauchi, and Ryo Sakai for valuable comments on TNR formulations which are closely related to this study. D.K. also thanks Naoya Ukita for encouraging our study. This work is supported by JSPS KAKENHI Grant Number JP16K05328 and the MEXT-Supported Program for the Strategic Research Foundation at Private Universities Topological Science (Grant No. S1511006).
--- abstract: 'Haptic devices have been employed to immerse users in VR environments. In particular, hand and finger haptic devices have been deeply developed. However, this type of device occludes hand detection by some tracking systems, while in other tracking systems it is uncomfortable for the users to wear two hand devices (haptic and tracking device). We introduce RecyGlide, a novel wearable multimodal display located at the forearm. The RecyGlide is composed of inverted five-bar linkages and vibration motors. The device provides multimodal tactile feedback such as slippage, a force vector, pressure, and vibration. We tested the discrimination ability of monomodal and multimodal stimuli patterns on the forearm, and confirmed that the multimodal stimuli patterns are more recognizable. This haptic device was used in VR applications, and we proved that it enhances the VR experience and makes it more interactive.' author: - Juan Heredia - Jonathan Tirado - Vladislav Panov - Miguel Altamirano Cabrera - 'Kamal Youcef-Toumi' - Dzmitry Tsetserukou bibliography: - 'sample-base.bib' title: 'RecyGlide : A Forearm-worn Multi-modal Haptic Display aimed to Improve User VR Immersion' --- ![image](3im.jpg){width="\textwidth"} \[fig:teaser\] Introduction ============ Recently, several VR applications have been proposed in many fields: medicine, design, marketing, etc., and to improve user immersion, new methods or instruments are needed. Haptics provides a solution, and enhances the user experience using stimuli [@Han:2018:HAM:3281505.3281507]. 
Most haptic devices are located on the palm or fingers. However, in some cases the haptic device position is a problem, because VR hand tracking systems need a free hand (Leapmotion) or a hand-held instrument (HTC) to recognize the position. Therefore, we propose a novel multi-modal haptic display located on the forearm. Various haptic devices with mono-modal stimuli have been developed for the forearm [@Dobbelstein:2018:MSM:3267242.3267249; @Moriyama:2018:DWH:3267782.3267795]. However, they have a persistent problem: users have more difficulty perceiving a stimulus on the forearm. The forearm is not an advantageous zone, since it does not have as many nerve endings as the palm or fingertips. Consequently, our device produces multiple stimuli to improve user perception. Furthermore, multimodal stimuli experiments have been performed on users’ hands, where the results show improvements in pattern recognition [@10.1007/978-981-13-3194-7_33]. RecyGlide is a novel forearm haptic display providing multimodal stimuli. RecyGlide consists of one inverted five-bar linkage 2-DoF system installed parallel to the radius, which produces a sliding force along the user’s forearm. The second stimulus is vibration; two vibration motors are placed at the device’s edges (see Fig. 2 (b)). The schematic and 3D representation are shown in Fig. 2. The user study aims to identify the advantages of using multimodal stimuli in comparison with monomodal sensations. The experiment consists of multimodal and monomodal pattern recognition by users. Our hypothesis is that two tactile channels could improve the perception of patterns. Device Development ================== $RecyGlide$ provides the sensations of vibration, contact at one point, and sliding on the user’s forearm. The location of the haptic contact point is determined using the kinematic model of inverted five-bar linkages, inspired by $LinkTouch$ technology [@tsetserukou2014]. 
The second stimulus is generated by two vibration motors located on the extreme sides of the device. The two types of stimuli allow creating different patterns on the forearm. They can also be used to interact with the VR environment, e.g., the sensation of submerging in liquids, the feeling of an animal moving on the forearm, or the delivery of information about relative location. The device has 2 DOF, defined by the action of two servomotors. The slippage stimulus is produced by the movement of the motors in the same direction. Conversely, the movement of the motors in opposite directions generates the force stimulus. The combination of the previous stimuli creates other sensations, such as temperature or pressure. The location of $RecyGlide$ is convenient for hand tracking systems. Commercial tracking systems rely on different principles to achieve their objective, but the use of a device on the hand decreases their performance. In the case of visual tracking, if the haptic device is located on the hand, the typical shape of the hand changes, producing poor tracking performance. For systems with trackers like HTC, it is uncomfortable for the user to wear two devices on the hand. The device has been designed to adapt ergonomically to the user’s forearm, allowing free movement of the hand when working in the virtual reality environment. The technical characteristics are listed in Table \[char\]. The table shows the type of servomotors, as well as the weight and the material of the device.

  ------------------------------------ -----------------
  Motors                               Hitec HS-40
  Weight                               $95\ g$
  Material                             PLA and TPU 95A
  Max. normal force at contact point   $2\ N$
  ------------------------------------ -----------------

  : Technical specification of RecyGlide.[]{data-label="char"}

RecyGlide is electronically composed of an Arduino MKR 1000, servomotors, and vibration motors. This Arduino model is well suited for IoT applications because of its wifi module. 
The Arduino is in charge of generating the signals for the motors and of the WiFi TCP/IP communication with the computer. The device maintains constant WiFi communication with the computer through a Python script. A virtual socket transmits the data from Python to the VR application in Unity. In the experiment, a TCP/IP server on the Arduino was designed. The server provides direct access to the device, avoiding the use of the computer. This utility also allows the development of cellphone apps. User Study ========== The objective of the following experiment is to analyze the user’s perception and recognition of patterns when monomodal and multimodal stimuli are rendered on the forearm, and to determine whether the multimodal stimuli increase the user’s perception of the contact point position. The first user experience is a contact stimulus over the cutaneous area produced by the sliding action of the contact point generated by the inverted five-bar linkage device. The second user experience implements stimuli through the combination of the vibration motors and the sliding of the contact point generated by the inverted five-bar linkage device. The results of the experiment will help to understand whether the multimodal stimuli improve the perception of the position generated by the *RecyGlide* device. Experimental Design ------------------- For the execution of this experiment, a pattern bank of 6 different combinations of displacement and vibratory signals was designed, shown in Fig. \[fig:patterns\]. 
In the first group of patterns, A (Small Distance (SD): progress of 25%), B (Medium Distance (MD): progress of 50%) and C (Large Distance (LD): progress of 75%), monomodal stimuli are delivered; the second group, D (Small Distance with Vibration (SDV): progress of 25%), E (Medium Distance with Vibration (MDV): progress of 50%) and F (Large Distance with Vibration (LDV): progress of 75%), includes multimodal stimuli. The vibration is delivered progressively according to the position of the contact point: the nearer the contact point is to the edge where a vibration motor is located, the higher that motor's vibration frequency; when the contact point is located at the middle distance, the vibration is the same in both vibration motors. The highest frequency used in the vibration motors is equal to $500\ Hz$. The sliding speed of the contact point over the skin is constant, with a value of $23\ mm/s$. Experimental Setup ------------------ The user was asked to sit in front of a desk and to wear the $RecyGlide$ device on the right forearm. The device was connected to one Arduino MKR1000. From the Python console, the six patterns were sent to the micro-controller through TCP/IP communication. To reduce external stimuli, the users wore headphones playing white noise. A physical barrier blocked the users’ view of their right arm. Before each section of the experiment, a training session was conducted, in which all the patterns were delivered to the users five times. Six volunteer participants completed the experiment, two women and four men, with an average age of 27 years. Each pattern was delivered on their forearm five times in a random order. Results ------- To summarize the data obtained in the experiment, a confusion matrix is tabulated in Table \[confussion\]. The diagonal terms of the confusion matrix indicate the percentage of correct responses of the subjects. 
                                    SD       MD       LD       SDV      MDV      LDV
  --------------------------------- -------- -------- -------- -------- -------- ---------
  Small Distance (SD)               **77**   20       3        0        0        0
  Medium Distance (MD)              0        **80**   20       0        0        0
  Large Distance (LD)               3        3        **93**   0        0        0
  Small Distance Vibration (SDV)    0        0        0        **97**   0        3
  Medium Distance Vibration (MDV)   0        0        0        3        **93**   3
  Large Distance Vibration (LDV)    0        0        0        0        0        **100**

  : Confusion Matrix for Patterns Recognition.[]{data-label="confussion"}

The results of the experiment revealed that the mean percent correct scores for each subject, averaged over all six patterns, ranged from 80 to 96.7 percent, with an overall group mean of 90.6 percent correct answers. Table \[confussion\] shows that the distinctive patterns LDV and SDV have the highest recognition percentages, 100 and 97, respectively. On the other hand, patterns MD and SD have lower recognition rates of 80 and 77 percent, respectively. For most participants it was difficult to recognize pattern SD, which was usually confused with pattern MD. This demonstrates the need for more distinctive tactile stimuli (vibration) to improve the recognition rate. The ANOVA results showed a statistically significant difference in the recognition of different patterns ($F(5, 30) = 3.2432$, $p = 0.018532 < 0.05$). The paired t-tests showed statistically significant differences between SD and SDV ($p=0.041<0.05$), and between MD and MDV ($p=0.025<0.05$). These results confirm our hypothesis that multimodal stimuli improve human perception on the forearm for short distances. However, the paired t-tests between the long distances with and without vibration do not reveal significant differences; thus, for long-distance perception, the multimodal stimuli are not required. 
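As a quick sanity check on Table \[confussion\], the per-pattern recognition rates can be summarized in a few lines of Python (a sketch using only the diagonal entries of the table; note that the 90.6% figure quoted above is a per-subject average, which need not coincide exactly with the mean of the diagonal):

```python
# Diagonal entries (percent correct) per pattern, from Table [confussion].
rates = {"SD": 77, "MD": 80, "LD": 93, "SDV": 97, "MDV": 93, "LDV": 100}

mono = [rates[p] for p in ("SD", "MD", "LD")]
multi = [rates[p] for p in ("SDV", "MDV", "LDV")]

# Mean over the diagonal, and monomodal vs multimodal group means.
print(sum(rates.values()) / len(rates))          # -> 90.0
print(round(sum(mono) / 3, 1))                   # -> 83.3
print(round(sum(multi) / 3, 1))                  # -> 96.7
```

The roughly 13-point gap between the monomodal and multimodal group means is the effect that the paired t-tests above confirm for the short and medium distances.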
Applications ============ For the demonstration of RecyGlide, some applications were developed using the game engine Unity 3D. The SubmergingHand application clearly illustrates how the device improves user immersion in a virtual reality environment. The device conveys the liquid level through the position of the contact point on the forearm. The viscosity of the liquid is represented by the normal force applied at the contact point. When the liquid is viscous, the applied force is high and the vibration motors work at a higher frequency. The application is shown in Fig. \[fig:sumerg\]. The application “Boundaries Recognition” is an example of how our device helps to perceive the VR environment. Habitually, VR users go beyond object boundaries and do not respect the physical limits of static objects in the scene, such as walls and tables. RecyGlide signals the boundary collision of the hand tracker by activating the vibration motors and moving the contact point to one of the two sides, called the collision side. These two applications are basic examples that can be extended to more complex ones. Additionally, the patterns can represent the current state of some variable or environment characteristic; for example, they can communicate the tracker battery status, the selected environment, or the distance to an objective. Conclusions ----------- We proposed a new haptic device for the forearm and conducted experiments with it. Based on the results of the experiments, we demonstrated that multimodal stimuli patterns are easily recognizable. Therefore, we consider the device suitable for communicating VR messages through the use of patterns. Though the forearm is not an advantageous area, the device has excellent performance and improves VR realism and user immersion. The feeling of submersion in liquids can be used in numerous applications such as swimming simulators, medical operations, games, etc. 
Boundary collision detection is useful for all kinds of VR applications, because it is a constant problem in VR environments.
--- author: - | Malika Chassan$^{1,2}$ , Jean-Marc Azaïs$^1$,\ Guillaume Buscarlet$^3$, Norbert Suard$^2$\ [ $(^1)$ Institut de Mathématiques de Toulouse, Université Toulouse 3, France]{}\ [ $(^2)$ CNES, Toulouse, France]{}\ [ $(^3)$ Thales Alenia Space, Toulouse, France]{}\ [ malika.chassan@math.univ-toulouse.fr]{} bibliography: - 'biblio.bib' title: A proportional hazard model for the estimation of ionosphere storm occurrence risk --- Introduction ============ Severe magnetic storms are feared events for the integrity and continuity of the GPS-EGNOS navigation system, and an accurate modeling of this phenomenon is necessary. Our aim is to estimate the occurrence intensity of extreme magnetic storms per time unit (year). Our data set, retrieved from [@noaaKpap], consists of 80 years of recordings of the so-called 3-hour ap index (for “planetary amplitude”). The ap index quantifies the intensity of planetary geomagnetic activity, using data from 13 observatories. Although the equatorial region is not covered by these 13 observatories, they are spread all over the Earth and the coverage of the ap index is rather global. The ap index is a linear transformation of the quasi-log-scale index Kp, with the same sampling step of 3 hours. The Kp index, and hence the ap index, corresponds to a maximal variation of the magnetic field over a 3-hour period. See [@noaaKpap] for more details on geomagnetic indices. The ap index is available from 1932 to the present, but for our analysis we will use only the 7 complete solar cycles of the data set, from the 17th (on the general list), which starts in September 1933, to the 23rd, which ends in December 2008. There are other data available for the study of ionospheric magnetic activity, each with advantages and disadvantages: - the aa index (for “antipodal amplitude”). Although this index has been available since 1868, it is calculated from only two nearly antipodal geomagnetic stations in England and Australia. 
Thus, this index does not take into account all the magnetic activity of the ionosphere. - the Dst (Disturbance storm time). This index is restricted to the equatorial magnetic perturbation (see Figure \[fig : carte\_obs\]). Moreover, only 57 years of records are available, against 80 for the ap index. Nonetheless, this index has the advantage of being an unbounded integer, contrary to the ap index, which lies in a finite set of non-consecutive positive integers (see Section \[section : difficulties\]). - the raw geomagnetic data, also available for many geomagnetic observatories. The oldest observations date back to 1883 for hourly values and to 1969 for 1-minute values. They consist of magnetometer measurements of magnetic field variations. One disadvantage of these data is the presence of gaps in the records (with gap lengths varying from one month to several years depending on the observatory). Their principal disadvantage is the quantity of pre-treatment required. Since all these indices are strongly correlated [@Rifa], we chose to use only one of them for our analyses. We opted for the ap index. The main advantage of this data set is the large amount of data. Moreover, there is no gap in the ap index, contrary to the raw geomagnetic data. Finally, the ap index is more global than the aa index or the Dst, as one can see in Figure \[fig : carte\_obs\]. ![Positions of the observatories for the Dst ($\bigstar$) and the Kp/ap indices ().[]{data-label="fig : carte_obs"}](Fig1.jpg){width="8cm" height="5.5cm"} Intense storms being scarce, classical statistical methods for probability estimation, such as empirical frequencies, are not precise enough. In many domains, Extreme Value Theory (EVT) makes it possible to estimate the probability of rare extreme events.
However, because of, among other things, the finite discrete form of our data and the obvious non-stationary behavior, EVT cannot be applied.\ In Section 2, we develop the arguments showing that classical EVT cannot be used here. In Section 3, we describe our new proportional hazard model; this is the main contribution of this paper. The parameter estimators are described in Section 4. Section 5 is dedicated to applications to our data set. Difficulties to directly apply EVT {#section : difficulties} =================================== As said before, the first obstacle to a direct application of EVT is the bounded discrete form of our data. The ap index varies in the set $\{$0, 2, 3, 4, 5, 6, 7, 9, 12, 15, 18, 22, 27, 32, 39, 48, 56, 67, 80, 94, 111, 132, 154, 179, 207, 236, 300, 400$\}$. The application of Extreme Value Theory assumes the continuity of the probability distribution, and it is well known that EVT does not apply to discrete finite observations; see for example [@Anderson1970]. The fact that finite discrete data fall outside the scope of the theory is not the only issue. Indeed, in peaks-over-threshold modeling, one has to choose a threshold. The optimal threshold is chosen by analyzing the behavior of the parameters as the threshold varies. This is, generally speaking, not possible with discrete data. See, for example, the work of Cooley, Nychka and Naveau [@CooleyNaveau2007], where the low precision of the measurements makes the data almost discrete: as the threshold grows, the parameter estimators show a sawtooth behavior, which makes threshold selection troublesome.\ A second problem is that the ap index data obviously show a non-stationary pattern. It is well known that solar activity follows cycles with a duration of about 11 years. The corresponding cycles are observable in the ap index behavior and must be taken into account in model assessment.
This behavior implies that the probability of a magnetic storm occurrence depends on the position within the cycle. See Figure \[Cycle2\], for example. ![The ap index during the first complete cycle (17$^{th}$ on the general list, from September 1933 to February 1944). The dotted vertical line represents the peak.[]{data-label="Cycle2"}](Fig2.jpg){width="8cm" height="6.5cm"} One can see the first complete solar cycle of the data set. Its middle is indicated by a vertical dotted line. Strong storms (characterized by a high ap index level) clearly occur mainly during the second half of the cycle. Thus, it is not realistic to model this behavior by a standard stationary extreme value model (e.g. with constant parameters). A more efficient approach is to include non-stationarity in the parameter estimation. But once again, for this type of process, there is no general theory allowing such modeling. In various research fields, such as hydrology, non-stationary extreme models have been proposed. See, for example, the work of Jonathan and Ewans [@Jonathan2011], who model the seasonality of extreme waves in the Gulf of Mexico, where the occurrence rate and intensity of storm peak events vary with season. To model this seasonal effect, the authors chose to express the Generalized Pareto parameters as a function of the season using a Fourier form. But this approach supposes that classical EVT can be applied, which is not the case for the data set used in this paper. Model description ================= In this section, we give a precise definition of what we call a storm, describe the data and pretreatments (mostly declustering and time warping), and describe the model we built and its advantages.
Storm definition, declustering ------------------------------ Ionospheric perturbations are classified in a standardized way using the ap index, according to Table 1:

  **Ionosphere Condition**   **Kp-index**   **ap index**
  -------------------------- -------------- ----------------
  Quiet                      0-1            &lt;7
  Unsettled                  2              7 to &lt;15
  Active                     3              15 to &lt;27
  Minor storm                4              27 to &lt;48
  Major storm                5              48 to &lt;80
  Severe storm               6              80 to &lt;140
  Large severe               7              140 to &lt;240
  Extreme                    8              240 to &lt;400
  Extreme                    9              $\geq$ 400

  : Relation between Kp, ap and ionosphere activity

\ \ We introduce a declustering of the data in order to keep only one event, with the highest intensity, even when there are several periods of high intensity separated by less active ones (lower indices). See Chapter 5.3 in [@Coles2001] for example. This so-called runs declustering allows us to define precisely what we consider a storm. We have to set two parameters: - a *low level*, the threshold above which we consider that a storm begins (typically 111, 132 or 154); - the run length *r*, the minimal number of observations below the low level between two events for them to be considered independent. Thus, two exceedances of the low level separated by fewer than *r* measurements are considered to belong to the same cluster (same storm). Then, for each cluster, we define the storm level as the maximal level reached in the cluster. The first time this maximum is attained is also saved; it represents the storm date. For a cluster, we define the length of the storm as the number of observations between the first up-crossing and the last down-crossing of the low level. Durations of magnetic storms are highly variable, from 3 or 6 hours for an extreme storm (level 300 or 400) up to 90 hours for a low-level storm. But, due to this declustering, we keep only one event time (since only the first maximum occurrence time is saved).
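As an illustration, the runs declustering described above can be sketched in a few lines (a simplified sketch on hypothetical data; the function name and the toy series are ours, not the paper's):

```python
# Sketch of the runs-declustering step described above, on hypothetical data.
# A cluster starts at an exceedance of `low_level`; two exceedances separated
# by fewer than `r` sub-threshold observations belong to the same cluster.

def decluster(ap, low_level=111, r=4):
    """Return one (level, date_index, length) triple per storm cluster."""
    storms = []
    exceed = [i for i, v in enumerate(ap) if v >= low_level]
    if not exceed:
        return storms
    start = prev = exceed[0]
    for i in exceed[1:] + [None]:          # None flushes the last cluster
        if i is not None and i - prev - 1 < r:
            prev = i                       # same cluster
            continue
        cluster = ap[start:prev + 1]
        level = max(cluster)
        storms.append((level, start + cluster.index(level), prev - start + 1))
        if i is not None:
            start = prev = i               # open a new cluster
    return storms
```

For instance, with a low level of 111 and $r=4$, two exceedances separated by five quiet observations are split into two distinct storms.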
This is not incoherent, since we focus on strong storms, which are brief compared to weaker ones, but it must be taken into account in the definition of the probability of occurrence. Precisions on probability of occurrence {#section: precise\_proba} --------------------------------------- As said before, a storm is now defined by three values: the maximal level, the first time this maximum is attained, and the length of the cluster. This modeling allows us to estimate the probability: $$P_1(t) = \mathds P(\textrm{a storm of level 400 \textbf{begins} at time t})$$ whereas we want to know the probability: $$P_2(t) = \mathds P(\textrm{a storm of level 400 \textbf{ is ongoing} at time t})$$ In the whole data set, the level 400 is reached 29 times, but only 23 storms of level 400 remain after the declustering. Among these 23 storms, 17 reach the level 400 only once and 6 remain at this level for two consecutive time steps. Hence, we can write: $$\begin{array}{ll} P_2 (t) &= P_1(t) + P_1(t-1)\times \mathds P(\textrm{storm stays at the level 400 two times})\\ & \simeq P_1(t) \times (1 + \mathds P(\textrm{storm stays at the level 400 two times}))\\ & \simeq P_1(t)\times (1 + 6/23) \end{array}$$ Data description ---------------- After the declustering there remain only 23 magnetic storms of level 400. These are not enough events to estimate their frequency as a function of the covariates. For storms of level 300, one counts 44 events, which is still insufficient. Consequently, we have to use storms of lower levels to estimate the influence of each covariate and then extrapolate the results to the extreme level. We use all storms of level greater than or equal to the *low level* parameter defined in the declustering process. For example, if the low level is 111, we call “high level storm” every storm of level 111, 132, 154, 179, 207, 236, 300 or 400. The “extreme level” is only 400.
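The correction relating $P_1$ and $P_2$ above amounts to a single multiplicative factor, which can be checked numerically (counts taken from the text):

```python
# P2(t) ≈ P1(t) * (1 + P(storm stays at level 400 for two consecutive steps)),
# with that probability estimated empirically as 6/23.
n_storms, n_two_steps = 23, 6
correction = 1 + n_two_steps / n_storms
# 1 + 6/23 = 29/23: the same factor applied later to the kernel estimate
assert abs(correction - 29 / 23) < 1e-12
```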
The mean probability of occurrence for each high level is given in Table \[tab: freq\].

  Level                       111    132    154    179    207    236    300    400
  --------------------------- ------ ------ ------ ------ ------ ------ ------ ------
  Number of storms            182    158    103    84     51     57     44     23
  Frequency $\times 10^{4}$   7.99   6.93   4.52   3.69   2.24   2.50   1.93   1.01
  Frequency in year$^{-1}$    2.33   2.02   1.32   1.08   0.65   0.73   0.56   0.29

  : Number of occurrences and frequency of storms by level[]{data-label="tab: freq"}

\ Besides the 3-hour ap index, we have at our disposal a covariate representing the solar activity of a cycle. This solar cycle activity characteristic is the maximum of the monthly Smoothed Sunspot Number (monthly SSN). For easier interpretation of the results, this covariate is centered. See [@nasaSSN] for more details on the sunspot number. The lengths of the cycles are also available; we call $D_j$ the length of the $j^{th}$ cycle. Time Warping ------------ The durations of the 7 complete solar cycles range from 9.7 to 12.6 years. Thus, in order to analyze all 7 cycles together, a time warping is applied to each cycle: the position of a storm in a cycle is represented by a number between $-0.5$ and $0.5$, where $-0.5$ is the beginning of the cycle, $0.5$ its end and $0$ its middle (peak). In Figure \[Cycle2Warping\], the dash-dotted line represents the warped time for the first complete solar cycle. ![The ap index during the first cycle. The dotted vertical line represents the peak and the dash-dotted line the warped time.[]{data-label="Cycle2Warping"}](Fig3.jpg){width="8cm" height="7cm"} Proportional hazard model ------------------------- The model we built is inspired by the Cox model. First introduced in epidemiology, the Cox model is a proportional hazard model which expresses the instantaneous risk as a function of time and covariates $(X_1,....,X_p)$. In epidemiology, these variables are risk factors as well as treatments.
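The time warping described above can be sketched as a linear map of calendar dates onto $[-0.5, 0.5]$ (a minimal sketch; the dates and the function name are illustrative, not the paper's):

```python
from datetime import date

# Linear warp of a date t within a cycle [start, end] onto [-0.5, 0.5]:
# -0.5 at the cycle start, 0 at its middle (peak), 0.5 at its end.

def warp(t, start, end):
    return (t - start).days / (end - start).days - 0.5
```

For example, the midpoint of a cycle maps to 0, the position of the peak in warped time.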
The instantaneous risk $\lambda(t,X_1,...X_p)$ is defined through the occurrence probability in an infinitesimal interval $$\mathds P \{ \textrm{there exists an event} \in [t,t+dt]\, \}= \lambda(t,X_1,...X_p)dt$$ In the Cox model, this instantaneous risk is a relative risk with respect to a reference risk $ \lambda_0(t)$, often related to a control treatment. The influence of the covariates is modeled by the exponential of a linear combination of them. That is to say: $$\lambda(t,X_1,...X_p) = \lambda_0(t)\exp (\sum_{i=1}^p \beta_i X_i)$$ where $\beta_i$ quantifies the influence of the $i^{th}$ covariate. For more details about the Cox model, see [@Aalen2008]. The model constructed here differs from the Cox model in several meaningful ways: - an event (a storm occurrence) may occur several times within a cycle; hence we use Poisson distributions instead of Bernoulli ones; - the variable $D_j$ is included as a factor; thus the measurement unit is the number of events per time unit and not per cycle; - $\lambda_0(t)$ is not considered a nuisance parameter but a parameter to estimate; - the estimation is made using all high-level storms, and an extrapolation to the extreme level 400 is applied using the parameter $P_{400}$, the probability that a high-level storm grows into a storm of level 400. The use of this parameter assumes that the level reached by a high-level storm does not depend on its instant of appearance. A chi-square independence test showed that this assumption is acceptable; for details on this test, see Appendix \[annexe: chi2\].\ Thus, in the model we developed, the number of high-level storms observed during cycle $j$ at time $t$, denoted $N_j(t)$, is assumed to be a non-homogeneous Poisson process with intensity $\lambda_j(t)$ such that: $$\lambda_j(t) = \lambda_0(t) D_j \exp ( \beta X_j)$$ i.e.
$$N_j([a,b]) \sim \mathcal P \left( \int_a^b \lambda_j (t)dt \right)$$ The basic intensity $\lambda_0(t)$ accounts for the fact that storms occur more frequently during the second half of the cycle; we want to estimate it. Note that only one covariate, the solar activity index $X_j$, is used here, and that the parameter $\beta$ models its influence. A model extension ----------------- We have seen that there is a strong difference between the two halves of a solar cycle. Thus, we tried a modified model, in which the estimation was made separately on each half. The variable $D_j$ was replaced by $D_{j,1}$ and $D_{j,2}$, the lengths of the first and second halves of the cycle, and $N_j(.)$ was then a non-homogeneous Poisson process with intensity: $$\begin{array}{c} \lambda_{j,1}(t) = \lambda_0(t) D_{j,1} \exp ( \beta_1 X_j) \ \textrm{ if } t<0\\ \lambda_{j,2}(t) = \lambda_0(t) D_{j,2} \exp ( \beta_2 X_j) \ \textrm{ if } t\geq0 \end{array}$$ But the estimation in this model led to incoherent results. Indeed, because of a normalization constant that differs on each half (see Section \[section : lambda 0 chap\]), the basic intensity during the first half came out higher than during the second one. Hence, this approach was abandoned.
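As a numerical sanity check of the counting model $N_j([a,b]) \sim \mathcal P(\int_a^b \lambda_j(t)dt)$ defined above, the expected count can be evaluated by quadrature (a sketch with a made-up $\lambda_0$; all parameter values below are illustrative, not estimates from the data):

```python
import math

# E N_j([a, b]) = (integral of lam0 over [a, b]) * D_j * exp(beta * X_j),
# evaluated here with a simple midpoint rule; lam0 and the parameters
# are hypothetical.

def expected_count(lam0, a, b, D, beta, X, n=10_000):
    h = (b - a) / n
    integral = sum(lam0(a + (k + 0.5) * h) for k in range(n)) * h
    return integral * D * math.exp(beta * X)
```

With a constant intensity of 2 per unit of warped time, a cycle of length $D_j = 11$ and $\beta = 0$, the expected count over the whole cycle is simply $2 \times 11 = 22$.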
Estimation ========== $P_{400}$ and $\beta$ --------------------- Since $P_{400}$ is independent of the position in the cycle, the empirical frequency is used: $$\widehat{P_{400}} = \frac{\# \{ \textrm{storms of level } 400 \}}{\# \{ \textrm{storms of level $\geq$ \textit{low level}} \}}$$ Denoting $m=\# \{ \textrm{storms of level $\geq$ \textit{low level}} \}$, we get the corresponding $95\%$ confidence interval: $$P_{400} \in \left[ \widehat{P_{400}} \pm 1.96 \sqrt{\widehat{P_{400}}(1- \widehat{P_{400}})/m}\right]$$ For $\beta$, we use the fact that $$N_j = N_j([-0.5,0.5]) \sim \mathcal P \left( \left[ \int_{-1/2}^{1/2} \lambda_0 (s)ds \right] \ D_j \exp(\beta X_j) \right)$$ As in the Cox model, we verify the sufficiency of the statistic $N_j$, and $\beta$ is estimated by its maximum likelihood estimator in a Poisson generalized linear model. A confidence interval is also computed. All the details can be found in Appendix \[annexe\]. Basic intensity $\lambda_0(t)$ {#section : lambda 0 chap} ------------------------------ Here, we use a kernel estimator. Assuming that $\beta$ is known, we have: $$\widehat{\lambda_0 (t)} = K\displaystyle \sum_{j=1}^J \int_{-1/2}^{1/2} dN_j(t-s)\phi(s) = K \displaystyle \sum_{j=1}^J \int_{-1/2}^{1/2} N_j(t-s)\phi'(s)ds$$ where $J$ is the number of individuals (cycles), $K$ a normalization constant and $\phi$ the kernel, satisfying $\phi(\pm 1/2) =0$ (for the integration by parts) and $\int_{-1/2}^{1/2} \phi(s)ds =1$.\ The bias and the variance of this estimator are calculated using step functions and passing to the limit. Let $\phi$ be a step function, $$\phi(s) = \sum_{i=1}^n a_i \mathbb 1 _{A_i}(s)$$ where the $A_i = [ t_i, t_{i+1} ] $ form a partition of $[ -1/2,1/2 ]$ (we can assume $t_i<t_{i+1}$ without loss of generality) and the $a_i$ are such that $\int_{-1/2}^{1/2} \phi(s) ds =1$.
Then, for each $t \in [-1/2,1/2]$, $$\begin{array}{ll} \widehat{\lambda_0 (t)} &= \displaystyle \sum_{j=1}^J K \int_{-1/2}^{1/2} dN_j(t-s)\phi(s)\\ &= K \displaystyle{ \sum_{j=1}^J} \bigg\{ a_1 N_j([t-t_2,t-t_1]) + ...+ a_n N_j([t-t_{n+1},t-t_n]) \bigg\} \end{array}$$ Thus, since $N_j([a,b]) \sim \mathcal P \left( Q_j \, \int_a^b \lambda_0 (s)ds \right) \ $ with $Q_j = D_j \exp(\beta X_j)$ and since $\mathds E \mathcal P (\xi) = \mathds V \mathcal P (\xi) = \xi$, we get: $$\mathds E \, \widehat{\lambda_0 (t)} = K \displaystyle{ \sum_{j=1}^J} Q_j \int_{-1/2}^{1/2} \lambda_0 (s)\phi(t-s)ds$$\ Similarly, for the variance: $$\begin{array}{ll} \mathds V \, \widehat{\lambda_0 (t)} &= K^2 \displaystyle{ \sum_{j=1}^J} \bigg\{ a_1^2 \mathds V N_j([t-t_2,t-t_1]) + ...+ a_n^2 \mathds V N_j([t-t_{n+1},t-t_n]) \bigg\}\\ &=K^2 \displaystyle{ \sum_{j=1}^J} Q_j \int_{-1/2}^{1/2} \lambda_0 (s)\phi^2(t-s)ds \end{array}$$ In the case of a kernel concentrated around zero we obtain $$\mathds E \, \widehat{\lambda_0 (t)} \simeq K \displaystyle{ \sum_{j=1}^J} Q_j \lambda_0 (t)$$ Hence, the choice $K = 1/ \sum Q_j$ is convenient, and then we get $$\mathds V \, \widehat{\lambda_0 (t)}\simeq \frac{1}{ \sum Q_j}\lambda_0(t) \int_{-1/2}^{1/2}\phi^2(s)ds$$ In practice we use a Gaussian kernel for $\phi$, i.e. $$\phi(s) = \frac{1}{\sqrt{2\pi}h}\exp(-\frac{s^2}{2h^2})$$ with $h$ the bandwidth parameter, determined later. Then, using the fact that $$\phi^2(s) = \frac{1}{\sqrt{2\pi}\,h} \phi(\sqrt 2 s)$$ where $\phi(\sqrt 2 s)$ is, up to a factor $1/\sqrt 2$, the density of a normal distribution $\mathcal N \left( 0, (h/\sqrt 2 )^2 \right)$, we can say that for $h$ sufficiently small $$\int_{-1/2}^{1/2}\phi^2(s)ds \simeq \int_{-\infty}^{+\infty}\phi^2(s)ds = \frac{1}{2\sqrt\pi h}$$ In order to avoid edge effects, a periodization is applied before the estimation process. The bandwidth parameter $h$ is chosen by cross-validation, minimizing the Integrated Square Error.
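A minimal sketch of this estimator with the normalization $K = 1/\sum_j Q_j$ (event times, weights and bandwidth below are hypothetical, and the periodization step is omitted):

```python
import math

# Kernel estimate of lambda_0(t): Gaussian kernel of bandwidth h applied to
# the warped storm times, normalized by K = 1 / sum(Q_j) with
# Q_j = D_j * exp(beta * X_j). Inputs here are illustrative only; each
# event time contributes one kernel bump (periodization not shown).

def lambda0_hat(t, event_times, Q, h=0.035):
    K = 1.0 / sum(Q)
    phi = lambda s: math.exp(-s * s / (2 * h * h)) / (math.sqrt(2 * math.pi) * h)
    return K * sum(phi(t - s) for s in event_times)
```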
See [@Bowman1984] or [@Hall1983] for more details.\ \ Remark: as indicated in Section \[section: precise\_proba\], the intensity estimated by the kernel method does not correspond directly to the intensity we want to evaluate. Indeed, the estimated intensity corresponds to the probability $P_1$ that a storm of level 400 begins at time $t$. Hence we apply a correction, multiplying $\widehat {\lambda_0(t)}$ by 29/23.\ Thus, we obtain the approximate confidence interval for $\lambda_0(t)$: $$\lambda_0(t) \in \left[ \widehat{\lambda_0 (t)} \pm 1.96 \sqrt{\frac{1}{\sum Q_j}\frac{\widehat{\lambda_0 (t)}}{2\sqrt \pi h}} \right]$$ Results ======= Instantaneous intensity ----------------------- The plot in Figure \[fig: lambda0\_111\] shows the estimate $\widehat {\lambda_0(t)}$ for a low level of 111, with its confidence area (i.e. the intensity for all storms of level greater than or equal to 111). The bandwidth parameter, selected by cross-validation, is equal to 0.035. As expected, the basic intensity is higher during the second half of the cycle. One can also see a significant increase near zero on the x-axis, highlighting the difference between the two halves of a solar cycle. ![Estimated instantaneous intensity (years$^{-1}$) of the storms of level greater or equal to 111, for a mean solar activity of 146.7[]{data-label="fig: lambda0_111"}](Fig4.jpg){width="8cm" height="6.5cm"} $P_{400}$ and $\beta$ --------------------- For $\widehat{P_{400}}$, the results obtained for the different low levels are gathered in Table \[tab: p400\].

  Low level             111                       132                       154
  --------------------- ------------------------- ------------------------- -------------------------
  $\widehat{P_{400}}$   0.031384                  0.041905                  0.059299
  95 % C.I.             \[0.018477 ; 0.044291\]   \[0.024765 ; 0.059045\]   \[0.035266 ; 0.083333\]

  : $\widehat{P_{400}}$ (the probability for a high storm to grow into a storm of level 400) and 95 % confidence intervals for each low level[]{data-label="tab: p400"}

\ With a low level of 111, the estimation of $\beta$ gives: $$\hat\beta = 0.0059651\ \textrm{ with the 95\% confidence interval } \ [ 0.0035873 ; 0.0083429]$$ Although this value seems small, the significance of $ \hat\beta$ has been established by a likelihood ratio test. Testing $\beta=0$ against $\beta=\hat\beta$ returns a p-value of $7.02 \times 10^{-7}$. Thus, the solar activity index $X$ affects the number of storms occurring during a cycle. Graphically, the influence of the solar activity index on the number of storms per cycle is visible in Figure \[NbO\_AS\_111\].\ ![Total number of storms per cycle for a low level of 111 according to the solar activity (centered)[]{data-label="NbO_AS_111"}](Fig5.jpg){width="8cm" height="6.5cm"} Instantaneous intensity: extrapolation to level 400 and relative risk {#section : extrapol} --------------------------------------------------------------------- The extrapolation to storms of extreme level 400 is made by multiplying by $\widehat{P_{400}}$ (with its confidence interval). We obtain the final intensity shown in Figure \[fig: intensite\_111\]. This curve corresponds to the occurrence intensity of extreme storms for a solar cycle with a mean solar activity of 146.7. Recall that in the equation $$\lambda_j(t) = \lambda_0(t) D_j \exp ( \beta X_j)$$ the risk factor is $\exp ( \beta X_j)$. Then, using $\hat\beta$, we can evaluate the relative risk for a cycle with a given solar activity index. For example, compared to the average level of solar activity (146.7), a cycle with a high solar activity of 180 has a relative risk of $\exp(33.3 \times 0.0059651) = 1.22$.
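The relative-risk computation quoted above can be reproduced directly:

```python
import math

# Relative risk for a cycle with solar activity 180 versus the mean
# activity 146.7, using the estimate beta = 0.0059651 from the text.
beta_hat = 0.0059651
relative_risk = math.exp((180 - 146.7) * beta_hat)
assert abs(relative_risk - 1.22) < 0.005
```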
![Instantaneous intensity (years$^{-1}$), with confidence interval, of the storms of level 400 obtained by extrapolation from the low level 111, for a mean solar activity of 146.7. In dash-dotted line the empirical frequency of storms of level 400[]{data-label="fig: intensite_111"}](Fig6.jpg){width="8cm" height="6.5cm"} Method stability ---------------- The results presented in the previous sections are given for a fixed low level (of 111). This raises the question of the model's sensitivity to this parameter. The stability of the method can be evaluated by testing its robustness to a change of low level. The results for two other low levels, 132 and 154, are given in Figure \[fig: intensite\_132\_154\]. The last two curves seem smoother, but this is partly due to the bandwidth parameter, which is now equal to 0.045 (still selected by cross-validation). For more precision, see Figure \[fig: comp3lambda\], where the three instantaneous intensity curves are plotted together. One can see that there is no significant difference between the three curves and that the method is rather stable. ![Similar to Figure \[fig: intensite\_111\] with a low level of 132 (left) and 154 (right)[]{data-label="fig: intensite_132_154"}](Fig7.jpg "fig:"){width="6.5cm" height="5cm"}![Similar to Figure \[fig: intensite\_111\] with a low level of 132 (left) and 154 (right)[]{data-label="fig: intensite_132_154"}](Fig8.jpg "fig:"){width="6.5cm" height="5cm"} ![Instantaneous intensity (years$^{-1}$) of the storms of level 400 obtained by extrapolation from the low levels 111 (plain line), 132 (dotted line) and 154 (dashed line)[]{data-label="fig: comp3lambda"}](Fig9.jpg){width="8cm" height="6.5cm"} A model extension ----------------- In an alternative approach, we consider the gradient of a storm to characterize its strength (instead of the ap index level). Gradients are calculated over one time step (3 h), and the storm gradient is defined as the maximal gradient attained during a storm.
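The storm-gradient definition above can be sketched as follows (taking absolute one-step differences is our assumption; the text does not fix a sign convention):

```python
# Storm gradient of a cluster of consecutive ap values: the maximal
# one-step (3 h) variation attained during the storm. Absolute differences
# are an assumption made for this sketch.

def storm_gradient(ap_cluster):
    return max(abs(b - a) for a, b in zip(ap_cluster, ap_cluster[1:]))
```

A storm whose ap index jumps from 10 to 120 in one step thus has a gradient of 110, even though its level stays below 132.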
This approach was set up after observing storms with low levels (below an ap index of 111) but strong effects due to fast variations of the ap index. We carried out the same study with this new definition of storm strength. The extreme gradient levels are those greater than 100, and the low one is 35. The estimation of $\beta$ gives $$\hat\beta = 0.0053499\ \textrm{ with the confidence interval } \ [ 0.0038128 ; 0.006887]$$ These values are similar to those obtained with the ap index. The estimated intensity for the storms of extreme gradient is plotted in Figure \[fig: intensite\_grad\]. One can see that the step between the two halves of the cycle is stronger. ![Instantaneous intensity (years$^{-1}$), with confidence interval, of the storms with extreme gradient ($\geq$ 100) obtained by extrapolation from the low gradient level 35, for a mean solar activity of 146.7. In dash-dotted line the empirical frequency of storms with an extreme gradient[]{data-label="fig: intensite_grad"}](Fig10.jpg){width="8cm" height="6.5cm"} We should point out that the use of the gradient has one disadvantage. Since the ap index represents a maximum over a 3-hour period, the two ap index values used for the gradient calculation can be separated by nearly 6 hours or by only a few minutes. The actual dates of these values are not known, and the gradient is calculated using a 3-hour time step. However, the calculated gradient gives an approximation of the variation speed of the ap index. Moreover, since the gradient is used analogously to the ap index, the original model remains appropriate here. Conclusion ========== This study highlights that the intensity of magnetic storm occurrence depends strongly on the position in the solar cycle. The probability is higher during the second half of the cycle.
The solar activity also influences this intensity, and a given activity index allows us to express a relative risk (compared to a cycle with the average solar activity level of 146.7). The analysis has been performed for different low levels in order to check its stability. The first results are given for a low level of 111, and a comparison is made using two other low levels: 132 and 154. The similarity of the shapes of the three curves attests to the stability of the method. The model we built also allows us to make predictions for the current solar cycle. For the beginning date of this 24th cycle, we chose December 2008, a date accepted by a panel of experts (although there is no consensus). For the solar activity index, we used the NOAA prediction, with a maximum of 87.9 reached in November 2013 [@noaaPrev24]. The end of the 24th cycle is estimated around December 2019 or January 2020. The estimation (from the beginning to the present) and the prediction are represented in Figure \[fig: prev24\] (plain line).\ ![Estimation and prediction of instantaneous intensity (years$^{-1}$) of the storms of level 400 for the 24th solar cycle, with confidence interval. For comparison, in dash dotted gray, the same intensity for a cycle with a mean solar activity index of 146.7[]{data-label="fig: prev24"}](Fig11.jpg){width="8.5cm" height="7cm"} Appendix ======== Maximum likelihood estimator of $\beta$ {#annexe} ======================================= The use of $N_j$ instead of $N_j(t)$ for the estimation of $\beta$ raises the question of the sufficiency of this statistic. Consider only one cycle and the model: $$N(t) \sim \mathcal P \left( \lambda_0 (t)dt \ D \exp(\beta X) \right)\quad \textrm{ for } t \in [-0.5 , 0.5 ]$$ Then, consider $\Delta_1, \Delta_2, ... , \Delta_n$, a partition of \[-0.5, 0.5\] into $n$ sub-segments. For $i=1...n$, denote by $N(\Delta_i)= \int_{\Delta_i} dN(t)$ the number of events in $\Delta_i$.
Given that $N(t)$ is a Poisson process, we know that the $\{N(\Delta_i), i=1...n\}$ are independent variables and that $N(\Delta_i) \sim \mathcal P \left( \left[ \int_{\Delta_i} \lambda_0 (s)ds \right] \ D \ \exp(\beta X) \right)$. We note $C_i = \int_{\Delta_i} \lambda_0 (s)ds \ D$. Then the log-likelihood with respect to the counting measure (in which we integrate the weights $1/N(\Delta_i)!\ $) is $$- \exp(\beta X) \sum_{i=1}^n C_i + \sum_{i=1}^n [ N(\Delta_i) \log(C_i)] + \beta X \sum_{i=1}^n N(\Delta_i)$$ We see that $\beta$ is linked to the $N(\Delta_i)$ only through the term $\sum_{i=1}^n N(\Delta_i)$. Hence there is no loss of information in using the total number of events per cycle for the estimation of $\beta$. We can now compute the maximum likelihood estimator. For the $j^{th}$ cycle, the likelihood with respect to the counting measure with weights $1/N_j! \, $ is, denoting $\alpha = \int_{-1/2}^{1/2} \lambda_0 (s)ds$, $$\exp \left( -\alpha \, D_j \exp(\beta X_j) \right) (\alpha \, D_j \exp(\beta X_j))^{N_j}$$ and the log-likelihood for all $J$ cycles is: $$-\alpha \sum_{j=1}^J D_j \exp(\beta X_j) + \log(\alpha) \sum_{j=1}^J N_j + \sum_{j=1}^J N_j \log(D_j) + \beta \sum_{j=1}^J N_jX_j$$ The derivatives in $\alpha$ and $\beta$ respectively give: $$\sum_{j=1}^J D_j \exp(\beta X_j) = \frac{\sum_{j=1}^J N_j}{\alpha}$$ and $$\alpha \sum_{j=1}^J D_j X_j \exp(\beta X_j) = \sum_{j=1}^J N_j X_j$$ Replacing $\alpha$ by the solution of the first equation, we obtain: $$\sum_{j=1}^J D_j X_j \exp(\beta X_j) \sum_{j=1}^J N_j = \sum_{j=1}^J D_j \exp(\beta X_j) \sum_{j=1}^J N_j X_j$$ This implicit equation can only be solved numerically (we use the secant method).
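A sketch of the secant-method solution of this implicit equation, on a small synthetic data set built so that the true $\beta$ is known (the data below are ours, not the paper's):

```python
import math

# Secant-method root of
#   f(b) = (sum_j D_j X_j e^{bX_j}) (sum_j N_j) - (sum_j D_j e^{bX_j}) (sum_j N_j X_j)
# i.e. the implicit maximum-likelihood equation for beta derived above.

def solve_beta(D, X, N, b0=0.0, b1=0.1, tol=1e-10):
    def f(b):
        e = [d * math.exp(b * x) for d, x in zip(D, X)]
        return (sum(ei * xi for ei, xi in zip(e, X)) * sum(N)
                - sum(e) * sum(n * x for n, x in zip(N, X)))
    while abs(b1 - b0) > tol:
        f0, f1 = f(b0), f(b1)
        if f1 == f0:
            break
        b0, b1 = b1, b1 - f1 * (b1 - b0) / (f1 - f0)
    return b1
```

If the counts $N_j$ are taken exactly equal to their Poisson means $\alpha D_j e^{\beta X_j}$, the solver recovers the $\beta$ used to build them.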
We can also compute the Fisher information matrix: $$\left( \begin{array}{cc} \alpha^{-1} \sum_{j=1}^J D_j \exp(\beta X_j) & \sum_{j=1}^J D_j X_j \exp(\beta X_j) \\ \sum_{j=1}^J D_j X_j \exp(\beta X_j) & \alpha \sum_{j=1}^J D_j X^2_j \exp(\beta X_j) \\ \end{array} \right)$$ The (2,2) coefficient of the inverse of the Fisher information matrix provides the variance of $\hat \beta$, used to construct a confidence interval. Chi-square test {#annexe: chi2} ================= The chi-square independence test is performed a posteriori. Once the instantaneous intensity is estimated, the time interval $[-0.5,0.5]$ is separated into two parts, of low and high intensity. The intensity threshold for this partition is the empirical frequency of extreme storms, about 0.29 storms per year (horizontal dash-dotted line in Figure \[fig: intensite\_111\]). The two parts correspond to the times where the instantaneous intensity is respectively below and above this threshold. Then, the chi-square test is applied to the proportions of extreme-level storms in each area and returns a p-value of 0.26, leading to acceptance of the independence hypothesis. The same test applied with different thresholds for the partition into two areas (0.40, 0.50 and 0.60) always leads to the same conclusion.
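The chi-square independence test above can be sketched with a hand-rolled 2x2 statistic (the counts used in the example are hypothetical; the paper's actual contingency counts are not given):

```python
# Chi-square independence statistic for a contingency table: rows could be
# the low/high-intensity parts of the cycle, columns extreme vs non-extreme
# storms. For a 2x2 table (1 degree of freedom), independence is rejected
# at the 5% level when the statistic exceeds 3.841.

def chi2_stat(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(len(table)) for j in range(len(table[0])))
```

A perfectly balanced table yields a statistic of 0 (independence clearly accepted); a p-value would additionally require the chi-square distribution function, e.g. `scipy.stats.chi2_contingency`.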
--- abstract: 'Fragmentation functions for eta mesons are extracted at next-to-leading order accuracy of QCD in a global analysis of data taken in electron-positron annihilation and proton-proton scattering experiments. The obtained parametrization is in good agreement with all data sets analyzed and can be utilized, for instance, in future studies of double-spin asymmetries for single-inclusive eta production. The Lagrange multiplier technique is used to estimate the uncertainties of the fragmentation functions and to assess the role of the different data sets in constraining them.' author: - 'Christine A. Aidala' - Frank Ellinghaus - Rodolfo Sassot - 'Joseph P. Seele' - Marco Stratmann title: Global Analysis of Fragmentation Functions for Eta Mesons --- [^1] Introduction ============ Fragmentation functions (FFs) are a key ingredient in the perturbative QCD (pQCD) description of processes with an observed hadron in the final state. Similar to parton distribution functions (PDFs), which account for the universal partonic structure of the interacting hadrons, FFs encode the non-perturbative details of the hadronization process [@ref:ffdef]. When combined with the perturbatively calculable hard scattering cross sections, FFs extend the ideas of factorization to a much wider class of processes ranging from hadron production in electron-positron annihilation to semi-inclusive deep-inelastic scattering (SIDIS) and hadron-hadron collisions [@ref:fact]. Over the last years, our knowledge of FFs has improved dramatically [@ref:ff-overview] from first rough models of quark and gluon hadronization probabilities [@ref:feynman] to rather precise global analyses at next-to-leading order (NLO) accuracy of QCD, including estimates of uncertainties [@ref:dsspion; @ref:dssproton; @ref:akk; @ref:hirai].
While the most accurate and clean information used to determine FFs comes from single-inclusive electron-positron annihilation (SIA) into hadrons, such data do not allow disentanglement of quark from anti-quark fragmentation and constrain the gluon fragmentation only weakly through scaling violations and sub-leading NLO corrections. Modern global QCD analyses [@ref:dsspion; @ref:dssproton] utilize to the extent possible complementary measurements of hadron spectra obtained in SIDIS and hadron-hadron collisions to circumvent these shortcomings and to constrain FFs for all parton flavors individually. Besides the remarkable success of the pQCD approach in describing all the available data simultaneously, the picture emerging from such comprehensive studies reveals interesting and sometimes unexpected patterns between the FFs for different final-state hadrons. For instance, the strangeness-to-kaon fragmentation function obtained in Ref. [@ref:dsspion] is considerably larger than those assumed previously in analyses of SIA data alone [@ref:kretzer]. This has a considerable impact on the extraction of the amount of strangeness polarization in the nucleon [@ref:dssv] from SIDIS data, which in turn is linked to the fundamental question of how the spin of the nucleon is composed of intrinsic spins and orbital angular momenta of quarks and gluons. Current analyses of FFs comprise pions, kaons, protons [@ref:dsspion; @ref:dssproton; @ref:akk; @ref:hirai], and lambdas [@ref:dsv; @ref:akk] as final-state hadrons. In this respect, FFs are a much more versatile tool to explore non-perturbative aspects of QCD than PDFs where studies are mainly restricted to protons [@ref:cteq; @ref:otherpdfs]. In the following, we extend the global QCD analyses of FFs at NLO accuracy as described in Refs. [@ref:dsspion; @ref:dssproton] to eta mesons and estimate the respective uncertainties with the Lagrange multiplier method [@ref:lagrange; @ref:dsspion; @ref:dssv]. 
We obtain a parametrization from experimental data for single-inclusive eta meson production in SIA at various center-of-mass system (c.m.s.) energies $\sqrt{S}$ and proton-proton collisions at BNL-RHIC in a wide range of transverse momenta $p_T$. We note two earlier determinations of eta FFs in Refs. [@ref:greco] and [@ref:indumathi] which are based on normalizations taken from a Monte Carlo event generator and $SU(3)$ model estimates, respectively. In neither case is a parametrization available. The newly obtained FFs provide fresh insight into the hadronization process by comparing to FFs for other hadrons. In particular, the peculiar wave function of the eta, $|\eta\rangle\simeq |u\bar{u}+d\bar{d}-2s\bar{s}\rangle$, with all light quarks and anti-quarks being present, may reveal new patterns between FFs for different partons and hadrons. The similar mass range of kaons and etas, $m_{K^0}\simeq 497.6\,\mathrm{MeV}$ and $m_{\eta}\simeq 547.9\,\mathrm{MeV}$, respectively, and the presence of strange quarks in both wave functions make comparisons between the FFs for these mesons especially relevant. Of specific interest is also the apparently universal ratio of eta to neutral pion yields for $p_T\gtrsim 2\,\mathrm{GeV}$ in hadron-hadron collisions across a wide range of c.m.s. energies, see, e.g., Ref. [@ref:phenix2006], and how this is compatible with the extracted eta and pion FFs. In addition, the availability of eta FFs permits for the first time NLO pQCD calculations of double-spin asymmetries for single-inclusive eta meson production at high $p_T$ which have been measured at RHIC [@ref:ellinghaus] recently. Such calculations are of topical interest for global QCD analyses of the spin structure of the nucleon [@ref:dssv]. Finally, the set of eta FFs also provides the baseline for studies of possible modifications in a nuclear medium [@ref:nuclreview; @ref:nffs], for instance, in deuteron-gold collisions at RHIC [@ref:phenix2006].
The remainder of the paper is organized as follows: next, we give a brief outline of the analysis. In Sec. \[sec:results\] we present the results for the eta FFs, compare to data, and discuss our estimates of uncertainties. We conclude in Sec. \[sec:conclusions\].

Outline of the Analysis\[sec:outline\]
======================================

Technical framework and parametrization \[subsec:outline\]
----------------------------------------------------------

The pQCD framework at NLO accuracy for the scale evolution of FFs [@ref:evol] and single-inclusive hadron production cross sections in SIA [@ref:eenlo] and hadron-hadron collisions [@ref:ppnlo] has been in place for quite some time and does not need to be repeated here. Likewise, the global QCD analysis of the eta FFs itself follows closely the methods outlined in a corresponding fit of pion and kaon FFs in Ref. [@ref:dsspion], where all the details can be found. As in [@ref:dsspion; @ref:dssproton] we use the Mellin technique as described in [@ref:mellin; @ref:dssv] to implement all NLO expressions. Here, we highlight the differences from similar analyses of pion and kaon FFs and discuss their consequences for our choice of the functional form parameterizing the FFs of the eta meson. As compared to lighter hadrons, in particular pions, data with identified eta mesons are less abundant and less precise. Most noticeable is the lack of any experimental information from SIDIS so far, which provided the most important constraints on the separation of contributions from $u$, $d$, and $s$ (anti-)quarks fragmenting into pions and kaons [@ref:dsspion]. Since no flavor-tagged data exist for SIA either, it is inevitable that a fit for eta FFs has considerably less discriminating power.
Hence, instead of extracting the FFs for the light quarks and anti-quarks individually, we parametrize the flavor singlet combination at an input scale of $\mu_0=1\,\mathrm{GeV}$, assuming that all FFs are equal, i.e., $D^{\eta}_u=D^{\eta}_{\bar{u}}=D^{\eta}_d=D^{\eta}_{\bar{d}}= D^{\eta}_s=D^{\eta}_{\bar{s}}$. We use the same flexible functional form as in Ref. [@ref:dsspion] with five fit parameters, $$\begin{aligned} \label{eq:ansatz} D_{i}^{\eta}(z,\mu_0) = N_{i}\, \frac{z^{\alpha_{i}}(1-z)^{\beta_{i}} [1+\gamma_{i} (1-z)^{\delta_{i}}]}{B[2+\alpha_{i},\beta_{i}+1]+\gamma_i B[2+\alpha_{i},\beta_{i}+\delta_{i}+1]}\;,\end{aligned}$$ where $z$ is the fraction of the four-momentum of the parton taken by the eta meson and $i=u,\bar{u},d,\bar{d},s,\bar{s}$. $B[a,b]$ denotes the Euler beta function with $a$ and $b$ chosen such that $N_i$ is normalized to the second moment $\int_0^1 zD_i^{\eta}(z,\mu_0)\, dz$ of the FFs. Although the assumption of equal light quark FFs seems to be rather restrictive at first, such an ansatz can be anticipated in view of the wave function of the eta meson. One might expect a difference between strange and non-strange FFs, though, due to the larger mass of strange quarks, i.e., that the hadronization of $u$ or $d$ quarks is somewhat less likely as they need to pick up an $s\bar{s}$ pair from the vacuum to form the eta. Indeed, a “strangeness suppression” is found for kaon FFs [@ref:dsspion] leading, for instance, to $D_s^{K^{-}}>D_{\bar{u}}^{K^{-}}$. In the case of the eta wave function one can argue, however, that also a fragmenting $s$ quark needs to pick up an $s\bar{s}$ pair from the vacuum. Nevertheless, we have explicitly checked that the introduction of a second independent parameterization as in (\[eq:ansatz\]) to discriminate between the strange and non-strange FFs does not improve the quality of the fit to the currently available data.
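The normalization built into Eq. (\[eq:ansatz\]) can be checked numerically: because the denominator is exactly the beta-function integral of the numerator weighted by $z$, the second moment $\int_0^1 z D_i^{\eta}(z,\mu_0)\,dz$ reduces to $N_i$. A small self-contained sketch (our own illustration, not the fit code), using the light-quark parameters that the fit returns ($N=0.038$, $\alpha=1.372$, $\beta=1.487$, $\gamma=2000.0$, $\delta=34.03$):

```python
import math

def euler_beta(a, b):
    # Euler beta function B[a, b] via the gamma function
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def D_eta(z, N, alpha, beta, gamma_, delta):
    # Eq. (ansatz): the beta-function denominator makes N equal to the
    # second moment int_0^1 z D(z) dz of the FF
    norm = euler_beta(2 + alpha, beta + 1) + gamma_ * euler_beta(2 + alpha, beta + delta + 1)
    return N * z**alpha * (1 - z)**beta * (1 + gamma_ * (1 - z)**delta) / norm

# midpoint-rule check of the second moment for the fitted light-quark parameters
N, a, b, g, d = 0.038, 1.372, 1.487, 2000.0, 34.03
n = 100000
second_moment = sum(((k + 0.5) / n) * D_eta((k + 0.5) / n, N, a, b, g, d)
                    for k in range(n)) / n
# second_moment ≈ 0.038 = N, by construction
```

The same routine works for the gluon row of the parameter table, for which $\gamma_g=0$ removes the second beta-function term.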
Clearly, SIDIS data would be required to further refine our assumptions in the light quark sector in the future. The gluon-to-eta fragmentation $D_g^{\eta}$ is mainly constrained by data from RHIC rather than by scaling violations in SIA. As for pion and kaon FFs in [@ref:dsspion], we find that a simplified functional form with $\gamma_g=0$ in Eq. (\[eq:ansatz\]) provides enough flexibility to accommodate all data. Turning to the fragmentation of heavy charm and bottom quarks into eta mesons, we face the problem that none of the available data sets constrains their contributions significantly. Here, the lack of any flavor-tagged data from SIA hurts most as hadron-hadron cross sections at RHIC energies do not receive any noticeable contributions from heavy quark fragmentation. Introducing independent FFs for charm and bottom at their respective mass thresholds improves the overall quality of the fit but their parameters are essentially unconstrained. For this reason, we checked that taking the shape of the much better constrained charm and bottom FFs for pions, kaons, protons, and residual charged hadrons from [@ref:dsspion; @ref:dssproton], but allowing for different normalizations, leads to fits of comparable quality with only two additional free parameters. The best fit is obtained for the charm and bottom FFs from an analysis of residual charged hadrons [@ref:dssproton], i.e., hadrons other than pions, kaons, and protons, and hence we use $$\begin{aligned} \label{eq:ansatz-hq} D_{c}^{\eta}(z,m_c) &=& D_{\bar{c}}^{\eta}(z,m_c) = N_c \,D_{c}^{res}(z,m_c)\;, \nonumber \\ D_{b}^{\eta}(z,m_b) &=& D_{\bar{b}}^{\eta}(z,m_b) = N_b \,D_{b}^{res}(z,m_b)\;.\end{aligned}$$ $N_c$ and $N_b$ denote the normalizations for the charm and bottom fragmentation probabilities at their respective initial scales, to be constrained by the fit to data. The parameters specifying the $D_{c,b}^{res}$ can be found in Tab. III of Ref. [@ref:dssproton]. The FFs in Eq.
(\[eq:ansatz-hq\]) are included discontinuously as massless partons in the scale evolution of the FFs above their $\overline{\mathrm{MS}}$ thresholds $\mu=m_{c,b}$ with $m_{c}=1.43\,\mathrm{GeV}$ and $m_{b}=4.3\,\mathrm{GeV}$ denoting the mass of the charm and bottom quark, respectively. In total, the parameters introduced in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]) to describe the FFs of quarks and gluons into eta mesons add up to 10. They are determined by a standard $\chi^2$ minimization for $N=140$ data points, where $$\label{eq:chi2} \chi^2=\sum_{j=1}^N \frac{(T_j-E_j)^2}{\delta E_j^2}\;.$$ $E_j$ represents the experimentally measured value of a given observable, $\delta E_j$ its associated uncertainty, and $T_j$ is the corresponding theoretical estimate calculated at NLO accuracy for a given set of parameters in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]). For the experimental uncertainties $\delta E_j$ we add the statistical and systematic errors in quadrature for the time being.

Data sets included in the fit\[subsec:data\]
--------------------------------------------

A total of 15 data sets is included in our analysis. We use all SIA data with $\sqrt{S}>10\,\mathrm{GeV}$: HRS [@ref:hrs] and MARK II [@ref:mark2] at $\sqrt{S}=29\,\mathrm{GeV}$, JADE [@ref:jade1; @ref:jade2] and CELLO [@ref:cello] at $\sqrt{S}=34-35\,\mathrm{GeV}$, and ALEPH [@ref:aleph1; @ref:aleph2; @ref:aleph3], L3 [@ref:l31; @ref:l32], and OPAL [@ref:opal] at $\sqrt{S} = M_Z = 91.2\,\mathrm{GeV}$. Preliminary results from BABAR [@ref:babar] at $\sqrt{S}=10.54\,\mathrm{GeV}$ are also taken into account. The availability of $e^+e^-$ data in approximately three different energy regions of $\sqrt{S} \simeq 10,$ 30, and $90\,\mathrm{GeV}$ helps to constrain the gluon fragmentation function from scaling violations. Also, the appropriate electroweak charges in the inclusive process $e^+e^-\to (\gamma,Z)\rightarrow \eta X$ vary with energy, see, e.g., App. A of Ref.
[@ref:dsv] for details, and hence control which combinations of quark FFs are probed. Only the CERN-LEP data taken on the $Z$ resonance receive significant contributions from charm and bottom FFs. Given that the range of applicability for FFs is limited to medium-to-large values of the energy fraction $z$, as discussed, e.g., in Ref. [@ref:dsspion], data points with $z<0.1$ are excluded from the fit. Whenever a data set is expressed in terms of the scaled three-momentum of the eta meson, i.e., $x_p\equiv 2p_{\eta}/\sqrt{S}$, we convert it to the usual scaling variable $z=x_p/\beta$, where $\beta=p_{\eta}/E_{\eta}=\sqrt{1-m_{\eta}^2/E_{\eta}^2}$. In addition to the cut $z>0.1$, we also impose that $\beta>0.9$ in order to avoid kinematic regions where mass effects become increasingly relevant. The cut on $\beta$ mainly affects the data at low $z$ from BABAR [@ref:babar]. In the case of single-inclusive eta meson production in hadron-hadron collisions, we include data sets from PHENIX at $\sqrt{S}=200\,\mathrm{GeV}$ at mid-rapidity [@ref:phenix2006; @ref:phenix-run6] in our global analysis. The overall scale uncertainty of $9.7\%$ in the PHENIX measurement is not included in $\delta E_j$ in Eq. (\[eq:chi2\]). All data points have a transverse momentum $p_T$ of at least $2\,\mathrm{GeV}$. As we shall demonstrate below, these data provide an invaluable constraint on the quark and gluon-to-eta fragmentation probabilities. In general, hadron collision data probe FFs at fairly large momentum fractions $z\gtrsim 0.5$, see, e.g., Fig. 6 in Ref. [@ref:lhcpaper], complementing the information available from SIA. The large range of $p_T$ values covered by the recent PHENIX data [@ref:phenix-run6], $2\le p_T\le 20\,\mathrm{GeV}$, also helps to constrain FFs through scaling violations. As in other analyses of FFs [@ref:dsspion; @ref:dssproton] we do not include eta meson production data from hadron-hadron collision experiments at much lower c.m.s.
energies, like Fermilab-E706 [@ref:e706]. It is known that theoretical calculations at NLO accuracy do not reproduce such data very well without invoking resummations of threshold logarithms to all orders in pQCD [@ref:resum].

Results\[sec:results\]
======================

In this Section we discuss in detail the results of our global analysis of FFs for eta mesons at NLO accuracy of QCD. First, we shall present the parameters of the optimum fits describing the $D_i^{\eta}$ at the input scale. Next, we compare our fits to the data used in the analysis and give $\chi^2$ values for each individual set of data. Finally, we estimate the uncertainties in the extraction of the $D_i^{\eta}$ using the Lagrange multiplier technique and discuss the role of the different data sets in constraining the FFs.

Optimum fit to data \[subsec:fit\]
----------------------------------

In Tab. \[tab:para\] we list the set of parameters specifying the optimum fit of eta FFs at NLO accuracy in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]) at our input scale $\mu_0=1\,\mathrm{GeV}$ for the light quark flavors and the gluon. Charm and bottom FFs are included at their mass thresholds $\mu_0=m_c$ and $\mu_0=m_b$, respectively [@ref:fortran]. The data sets included in our global analysis, as discussed in Sec. \[subsec:data\], and the individual $\chi^2$ values are presented in Tab. \[tab:data\].

  Flavor $i$                        $N_i$   $\alpha_i$   $\beta_i$   $\gamma_i$   $\delta_i$
  --------------------------------- ------- ------------ ----------- ------------ ------------
  $u,\bar{u},d,\bar{d},s,\bar{s}$   0.038   1.372        1.487       2000.0       34.03
  $g$                               0.070   10.00        9.260       0            0
  $c,\bar{c}$                       1.051   -            -           -            -
  $b,\bar{b}$                       0.664   -            -           -            -

  : \[tab:para\]Parameters describing the NLO FFs for eta mesons, $D_i^{\eta}(z,\mu_0)$, in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]) at the input scale $\mu_0=1\,\mathrm{GeV}$. Inputs for the charm and bottom FFs refer to $\mu_0=m_c$ and $\mu_0=m_b$, respectively.
We note that the quoted number of points and $\chi^2$ values are based only on fitted data, i.e., $z>0.1$ and $\beta>0.9$ in SIA. As can be seen, for most sets of data their partial contribution to the $\chi^2$ of the fit is typically of the order of the number of data points or even smaller. The most notable exceptions are the HRS [@ref:hrs] and ALEPH ’02 [@ref:aleph3] data, where a relatively small number of points contributes a significant $\chi^2$, which in turn leads to a total $\chi^2$ per degree of freedom (d.o.f.) of about 1.6 for the fit. We have checked that these more problematic sets of data could be removed from the fit without reducing its constraining power or changing the obtained $D_i^{\eta}$ significantly. The resulting fairly large $\chi^2/d.o.f.$ due to a few isolated data points is a common characteristic of all extractions of FFs made so far [@ref:dsspion; @ref:dssproton; @ref:akk; @ref:hirai; @ref:kretzer] for other hadron species. The overall excellent agreement of our fit with experimental results for inclusive eta meson production in SIA and the tension with the HRS and ALEPH ’02 data is also illustrated in Fig. \[fig:sia-eta\]. It is worth pointing out that both ALEPH ’00 [@ref:aleph2] and BABAR [@ref:babar] data are well reproduced for all momentum fractions $z$ in spite of being at opposite ends of the c.m.s. energy range covered by experiments.
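The $\chi^2$ bookkeeping of Eq. (\[eq:chi2\]) and the quoted "about 1.6" per degree of freedom can be reproduced in a few lines (a sketch; `theory`, `data`, and `errors` are placeholders for $T_j$, $E_j$, and $\delta E_j$):

```python
def chi2(theory, data, errors):
    # Eq. (chi2): squared residuals weighted by the combined errors
    return sum((t - e) ** 2 / de ** 2 for t, e, de in zip(theory, data, errors))

# the totals quoted in the tables: 140 fitted points, 10 free fit parameters
chi2_total, n_points, n_params = 205.9, 140, 10
chi2_per_dof = chi2_total / (n_points - n_params)  # ≈ 1.58, the "about 1.6" in the text
```

With 10 parameters determined from 140 points, the number of degrees of freedom is 130, so $205.9/130 \simeq 1.58$.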
  ------------------------------------- ------------- ----------
  Experiment                            data points   $\chi^2$
                                        fitted
  BABAR [@ref:babar]                    18            8.1
  HRS [@ref:hrs]                        13            51.6
  MARK II [@ref:mark2]                  7             3.8
  JADE ’85 [@ref:jade1]                 1             9.6
  JADE ’90 [@ref:jade2]                 3             1.2
  CELLO [@ref:cello]                    4             1.1
  ALEPH ’92 [@ref:aleph1]               8             2.0
  ALEPH ’00 [@ref:aleph2]               18            22.0
  ALEPH ’02 [@ref:aleph3]               5             61.6
  L3 ’92 [@ref:l31]                     3             5.1
  L3 ’94 [@ref:l32]                     8             10.5
  OPAL [@ref:opal]                      9             9.0
  PHENIX $2 \gamma$ [@ref:phenix2006]   12            4.1
  PHENIX $3 \pi$ [@ref:phenix2006]      6             2.9
  PHENIX ’06 [@ref:phenix-run6]         25            13.3
  [**TOTAL:**]{}                        140           205.9
  ------------------------------------- ------------- ----------

  : \[tab:data\]Data used in the global analysis of eta FFs, the individual $\chi^2$ values for each set, and the total $\chi^2$ of the fit.

Our fit compares very well with all data on high-$p_T$ eta meson production in proton-proton collisions from RHIC [@ref:phenix2006; @ref:phenix-run6]. The latest set of PHENIX data [@ref:phenix-run6] significantly extends the range in $p_T$ at much reduced uncertainties and provides stringent constraints on the FFs as we shall demonstrate below. The normalization and trend of the data are nicely reproduced over a wide kinematical range as can be inferred from Figs. \[fig:hadronic2g\]-\[fig:hadronic06\]. In each case, the invariant cross section for $pp\rightarrow \eta X$ at $\sqrt{S}=200\,\mathrm{GeV}$ is computed at NLO accuracy, averaged over the pseudorapidity range of PHENIX, $|\eta|\le 0.35$, and using the NLO set of PDFs from CTEQ [@ref:cteq] along with the corresponding value of $\alpha_s$. Throughout our analysis we choose the transverse momentum of the produced eta as both the factorization and the renormalization scale, i.e., $\mu_f=\mu_r=p_T$. Since the cross sections drop over several orders of magnitude in the given range of $p_T$, we also show the ratio (data-theory)/theory in the lower panels of Figs. \[fig:hadronic2g\]-\[fig:hadronic06\] to facilitate the comparison between data and our fit.
One notices the trend of the theoretical estimates to overshoot the data near the lowest values of transverse momenta, $p_T\simeq 2\,\mathrm{GeV}$, which indicates that the factorized pQCD approach starts to fail. Compared to pion production at central pseudorapidities, see Fig. 6 in Ref. [@ref:dsspion], the breakdown of pQCD sets in at somewhat higher $p_T$, as is expected due to the larger mass of the eta meson. The shaded bands in Figs. \[fig:hadronic2g\]-\[fig:hadronic06\] are obtained with the Lagrange multiplier method, see Sec. \[subsec:uncert\] below, applied to each data point. They correspond to the maximum variation of the invariant cross section computed with alternative sets of eta FFs consistent with an increase of $\Delta \chi^2=1$ or $\Delta \chi^2=2\%$ in the total $\chi^2$ of the best global fit to all SIA and $pp$ data. In addition to the experimental uncertainties propagated to the extracted $D_i^{\eta}$, a large theoretical ambiguity is associated with the choice of the factorization and renormalization scales used in the calculation of the $pp\to \eta X$ cross sections. These errors are much more sizable than the experimental ones and very similar to those estimated for $pp\to\pi X$ in Fig. 6 of Ref. [@ref:dsspion]. As in the DSS analysis of pion and kaon FFs [@ref:dsspion], the choice $\mu_f=\mu_r=p_T$ and $\mu_f=\mu_r=\sqrt{S}$ in $pp$ collisions and SIA, respectively, leads to a nice global description of all data sets with a common universal set of eta FFs. Next, we shall present an overview of the obtained FFs $D_i^{\eta}(z,Q)$ for different parton flavors $i$ and compare them to FFs for other hadrons. The upper row of panels in Fig. \[fig:eta-ff-comp\] shows the dependence of the FFs on the energy fraction $z$ taken by the eta meson at a scale $Q$ equal to the mass of the $Z$ boson, i.e., $Q =M_Z$.
Recall that at our input scale $Q=\mu_0=1\,\mathrm{GeV}$ we assume that $D^{\eta}_u=D^{\eta}_{\bar{u}}=D^{\eta}_d=D^{\eta}_{\bar{d}}= D^{\eta}_s=D^{\eta}_{\bar{s}}$, which is preserved under scale evolution. At such a large scale $Q=M_Z$ the heavy quark FFs are of similar size, which is not too surprising as mass effects are negligible, i.e., $m_{c,b}\ll M_Z$. The gluon-to-eta fragmentation function $D_g^{\eta}$ is slightly smaller but rises towards smaller values of $z$. Overall, both the shape and the hierarchy of the different FFs $D_i^{\eta}$ are similar to those found, for instance, for pions; see Fig. 18 in [@ref:dsspion], with the exception of the “unfavored” strangeness-to-pion fragmentation function which is suppressed. In order to make the comparison to FFs for other hadrons more explicit, we show in the lower three rows of Fig. \[fig:eta-ff-comp\] the ratios of the obtained $D_i^{\eta}(z,M_Z)$ to the FFs for pions, kaons, and protons from the DSS analysis [@ref:dsspion; @ref:dssproton]. The eta and pion production yields are known to be consistent with a constant ratio of about a half in a wide range of c.m.s. energies in hadronic collisions for $p_T\gtrsim 2\,\mathrm{GeV}$, but the ratio varies from approximately 0.2 at $z\simeq 0.1$ to about 0.5 for $z\gtrsim 0.4$ in SIA [@ref:phenix2006]. It is interesting to see how these findings are reflected in the ratios of the eta and neutral pion FFs for the individual parton flavors. We find that $D^{\eta}_{u+\bar{u}}/D^{\pi^{0}}_{u+\bar{u}}$ follows closely the trend of the SIA data, as is expected since gluon fragmentation enters only at NLO in the cross section calculations. For strangeness, the ratio of eta to pion FFs increases towards larger $z$ because of the absence of strange quarks in the pion wave functions.
Inclusive hadron production at small-to-medium values of $p_T$ is known to be dominated by gluon fragmentation at relatively large values of momentum fraction $z$ [@ref:dsspion; @ref:lhcpaper], largely independent of the c.m.s. energy $\sqrt{S}$. In the relevant range of $z$, $0.4\lesssim z \lesssim 0.6$, the ratio $D_g^{\eta}/D_g^{\pi^{0}}$ resembles the constant ratio of roughly 0.5 found in the eta-to-pion production yields. Both at larger and smaller values of $z$ the $D^{\eta}_g$ is suppressed with respect to $D_g^{\pi^{0}}$. In general, one should keep in mind that FFs always appear in complicated convolution integrals in theoretical cross section calculations [@ref:eenlo; @ref:ppnlo], which complicates any comparison of cross section and fragmentation function ratios for different hadrons. The comparison to the DSS kaon FFs [@ref:dsspion] is shown in the panels in the third row of Fig. \[fig:eta-ff-comp\]. Most remarkable is the ratio of the gluon FFs, which is approximately constant, $D_g^{\eta}/D_g^K \simeq 2$, over a wide region in $z$ but drops below one for $z\gtrsim 0.6$. At large $z$, $D^{\eta}_{u+\bar{u}}$ tends to be almost identical to $D^{K}_{u+\bar{u}}$, while $D^{\eta}_{s+\bar{s}}$ resembles $D^{K}_{s+\bar{s}}$ only at low $z$. The latter result might be understood due to the absence of strangeness suppression for $D^{K}_{s+\bar{s}}$, whereas a fragmenting $s$ quark needs to pick up an $\bar{s}$ quark from the vacuum to form the eta meson. It should be noted, however, that kaon FFs have considerably larger uncertainties than pion FFs [@ref:dsspion], which makes the comparisons less conclusive. This is even more true for the proton FFs [@ref:dssproton]. Nevertheless, it is interesting to compare our $D_i^{\eta}$ to those for protons, which is done in the lower panels of Fig. \[fig:eta-ff-comp\]. As for kaons, we observe a rather flat behavior of the ratio $D_g^{\eta}/D_g^{p}$, which drops below one at larger values of $z$.
The corresponding ratios for the light quark FFs show the opposite trend and rise towards $z\to 1$. Regarding the relative sizes of the fragmentation probabilities for light quarks and gluons into the different hadron species, we find that eta FFs are suppressed w.r.t. pion FFs (except for strangeness), are roughly similar to those for kaons, and larger than the proton FFs. This can be qualitatively understood from the hierarchy of the respective hadron masses. For $z\gtrsim 0.6$, the lack of decisive constraints from data prevents one from drawing any conclusions in this kinematic region. As we have already discussed in Sec. \[subsec:outline\], due to the lack of any flavor tagged SIA data sensitive to the hadronization of charm and bottom quarks into eta mesons, we adopted the same functional form as for the fragmentation into residual charged hadrons [@ref:dssproton], i.e., hadrons other than pions, kaons, and protons. The fit favors a charm fragmentation almost identical to that for the residual hadrons ($N_c=1.058$) and a somewhat reduced distribution for bottom fragmentation ($N_b=0.664$). At variance with what is found for light quarks and gluons, after evolution, $D_{c+\bar{c}}^{\eta}$ and $D_{b+\bar{b}}^{\eta}$ differ significantly in size and shape from their counterparts for pions, kaons, and protons, as can also be inferred from Fig. \[fig:eta-ff-comp\]. Future data are clearly needed here for any meaningful comparison.

Estimates of uncertainties \[subsec:uncert\]
--------------------------------------------

Given the relatively small number of data points available for the determination of the $D_i^{\eta}$ as compared to global fits of pion, kaon, and proton FFs [@ref:dsspion; @ref:dssproton], we refrain from performing a full-fledged error analysis.
However, in order to get some idea of the uncertainties of the $D_i^{\eta}$ associated with experimental errors, how they propagate into observables, and the role of the different data sets in constraining the $D_i^{\eta}$, we perform a brief study based on Lagrange multipliers [@ref:lagrange; @ref:dsspion; @ref:dssv]. This method relates the range of variation of a physical observable ${\cal{O}}$ dependent on FFs to the variation in the $\chi^2$ function used to judge the goodness of the fit. To this end, one minimizes the function $$\label{eq:lm} \Phi(\lambda,\{a_i\}) = \chi^2(\{a_i\}) + \lambda\, {\cal{O}}(\{a_i\})$$ with respect to the set of parameters $\{a_i\}$ describing the FFs in Eqs. (\[eq:ansatz\]) and (\[eq:ansatz-hq\]) for fixed values of $\lambda$. Each of the Lagrange multipliers $\lambda$ is related to an observable ${\cal{O}}(\{a_i\})$, and the choice $\lambda=0$ corresponds to the optimum global fit. From a series of fits for different values of $\lambda$ one can map out the $\chi^2$ profile for any observable ${\cal{O}}(\{a_i\})$ free of the assumptions made in the traditional Hessian approach [@ref:hessian]. As a first example and following the DSS analyses [@ref:dsspion; @ref:dssproton], we discuss the range of variation of the truncated second moments of the eta FFs, $$\label{eq:truncmom} \xi^{\eta}_i(z_{\min},Q) \equiv \int_{z_{\min}}^1 z D_i^{\eta}(z,Q)\, dz,$$ for $z_{\min}=0.2$ and $Q=5\,\mathrm{GeV}$ around the values obtained in the optimum fit to data, $\xi^{\eta}_{i\,0}$. In a LO approximation, the second moments $\int_0^1 zD_i^{\eta}(z,Q)dz$ represent the energy fraction of the parent parton of flavor $i$ taken by the eta meson at a scale $Q$. The truncated moments in Eq. (\[eq:truncmom\]) discard the low-$z$ contributions, which are not constrained by data and, more importantly, where the framework of FFs does not apply. 
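To make Eq. (\[eq:truncmom\]) concrete, the truncated moment can be evaluated by simple numerical quadrature. The sketch below (our own illustration) uses the input parametrization of Eq. (\[eq:ansatz\]) with the fitted light-quark parameters at $\mu_0=1\,\mathrm{GeV}$; the values discussed in the text refer instead to FFs evolved to $Q=5\,\mathrm{GeV}$, which requires the full evolution code:

```python
import math

def euler_beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def D_eta(z, N, alpha, beta, gamma_, delta):
    # input parametrization of Eq. (ansatz)
    norm = euler_beta(2 + alpha, beta + 1) + gamma_ * euler_beta(2 + alpha, beta + delta + 1)
    return N * z**alpha * (1 - z)**beta * (1 + gamma_ * (1 - z)**delta) / norm

def xi_truncated(z_min, ff, n=100000):
    # Eq. (truncmom): int_{z_min}^1 z D(z) dz by the midpoint rule
    h = (1.0 - z_min) / n
    return sum((z_min + (k + 0.5) * h) * ff(z_min + (k + 0.5) * h)
               for k in range(n)) * h

ff_q = lambda z: D_eta(z, 0.038, 1.372, 1.487, 2000.0, 34.03)
xi_02 = xi_truncated(0.2, ff_q)  # truncated moment at z_min = 0.2 (input scale)
# xi_truncated(0.0, ff_q) recovers the full second moment 0.038 by construction
```

Lowering `z_min` towards zero interpolates between the truncated and the full second moment, making explicit how much of the moment the discarded low-$z$ region carries.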
In general, FFs enter calculations of cross sections as convolutions over a wide range of $z$, and, consequently, the $\xi^{\eta}_i(z_{\min},Q)$ give a first, rough idea of how uncertainties in the FFs will propagate to observables. The solid lines in Fig. \[fig:profiles-ffs\] show the $\xi^{\eta}_i(z_{\min},Q)$ defined in Eq. (\[eq:truncmom\]) for $i=u+\bar{u}$, $g$, $c+\bar{c}$, and $b+\bar{b}$ against the corresponding increase $\Delta \chi^2$ in the total $\chi^2$ of the fit. The two horizontal lines indicate a $\Delta \chi^2$ of one unit and an increase by $2\%$, which amounts to about 4 units in $\chi^2$, see Tab. \[tab:data\]. The latter $\Delta \chi^2$ should give a more faithful estimate of the relevant uncertainties in global QCD analyses [@ref:dsspion; @ref:dssproton; @ref:cteq; @ref:dssv] than an increase by one unit. As can be seen, the truncated moment $\xi^{\eta}_{u+\overline{u}}$, associated with the light quark FFs $D^{\eta}_{u+\overline{u}}= D^{\eta}_{d+\overline{d}}=D^{\eta}_{s+\overline{s}}$, is constrained within a range of variation of approximately $^{+30\%}_{-20\%}$ around the value computed with the best fit, assuming a conservative increase in $\chi^2$ by $2\%$. The estimated uncertainties are considerably larger than the corresponding ones found for pion and kaon FFs, which are typically of the order of $\pm$3% and $\pm$10% for the light quark flavors [@ref:dsspion], respectively, but closer to the $\pm 20\%$ observed for proton and anti-proton FFs [@ref:dssproton]. For the truncated moment $\xi_g^{\eta}$ of gluons shown in the upper right panel of Fig. \[fig:profiles-ffs\], the range of uncertainty is slightly smaller than the one found for light quarks and amounts to about $\pm 15\%$. The allowed variations are larger for charm and bottom FFs, as can be inferred from the lower row of plots in Fig. \[fig:profiles-ffs\].
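The mechanics behind these profiles can be illustrated with a deliberately simple toy (entirely hypothetical one-parameter $\chi^2$ and observable; the real scan minimizes Eq. (\[eq:lm\]) over all fit parameters of the FFs):

```python
def chi2_toy(a):
    # hypothetical one-parameter chi^2 with its minimum at a = 1
    return ((a - 1.0) / 0.1) ** 2

def observable(a):
    # hypothetical observable O({a_i}) depending on the fit parameter
    return a * a

# for each lambda, minimize Phi = chi^2 + lambda*O (Eq. (lm)) on a grid;
# the collected (O, chi^2) pairs trace out the chi^2 profile of O
grid = [0.5 + 1e-4 * k for k in range(10001)]  # a in [0.5, 1.5]
profile = []
for lam in range(-10, 11):
    a_star = min(grid, key=lambda a: chi2_toy(a) + lam * observable(a))
    profile.append((observable(a_star), chi2_toy(a_star)))
# lam = 0 reproduces the unconstrained best fit (Delta chi^2 = 0); nonzero lam
# maps out how chi^2 grows as O is pushed away from its best-fit value
```

Plotting the second entry of each pair against the first gives exactly the kind of $\Delta\chi^2$ profile shown in Fig. \[fig:profiles-ffs\], without assuming a parabolic shape.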
Apart from larger experimental uncertainties and the much smaller amount of SIA data for identified eta mesons, the lack of any information from SIDIS is particularly responsible for the large range of variations found for the light quarks in Fig. \[fig:profiles-ffs\]. We recall that the missing SIDIS data for produced eta mesons also forced us to assume that all light quark FFs are the same in Eq. (\[eq:ansatz\]). The additional ambiguities due to this assumption are not reflected in the $\chi^2$ profiles shown in Fig. \[fig:profiles-ffs\]. The FFs for charm and bottom quarks into eta mesons suffer most from the lack of flavor tagged data in SIA. To further illuminate the role of the different data sets in constraining the $D_{i}^{\eta}$ we give also the partial contributions to $\Delta \chi^2$ of the individual data sets from $pp$ collisions and the combined SIA data in all panels of Fig. \[fig:profiles-ffs\]. Surprisingly, the light quark FFs are constrained best by the PHENIX $pp$ data from run ’06 and not by SIA data. SIA data alone would prefer a smaller value for $\xi^{\eta}_{u+\bar{u}}$ by about $10\%$, strongly correlated to larger moments for charm and bottom fragmentation, but the minimum in the $\chi^2$ profile is much less pronounced and very shallow, resulting in rather sizable uncertainties. This unexpected result is most likely due to the fact that the SIA data from LEP experiments constrain mainly the flavor singlet combination, i.e., the sum of all quark flavors, including charm and bottom. Since there are no flavor tagged data available from SIA for eta mesons, the separation into contributions from light and heavy quark FFs is largely unconstrained by SIA data. Only the fairly precise data from BABAR at $\sqrt{S}\simeq 10\,\mathrm{GeV}$ provide some guidance as they constrain a different combination of the light $u$, $d$, and $s$ quark FFs weighted by the respective electric charges. 
Altogether, this seems to have a negative impact on the constraining power of the SIA data. For not too large values of $p_T$, data obtained in $pp$ collisions are in turn mainly sensitive to $D_g^{\eta}$ but in a limited range of $z$, $0.4\lesssim z \lesssim 0.6$, as mentioned above. Through the scale evolution, which couples quark and gluon FFs, these data provide a constraint on $\xi^{\eta}_{u+\bar{u}}$. In addition, the latest PHENIX data extend to a region of $p_T$ where quark fragmentation becomes important as well. To illustrate this quantitatively, Fig. \[fig:fractions\] shows the relative fractions of quarks and gluons fragmenting into the observed eta meson as a function of $p_T$ in $pp$ collisions for PHENIX kinematics. As can be seen, quark-to-eta FFs become dominant for $p_T\gtrsim 10\,\mathrm{GeV}$. The $\chi^2$ profile for the truncated moment of the gluon, $\xi^{\eta}_g$, is the result of an interplay between the PHENIX run ’06 $pp$ data and the SIA data sets which constrain the moment $\xi^{\eta}_g$ towards smaller and larger values, respectively. This highlights the complementarity of the $pp$ and SIA data. SIA data have an impact on $\xi^{\eta}_g$ mainly through the scale evolution in the energy range from LEP to BABAR. In addition, SIA data provide information in the entire range of $z$, whereas the $pp$ data constrain only the large $z$ part of the truncated moment $\xi^{\eta}_g$. Consequently, the corresponding $\chi^2$ profile for $z_{\min}=0.4$ or $0.5$ would be much more dominated by $pp$ data. In general, the other data sets from PHENIX [@ref:phenix2006] do not have a significant impact on any of the truncated moments shown in Fig. \[fig:profiles-ffs\] due to their limited precision and covered kinematic range. Compared to pion and kaon FFs [@ref:dsspion], all $\chi^2$ profiles in Fig. \[fig:profiles-ffs\] are significantly less parabolic, which prevents one from using the Hessian method [@ref:hessian] for estimating uncertainties. 
More importantly, the shapes of the $\chi^2$ profiles reflect the very limited experimental information presently available to extract eta FFs for all flavors reliably. Another indication in that direction is that the SIA and $pp$ data prefer different minima for the values of the $\xi_i^{\eta}$, although the differences are tolerable within the large uncertainties. Our fit is still partially driven by the set of assumptions on the functional form of and relations among different FFs, which we are forced to impose in order to keep the number of free fit parameters at a level such that they can actually be determined by data. Future measurements of eta production in SIA, $pp$ collisions, and, in particular, SIDIS are clearly needed to test the assumptions made in our analysis and to further constrain the $D_i^{\eta}$. The large variations found for the individual FFs in Fig. \[fig:profiles-ffs\] are strongly correlated, and, therefore, their impact on uncertainty estimates might be significantly reduced for certain observables. If, in addition, the observable of interest is only sensitive to a limited range of hadron momentum fractions $z$, then the corresponding $\chi^2$ profile may assume a more parabolic shape. In order to illustrate this for a specific example, we compute the $\chi^2$ profiles related to variations in the theoretical estimates of the single-inclusive production of eta mesons in $pp$ collisions at PHENIX kinematics [@ref:phenix-run6]. The results are shown in Fig. \[fig:profiles-pp\] for four different values of $p_T$ along with the individual contributions to $\Delta \chi^2$ from the SIA and $pp$ data sets. As anticipated, we find a rather different picture as compared to Fig. \[fig:profiles-ffs\], with variations ranging only from $5$ to $10\%$, depending on the $p_T$ value, when tolerating $\Delta \chi^2/\chi^2=2\%$. The corresponding uncertainty bands are also plotted in Fig. 
\[fig:hadronic06\] above for both $\Delta \chi^2=1$ and $\Delta \chi^2/\chi^2=2 \%$ and have been obtained for the other $pp$ data from PHENIX [@ref:phenix2006] shown in Figs. \[fig:hadronic2g\] and \[fig:hadronic3pi\] as well. The uncertainties for $pp \to \eta X$ are smallest for intermediate $p_T$ values, where the latest PHENIX measurement [@ref:phenix-run6] is most precise and the three data sets [@ref:phenix2006; @ref:phenix-run6] have maximum overlap, and increase towards either end of the $p_T$ range of the run ’06 data. In particular at intermediate $p_T$ values, the main constraint comes from the PHENIX run ’06 data, whereas SIA data become increasingly relevant at low $p_T$. The previous $pp$ measurements from PHENIX [@ref:phenix2006] are limited to $p_T\lesssim 11\,\mathrm{GeV}$ and have considerably larger uncertainties and, hence, less impact on the fit. Conclusions\[sec:conclusions\] ============================== A first global QCD analysis of eta fragmentation functions at NLO accuracy has been presented based on the world data from electron-positron annihilation experiments and latest results from proton-proton collisions. The obtained parameterizations [@ref:fortran] reproduce all data sets very well over a wide kinematic range. Even though the constraints imposed on the eta meson fragmentation functions by presently available data are significantly weaker than those for pions or kaons, the availability of eta FFs extends the applicability of the pQCD framework to new observables of topical interest. Among them are the double-spin asymmetry for eta production in longitudinally polarized proton-proton collisions at RHIC, eta meson production at the LHC, possible medium modifications in the hadronization in the presence of a heavy nucleus, and predictions for future semi-inclusive lepton-nucleon scattering experiments. 
The obtained FFs still depend on certain assumptions, like $SU(3)$ symmetry for the light quarks, dictated by the lack of data constraining the flavor separation sufficiently well. Compared to FFs for other hadrons they show interesting patterns of similarities and differences which can be further tested with future data. We are grateful to David R. Muller for help with the BABAR data. CAA gratefully acknowledges the support of the U.S. Department of Energy for this work through the LANL/LDRD Program. The work of FE and JPS was supported by grants no. DE-FG02-04ER41301 and no. DE-FG02-94ER40818, respectively. This work was supported in part by CONICET, ANPCyT, UBACyT, BMBF, and the Helmholtz Foundation. [99]{} J. C. Collins and D. E. Soper, Nucl. Phys. B [**193**]{}, 381 (1981); B [**213**]{}, 545(E) (1983); B [**194**]{}, 445 (1982). See, e.g., J. C. Collins, D. E. Soper, and G. Sterman, “Perturbative QCD”, A. H. Mueller (ed.), Adv. Ser. Direct. High Energy Phys. [**5**]{}, 1 (1988) and references therein. See, e.g., S. Albino [*et al.*]{}, [arXiv:0804.2021]{} and references therein. R. D. Field and R. P. Feynman, Nucl. Phys.  B [**136**]{}, 1 (1978). D. de Florian, R. Sassot, and M. Stratmann, Phys. Rev.  D [**75**]{}, 114010 (2007). D. de Florian, R. Sassot, and M. Stratmann, Phys. Rev.  D [**76**]{}, 074033 (2007). S. Albino, B. A. Kniehl, and G. Kramer, Nucl. Phys.  B [**803**]{}, 42 (2008). M. Hirai, S. Kumano, and T. H. Nagai, Phys. Rev.  C [**76**]{}, 065207 (2007). S. Kretzer, Phys. Rev.  D [**62**]{}, 054001 (2000). D. de Florian, R. Sassot, M. Stratmann, and W. Vogelsang, Phys. Rev. Lett.  [**101**]{}, 072001 (2008); Phys. Rev.  D [**80**]{}, 034030 (2009). D. de Florian, M. Stratmann, and W. Vogelsang, Phys. Rev.  D [**57**]{}, 5811 (1998). P. M. Nadolsky [*et al.*]{}, Phys. Rev.  D [**78**]{}, 013004 (2008). A. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt, Eur. Phys. J.  C [**63**]{}, 189 (2009); R. D. Ball [*et al.*]{}, Nucl. Phys.  
B [**838**]{}, 136 (2010). D. Stump [*et al.*]{}, Phys. Rev.  D [**65**]{}, 014012 (2001). M. Greco and S. Rolli, Z. Phys.  C [**60**]{}, 169 (1993). D. Indumathi, H. S. Mani, and A. Rastogi, Phys. Rev.  D [**58**]{}, 094014 (1998); D. Indumathi and B. Misra, [arXiv:0901.0228]{}. S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev.  C [**75**]{}, 024909 (2007). F. Ellinghaus \[PHENIX Collaboration\], [arXiv:0808.4124]{}. See, e.g., F. Arleo, Eur. Phys. J.  C [**61**]{}, 603 (2009); A. Accardi, F. Arleo, W. K. Brooks, D. D’Enterria, and V. Muccifora, Riv. Nuovo Cim.  [**032**]{}, 439 (2010). R. Sassot, M. Stratmann, and P. Zurita, Phys. Rev.  D [**81**]{}, 054001 (2010). G. Curci, W. Furmanski, and R. Petronzio, Nucl. Phys. B [**[175]{}**]{}, 27 (1980); W. Furmanski and R. Petronzio, Phys. Lett. [**[97B]{}**]{}, 437 (1980); L. Beaulieu, E. G. Floratos, and C. Kounnas, Nucl. Phys. B [**166**]{}, 321 (1980); P. J. Rijken and W. L. van Neerven, Nucl. Phys.  B [**487**]{}, 233 (1997); M. Stratmann and W. Vogelsang, Nucl. Phys. B [**496**]{}, 41 (1997); A. Mitov and S. Moch, Nucl. Phys.  B [**751**]{}, 18 (2006); A. Mitov, S. Moch, and A. Vogt, Phys. Lett.  B [**638**]{}, 61 (2006); S. Moch and A. Vogt, Phys. Lett.  B [**659**]{}, 290 (2008). G. Altarelli, R. K. Ellis, G. Martinelli, and S. Y. Pi, Nucl. Phys.  B [**160**]{}, 301 (1979); W. Furmanski and R. Petronzio, Z. Phys.  C [**11**]{}, 293 (1982); P. Nason and B. R. Webber, Nucl. Phys.  B [**421**]{}, 473 (1994) \[Erratum-ibid.  B [**480**]{}, 755 (1996)\]. F. Aversa, P. Chiappetta, M. Greco, and J. P. Guillet, Nucl. Phys.  B [**327**]{}, 105 (1989); D. de Florian, Phys. Rev.  D [**67**]{}, 054004 (2003); B. Jäger, A. Schäfer, M. Stratmann, and W. Vogelsang, Phys. Rev.  D [**67**]{}, 054005 (2003). M. Stratmann and W. Vogelsang, Phys. Rev. D [**64**]{}, 114007 (2001). S. Abachi [*et al.*]{} \[HRS Collaboration\], Phys. Lett.  B [**205**]{}, 111 (1988). G. 
Wormser [*et al.*]{} \[MARK-II Collaboration\], Phys. Rev. Lett.  [**61**]{}, 1057 (1988). W. Bartel [*et al.*]{} \[JADE Collaboration\], Z. Phys.  C [**28**]{}, 343 (1985). D. Pitzl [*et al.*]{} \[JADE Collaboration\], Z. Phys.  C [**46**]{}, 1 (1990) \[Erratum-ibid.  C [**47**]{}, 676 (1990)\]. H. J. Behrend [*et al.*]{} \[CELLO Collaboration\], Z. Phys.  C [**47**]{}, 1 (1990). D. Buskulic [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett.  B [**292**]{}, 210 (1992). R. Barate [*et al.*]{} \[ALEPH Collaboration\], Eur. Phys. J.  C [**16**]{}, 613 (2000). A. Heister [*et al.*]{} \[ALEPH Collaboration\], Phys. Lett.  B [**528**]{}, 19 (2002). O. Adriani [*et al.*]{} \[L3 Collaboration\], Phys. Lett.  B [**286**]{}, 403 (1992). M. Acciarri [*et al.*]{} \[L3 Collaboration\], Phys. Lett.  B [**328**]{}, 223 (1994). K. Ackerstaff [*et al.*]{} \[OPAL Collaboration\], Eur. Phys. J.  C [**5**]{}, 411 (1998). F. Anulli \[BABAR Collaboration\], [arXiv:hep-ex/0406017]{}. A. Adare [*et al.*]{} \[PHENIX Collaboration\], [arXiv:1009.6224]{}. R. Sassot, M. Stratmann, and P. Zurita, [arXiv:1008.0540]{} (to appear in Phys. Rev. D). L. Apanasevich [*et al.*]{} \[Fermilab E706 Collaboration\], Phys. Rev.  D [**68**]{}, 052001 (2003). D. de Florian and W. Vogelsang, Phys. Rev.  D [**71**]{}, 114004 (2005). A [Fortran]{} package containing our NLO set of eta FFs can be obtained upon request from the authors. P. M. Nadolsky [*et al.*]{}, Phys. Rev.  D [**78**]{}, 013004 (2008). [^1]: address after Oct. $1^{\mathrm{st}}$: Physics Department, Brookhaven National Laboratory, Upton, NY 11973, USA
--- author: - 'M. C. Wyatt' date: 'Submitted 27 September 2004, Accepted 13 December 2004' title: 'The Insignificance of P-R Drag in Detectable Extrasolar Planetesimal Belts' --- Introduction {#s:intro} ============ Some 15% of nearby stars exhibit more infrared emission than that expected from the stellar photosphere alone (e.g., Aumann et al. 1984). This excess emission comes from dust in orbit around the stars and its relatively cold temperature implies that it resides at large distances from the stars, 30-200 AU, something which has been confirmed for the disks which are near enough and bright enough for their dust distribution to be imaged (Holland et al. 1998; Greaves et al. 1998; Telesco et al. 2000; Wyatt et al. 2004). Because the dust would spiral inwards due to Poynting-Robertson (P-R) drag or be destroyed in mutual collisions on timescales which are much shorter than the ages of these stars, the dust is thought to be continually replenished (Backman & Paresce 1993), probably from collisions between km-sized planetesimals (Wyatt & Dent 2002). In this way the disks are believed to be the extrasolar equivalents of the Kuiper Belt in the Solar System (Wyatt et al. 2003). These debris disks will play a pivotal role in increasing our understanding of the outcome of planet formation. Not only do these disks tell us about the distribution of planetesimals resulting from planet formation processes, but they may also provide indirect evidence of unseen planets in their systems. Models have been presented that show how planets can cause holes at the centre of the disks and clumps in the azimuthal distribution of dust, both of which are commonly observed features of debris disks. Many of these models require the dust to migrate inward due to P-R drag to be valid; e.g., in the model of Roques et al. 
(1994) the inner hole is caused by a planet which prevents dust from reaching the inner system which would otherwise be rapidly replenished by P-R drag (e.g., Strom, Edwards & Skrutskie 1993), and clumps arise in models when dust migrates inward due to P-R drag and becomes trapped in a planet’s resonances (Ozernoy et al. 2000; Wilner et al. 2002; Quillen & Thorndike 2002). Alternative models exist for the formation of both inner holes and clumps; e.g., in some cases inner holes may be explained by the sublimation of icy grains (Jura et al. 1998) or by the outward migration of dust to the outer edge of a gas disk (Takeuchi & Artymowicz 2001), and clumps may arise from the destruction of planetesimals which were trapped in resonance with a planet when it migrated out from closer to the star (Wyatt 2003). The focus of the models on P-R drag is perhaps not surprising, as the dynamical evolution of dust in the solar system is undeniably dominated by the influence of P-R drag, since this is the reason the inner solar system is populated with dust (Dermott et al. 1994; Liou & Zook 1999; Moro-Martín & Malhotra 2002). However, there is no reason to expect that the physics dominating the structure of extrasolar planetesimal disks should be the same as that in the solar system. In fact the question of whether any grains in a given disk suffer significant P-R drag evolution is simply determined by how dense that disk is (Wyatt et al. 1999). It has been noted by several authors that the collisional lifetime of dust grains in the well studied debris disks is shorter than that of P-R drag (e.g., Backman & Paresce 1993; Wilner et al. 2002; Dominik & Decin 2003), a condition which means that P-R drag can be ignored in these systems. Clearly it is of vital importance to know which physical processes are at play in debris disks to ascertain the true origin of these structures. 
In this paper I show that P-R drag is not an important physical process in the disks which have been detected to date, because collisions occur on much shorter timescales, meaning that planetesimals are ground down into dust which is fine enough to be removed by radiation pressure before P-R drag has had a chance to act. In §\[s:sm\] a simple model is derived for the spatial distribution of dust created in a planetesimal belt. In §\[s:em\] this model is used to determine the emission spectrum of these dust disks. A discussion of the influence of P-R drag in detectable and detected debris disks as well as of the implications for how structure in these disks should be modelled and interpreted is given in §\[s:disc\]. Balance of Collisions and P-R Drag {#s:sm} ================================== In this simple model I consider a planetesimal belt at a distance of $r_0$ from a star of mass $M_\star$ which is producing particles all of the same size, $D$. The orbits of those particles are affected by the interaction of the dust grains with stellar radiation, which produces a force inversely proportional to the square of distance from the star, and which is commonly defined by the parameter $\beta=F_{rad}/F_{grav}$ (Burns et al. 1979; Gustafson 1994). This parameter is a function of particle size and for large particles $\beta \propto 1/D$. The tangential component of this force is known as Poynting-Robertson drag, or P-R drag. This results in a loss of angular momentum from the particle’s orbit, which makes it spiral in toward the star. Assuming the particle’s orbit was initially circular, the migration rate is: $$\dot{r}_{pr} = -2\alpha/r, \label{eq:rpr}$$ where $\alpha = 6.24 \times 10^{-4} (M_\star/M_\odot)\beta$ AU$^2$yr$^{-1}$ (Wyatt et al. 1999). On their way in, dust grains may collide with other dust grains. 
The mean time between such collisions depends on the dust density: $$t_{coll}(r) = t_{per}(r) / 4\pi \tau_{eff}(r), \label{eq:tcoll}$$ where $t_{per} = \sqrt{(r/a_\oplus)^3(M_\odot/M_\star)}$ is the orbital period at this distance from the star, and $\tau_{eff}$ is the effective optical depth of the disk, or the surface density of cross-sectional area of the dust (Wyatt et al. 1999). If the collisions are assumed to be destructive then the distribution of dust in the disk can be determined by considering the amount of material entering and leaving an annulus at $r$ of width $dr$. The steady state solution is that the amount entering the annulus due to P-R drag is equal to that leaving due to P-R drag and that which is lost by collisions (i.e., the continuity equation): $$d[n(r)\dot{r}_{pr}(r)]/dr = -N^{-}(r), \label{eq:cont}$$ where $n(r)$ is the one dimensional number density (number of particles per unit radius), and $N^{-}(r) = n(r)/t_{coll}(r)$ is the rate of collisional loss of $n(r)$. Since in a thin disk $\tau_{eff}(r) = 0.125 D^2 n(r)/r$, this continuity equation can be solved analytically to find the variation of effective optical depth with distance from the star (Wyatt 1999): $$\begin{aligned} \tau_{eff}(r) & = & \frac{\tau_{eff}(r_0)} {1+4\eta_0(1-\sqrt{r/r_0})} \label{eq:prwcoll} \\ \eta_0 & = & 5000 \tau_{eff}(r_0)\sqrt{(r_0/a_\oplus)(M_\odot/M_\star)}/\beta \label{eq:eta0}\end{aligned}$$ where this distribution has been scaled by the boundary condition that at $r_0$, $\tau_{eff} = \tau_{eff}(r_0)$. This distribution is shown in Fig. \[fig:teffeta0\]. The result is that in disks which are very dense, i.e., those for which $\eta_0 \gg 1$, most of the dust is confined to the region where it is produced. Very little dust in such disks makes it into the inner regions as it is destroyed in mutual collisions before it gets there. 
In disks which are tenuous, however, i.e., those for which $\eta_0 \ll 1$, all of the dust makes it to the star without suffering a collision. The consequence is a dust distribution with a constant surface density, as expected from P-R drag, since this is the solution to $d[n(r)\dot{r}_{pr}(r)]/dr = 0$. Disks with $\eta_0 \approx 1$ have a distribution which reflects the fact that some fraction of the dust migrates in without encountering another dust grain, while other dust grains are destroyed. This can be understood by considering that $\eta_0 = 1$ describes the situation in which the collisional lifetime in the source region given by eq. \[eq:tcoll\] equals the time it takes for a dust grain to migrate from the source region to the star, which from eq. \[eq:rpr\] is $t_{pr} = 400(M_\odot/M_\star)(r_0/a_\oplus)^2/\beta$ years. **(a)** ![image](2073fg1a.ps){width="3.2in"} **(b)** ![image](2073fg1b.ps){width="3.2in"} Fig. \[fig:teffeta0\]b shows the distribution of dust originating in a planetesimal belt 30 AU from a solar mass star for different dust production rates. This illustrates the fact that the density at the centre does not keep increasing with the dust production rate once the disk is dense enough for collisions to dominate: some particles still make it into the inner system even when the disk is very dense, but their number is capped. A look at eqs. \[eq:prwcoll\] and \[eq:eta0\] shows that even in the limit of a very dense disk the effective optical depth at the centre of the disk cannot exceed $$\max[\tau_{eff}(r=0)] = 5 \times 10^{-5} \beta \sqrt{(M_\star/M_\odot)(a_\oplus/r_0)},$$ which for the belt plotted here means that the density at the centre is at most $4.6 \times 10^{-6}$. 
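The closed-form solution makes these limits easy to check numerically. The short Python sketch below (a minimal illustration of eqs. \[eq:prwcoll\] and \[eq:eta0\], not code from this paper; the function names are mine) evaluates the profile and its dense-disk ceiling for the belt of Fig. \[fig:teffeta0\]b.

```python
import math

def eta0(tau0, r0_au, mstar_msun=1.0, beta=0.5):
    """Eq. (eq:eta0): eta_0 = 5000 tau_eff(r0) sqrt((r0/AU)(Msun/Mstar)) / beta."""
    return 5000.0 * tau0 * math.sqrt(r0_au / mstar_msun) / beta

def tau_eff(r_au, tau0, r0_au, mstar_msun=1.0, beta=0.5):
    """Eq. (eq:prwcoll): steady-state effective optical depth interior to the belt."""
    e0 = eta0(tau0, r0_au, mstar_msun, beta)
    return tau0 / (1.0 + 4.0 * e0 * (1.0 - math.sqrt(r_au / r0_au)))

def max_central_tau(r0_au, mstar_msun=1.0, beta=0.5):
    """Dense-disk limit tau0 -> infinity of tau_eff(0), i.e. tau0 / (4 eta_0)."""
    return 5e-5 * beta * math.sqrt(mstar_msun / r0_au)

# Belt at 30 AU around a solar-mass star with beta = 0.5: the central
# optical depth saturates near 4.6e-6 however dense the belt is made.
ceiling = max_central_tau(30.0)
```

Raising `tau0` by orders of magnitude leaves `tau_eff(0.0, tau0, 30.0)` pinned just below `ceiling`, which is the saturation behaviour described above.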
Of course the situation described above is a simplification, since dust is really produced with a range of sizes. Dust of different sizes would have different migration rates, as defined by eq. \[eq:rpr\], but would also have different collisional lifetimes. Eq. \[eq:tcoll\] was derived under the assumption that the dust is most likely to collide with grains of similar size (Wyatt et al. 1999), collisions which were assumed to be destructive. In reality the collisional lifetime depends on particle size, in a way which depends on the size distribution of dust in the disk, and the size of impactor required to destroy the particle, rather than result in a non-destructive collision (e.g., Wyatt & Dent 2002). Once such a size distribution is considered, one must also consider that dust of a given size is not only destroyed in collisions, but also replenished by the destruction of larger particles. The resulting continuity equation can no longer be solved analytically, but must be solved numerically along with an appropriate model for the outcome of collisions between dust grains of different sizes. Such a solution is not attempted in this paper which is more interested in the large scale distribution of material in extrasolar planetesimal belts for which the assumption that the observations are dominated by grains of just one size is a fair first approximation, albeit one which should be explored in more detail in future work. Emission Properties {#s:em} =================== For simplicity the emission properties of the disk are derived under the assumption that dust at a given distance from the star is heated to black body temperatures of $T_{bb} = 278.3 (L_\star/L_\odot)^{1/4}/\sqrt{r/a_\oplus}$ K. It should be noted, however, that small dust grains tend to emit at temperatures hotter than this because they emit inefficiently at mid- to far-IR wavelengths, and temperatures above black body have been seen in debris disks (e.g., Telesco et al. 2000). 
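Under the black-body assumption, temperature maps directly onto radius, and Wien's law then fixes the radius whose dust emission peaks at a given wavelength. A short sketch (illustrative only; the function names are mine):

```python
import math

def t_bb(r_au, lstar_lsun=1.0):
    """Black-body equilibrium temperature T = 278.3 (L/Lsun)^(1/4) / sqrt(r/AU) in K."""
    return 278.3 * lstar_lsun**0.25 / math.sqrt(r_au)

def r_peak(lambda_um, lstar_lsun=1.0):
    """Radius (AU) whose black-body dust peaks at wavelength lambda (um), using
    Wien's law lambda_max = 2898 um K / T; dust much further out emits on the
    Wien side of the black body curve and contributes little at this wavelength."""
    t = 2898.0 / lambda_um
    return (278.3 * lstar_lsun**0.25 / t) ** 2
```

Note that `r_peak` scales as $\lambda^2$, which is the origin of the region-size argument used below to explain the $F_\nu \propto \lambda$ regime of the spectrum.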
![image](2073fig2.ps){width="5.0in"} The emission spectra of dust from planetesimal belts around stars of different spectral type are shown in Fig. \[fig:fnus\]. The shape of these spectra can be understood qualitatively. At the longest wavelengths all of the dust is emitting in the Rayleigh-Jeans regime leading to a spectrum $F_\nu \propto \lambda^{-2}$. At shorter wavelengths there is a regime in which $F_\nu \propto \lambda$. This emission arises from the dust which is closest to the star. Since dust which has a temperature $\ll 2898\,\mu$m$\,$K$/\lambda$ is emitting on the Wien side of the black body curve, this contributes little to the flux at this wavelength. Thus the flux at a given wavelength comes from a region around the star extending out to a radius $\propto \lambda^2$, corresponding to an area $\propto \lambda^4$ and so an emission spectrum $F_\nu \propto \lambda$ (see also Jura et al. 1998). For dust belts in which $\eta_0 \ll 1$ the two regimes blend smoothly into one another at a wavelength corresponding to the peak of black body emission at the distance of $r_0$. For more massive disks the shorter wavelength component is much smaller leading to a spectrum which more closely resembles black body emission at the distance of $r_0$ plus an additional hot component. The flux presented in Fig. \[fig:fnus\] includes one contentious assumption, which is the size of the dust grains used for the parameter $\beta$. The most appropriate number to use is that for the size of grains contributing most to the observed flux from the disk. In general that corresponds to the size at which the cross-sectional area distribution peaks. In a collisional cascade size distribution the cross-sectional area is concentrated in the smallest grains in the distribution. Since dust with $\beta > 0.5$ is blown out of the system by radiation pressure as soon as it is created, this implies that $\beta = 0.5$ is the most appropriate value to use, which is what was assumed in Fig. 
\[fig:fnus\]. However, evolution due to P-R drag has an effect on the size distribution. Since small grains are removed faster than large grains (see eq. \[eq:rpr\]), the resulting cross-sectional area distribution peaks at large sizes (Wyatt et al. 1999; Dermott et al. 2001). Analogy with the zodiacal cloud, in which the cross-sectional area distribution peaks at a few hundred $\mu$m (Love & Brownlee 1993), implies that a much lower value of $\beta$ may be more appropriate, perhaps as low as 0.01 for disks in which $\eta_0 \ll 1$. Thus the fluxes given in Fig. \[fig:fnus\] should be regarded as upper limits to the flux expected from these disks (since $\beta \leq 0.5$ regardless). This is particularly true for fluxes at wavelengths longer than 100 $\mu$m, because even in a collisional cascade distribution the emission at sub-mm wavelengths is dominated by grains larger than a few hundred $\mu$m, since grains smaller than this emit inefficiently at long wavelengths (see e.g., Fig. 5 of Wyatt & Dent 2002). Inefficient emission at long wavelengths results in a spectrum which is steeper than $F_\nu \propto \lambda^{-2}$ in the Rayleigh-Jeans regime. For debris disks the observed spectrum is seen to fall off at a rate closer to $F_\nu \propto \lambda^{-3}$ (Dent et al. 2000). Discussion {#s:disc} ========== A disk’s detectability is determined by two factors. First is the question of whether the disk is bright enough to be detected in a reasonable integration time for a given instrument. For example, SCUBA observations at 850 $\mu$m have a limit of a few mJy (Wyatt, Dent & Greaves 2003) and IRAS observations at 60 and 100 $\mu$m had $3\sigma$ sensitivity limits of around 200 and 600 mJy. More important at short wavelengths, and for nearby stars, however, is how bright the disk is relative to the stellar photosphere. 
This is because unless a disk is resolved in imaging, or is particularly bright, its flux is indistinguishable from the stellar photosphere, the flux of which is not generally known with better precision than $\pm 10$ %. For such cases an appropriate limit for detectability is that the disk flux must be at least 0.3 times that of the photosphere. The total flux presented in Fig. \[fig:fnus\] assumes that the star is at a distance of 10 pc. The flux from disks around stars at different distances scales proportionally with the inverse of the distance squared. However, the ratio of disk flux to stellar flux (shown with a solid line on Fig. \[fig:fnus\]) would remain the same. Given the constraints above, as a first approximation one can consider that the disks which have been detected to date are those with fluxes which lie to the upper right of the photospheric flux in Fig. \[fig:fnus\], but with the caveat that such disks can only be detected out to a certain distance which is a function of the instrument’s sensitivity. This allows conclusions to be reached about the balance between collisions and P-R drag in the disks which could have been detected. Fundamentally this is possible because the effective optical depth and $\eta_0$ are observable parameters (see next paragraph). The first conclusion is that it is impossible to detect disks with $\eta_0 \leq 0.01$ because these are too faint with respect to the stellar photosphere. The conclusion about disks with $\eta_0 = 1$ is less clear cut. It would not be possible to detect such disks if they were, like the asteroid belt, at 3 AU from the host stars. At larger distances the disks are more readily detectable. However, detectability is wavelength dependent, with disks around G0V and M0V stars only becoming detectable longward of around 100 $\mu$m, while those around A0V stars are detectable at $>50$ $\mu$m. 
Disks with $\eta_0 \gg 100$ are readily detectable for all stars, although again there is some dependence on wavelength. Since most disks known to date were discovered by IRAS at 60 $\mu$m, this implies that P-R drag is not a dominant factor governing the evolution of these disks, except for perhaps the faintest disks detected around A stars. To check this conclusion a crude estimate for the value of $\eta_0$ was made for all disks in the debris disk database (http://www.roe.ac.uk/atc/research/ddd). This database includes all main sequence stars determined in previous surveys of the IRAS catalogues to have infrared emission in excess of that of the stellar photosphere (e.g., Stencel & Backman 1991; Mannings & Barlow 1998). To calculate $\eta_0$, first only stars within 100 pc and with detections of excess emission at two IRAS wavelengths were chosen. The fluxes at the longest two of those wavelengths were then used to determine the dust temperature and so its radius by assuming black body emission. Eliminating spectra which implied the emission might be associated with background objects resulted in a list of 37 candidates, including all the well-known debris disks. A disk’s effective optical depth was then estimated from its flux at the longest wavelength: $$\tau_{eff} = F_\nu \Omega_{disk}/B_\nu(T), \label{eq:taueffbnu}$$ where $\Omega_{disk}$ is the inverse of the solid angle subtended by the disk if seen face-on, which for a ring-like disk of radius $r$ and width $dr$ (both in AU) at a distance of $d$ in pc is $6.8 \times 10^9 d^2/(r\,dr)$ sr$^{-1}$. The ring width is generally unknown and so for uniformity it was assumed to be $dr=0.1r$ for all disks. Finally, $\eta_0$ was calculated under the assumption that $\beta=0.5$. All of these stars were found to have $\eta_0 > 10$, with a median value of 80 (see Fig. \[fig:eta0teffobs\]). 
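This estimate can be reproduced in a few lines. The sketch below (an illustration under the stated assumptions $dr = 0.1r$, $\beta = 0.5$, and black-body emission; the function names and unit conventions are mine, not code used in the paper) inverts eq. \[eq:taueffbnu\] for the effective optical depth and then applies eq. \[eq:eta0\].

```python
import math

H = 6.62607015e-34     # Planck constant, J s
KB = 1.380649e-23      # Boltzmann constant, J/K
C = 2.99792458e8       # speed of light, m/s
AU_PER_PC = 4.8481e-6  # angle (rad) subtended by 1 AU at 1 pc

def planck_nu(nu_hz, t_k):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (KB * t_k))

def ring_solid_angle(r_au, dr_au, d_pc):
    """Solid angle (sr) of a face-on ring of radius r, width dr, at distance d;
    its inverse is the 6.8e9 d^2/(r dr) factor quoted in the text."""
    return 2.0 * math.pi * r_au * dr_au * AU_PER_PC**2 / d_pc**2

def tau_eff_from_flux(f_nu_jy, nu_hz, t_k, r_au, d_pc, dr_frac=0.1):
    """Invert F_nu = tau_eff * Omega * B_nu; flux in Jy (1 Jy = 1e-26 W m^-2 Hz^-1)."""
    b_nu_jy_sr = planck_nu(nu_hz, t_k) / 1e-26
    return f_nu_jy / (ring_solid_angle(r_au, dr_frac * r_au, d_pc) * b_nu_jy_sr)

def eta0_from_tau(tau0, r0_au, mstar_msun=1.0, beta=0.5):
    """Eq. (eq:eta0), with beta = 0.5 as assumed in the text."""
    return 5000.0 * tau0 * math.sqrt(r0_au / mstar_msun) / beta
```

Feeding in an observed far-IR flux, the black-body dust temperature, and the implied ring radius then yields $\tau_{eff}$ and hence $\eta_0$ directly.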
[^1] All 18 stars (i.e., half the sample) with $\eta_0 <80$ are of spectral type earlier than A3V, while stars with disks with $\eta_0 > 80$ are evenly distributed in spectral type. It is worth noting that of the disks which have been resolved, those with ages $\sim 10$ Myr all have $\eta_0 > 1000$ ($\beta$ Pic, HR4796, HD141569) while those older than 100 Myr all have $\eta_0 < 100$ (Vega, $\epsilon$ Eridani, Fomalhaut, $\eta$ Corvi). ![The value of $\eta_0$ for the disks of the 37 stars in the debris disk database with excess flux measurements at two wavelengths plotted against the effective temperature of the stars. The disk around the star HD98800 falls off the plot at $\eta_0 \approx 10^{5}$.[]{data-label="fig:eta0teffobs"}](2073fig3.ps){width="3.0in"} The fact that the debris disks which have been detected to date have $\eta_0 \gg 1$ implies that the holes at their centres are not caused by planets which prevent this dust from reaching the inner system. Rather the majority of this dust is ground down in mutual collisions until it is fine enough to be removed by radiation pressure. A similar conclusion was reached by Dominik & Decin (2003) for debris disks which were detected by ISO. This also means that azimuthal structure in the disks cannot be caused by dust migrating into the resonances of a planet (e.g., Kuchner & Holman 2003), at least not due to P-R drag alone. Models of structures in debris disks which have to invoke P-R drag should be reconsidered and would have to include the effect of collisions at the fundamental level to remain viable (e.g., Lecavelier des Etangs et al. 1996), since it appears that P-R drag can effectively be ignored in most detectable disks. Collisions are not 100% efficient at stopping dust from reaching the star, and the small amount which does should result in a small mid-IR excess. If no such emission is detected at a level consistent with the $\eta_0$ for a given disk, then an obstacle such as a planet could be inferred. 
However, because of the low level of this emission with respect to the photosphere, it could only be detected in resolved imaging, making such observations difficult (e.g., Liu et al. 2004). Even in disks with $\eta_0 \ll 1$, the resulting emission spectrum still peaks at the temperature of dust in the planetesimal belt itself. This means that the temperature of the dust is a good tracer of the distribution of the planetesimals, and a relative dearth of warm dust really does indicate a hole in the planetesimal distribution close to the star. While uncertainties in the simple model presented here preclude hard conclusions being drawn on whether it is possible to detect disks with $\eta_0 \approx 1$, it is important to remind the reader that the fluxes plotted in Fig. \[fig:fnus\] used the most optimistic assumptions about the amount of flux emanating from a disk with a given $\eta_0$, so the conclusions may become firmer once a proper analysis of the evolution of a disk with a range of particle sizes is done. However, this study does show that detecting such disks would be much easier at longer wavelengths, since photosphere subtraction is less problematic there. Disks which are too cold for IRAS to detect in the far-IR, but which are bright enough to detect in the sub-mm, have recently been found (Wyatt, Dent & Greaves 2003). Thus disks with $\eta_0 \leq 1$ may turn up in sub-mm surveys of nearby stars. They may also be detected at 160 $\mu$m by SPITZER (Rieke et al. 2004). Aumann, H. H., et al. 1984, , 278, L23 Backman, D. E., & Paresce, F. 1993, in Protostars and Planets III, eds. E. H. Levy & J. I. Lunine (Tucson: Univ. Arizona Press), 1253 Burns, J. A., Lamy, P. L., & Soter, S. 1979, Icarus, 40, 1 Dent, W. R. F., Walker, H. J., Holland, W. S., & Greaves, J. S. 2000, , 314, 702 Dermott, S. F., Jayaraman, S., Xu, Y. L., Gustafson, B. A. S., & Liou, J. C. 1994, , 369, 719 Dermott, S. F., Grogan, K., Durda, D. D., Jayaraman, S., Kehoe, T. J. J., Kortenkamp, S. 
J., & Wyatt, M. C. 2001, in Interplanetary Dust, eds. E. Grun, B. Å. S. Gustafson, S. F. Dermott, H. Fechtig (Heidelberg: Springer-Verlag), 569 Dominik, C., & Decin, G. 2003, , 598, 626 Greaves, J. S., et al. 1998, , 506, L133 Gustafson, B. Å. S. 1994, Annu. Rev. Earth Planet. Sci., 22, 553 Holland, W. S., et al. 1998, , 392, 788 Jura, M., Malkan, M., White, R., Telesco, C., Piña, R., & Fisher, R. S. 1998, , 505, 897 Kuchner, M. J., & Holman, M. J. 2003, , 588, 1110 Lecavelier des Etangs, A., Scholl, H., Roques, F., Sicardy, B., & Vidal-Madjar, A. 1996, Icarus, 123, 168 Liou, J.-C., & Zook, H. A. 1999, , 118, 580 Liu, W. M., et al. 2004, , 610, L125 Love, S. G., & Brownlee, D. E. 1993, Science, 262, 550 Mannings, V., & Barlow, M. J. 1998, , 497, 330 Moro-Martín, A., & Malhotra, R. 2002, , 124, 2305 Ozernoy, L. M., Gorkavyi, N. N., Mather, J. C., & Taidakova, T. A. 2000, , 537, L147 Quillen, A. C., & Thorndike, S. 2002, , 578, L149 Rieke, G. H., et al. 2004, , 154, 25 Roques, F., Scholl, H., Sicardy, B., & Smith, B. A. 1994, Icarus, 108, 37 Strom, S. E., Edwards, S., & Skrutskie, M. F. 1993, in Protostars and Planets III, eds. E. H. Levy & J. I. Lunine (Tucson: Univ. Arizona Press), 837 Stencel, R. E., & Backman, D. E. 1991, , 75, 905 Takeuchi, T., & Artymowicz, P. 2001, , 557, 990 Telesco, C. M., et al. 2000, , 530, 329 Wilner, D. J., Holman, M. J., Kuchner, M. J., & Ho, P. T. P. 2002, , 569, L115 Wyatt, M. C. 1999, PhD Thesis, Univ. Florida Wyatt, M. C. 2003, , 598, 1321 Wyatt, M. C., & Dent, W. R. F. 2002, , 334, 589 Wyatt, M. C., Dermott, S. F., Telesco, C. M., Fisher, R. S., Grogan, K., Holmes, E. K., & Piña, R. K. 1999, , 527, 918 Wyatt, M. C., Dent, W. R. F., & Greaves, J. S. 2003, , 342, 876 Wyatt, M. C., Holland, W. S., Greaves, J. S., & Dent, W. R. F. 2003, Earth Moon Planets, 92, 423 Wyatt, M. C., Greaves, J. S., Dent, W. R. F., & Coulson, I. M. C. 
2004, , in press [^1]: The biggest uncertainties in the derived values of $\eta_0$ are in $r$, $dr$ and $\beta$: e.g., if black body temperatures underestimate the true radius by a factor of 2 and the width of the ring is $dr=0.5r$ then the $\eta_0$ values would have to be reduced by a factor of 10; changes to $\beta$ would increase $\eta_0$.
--- abstract: 'We are carrying out a search for all radio loud Active Galactic Nuclei observed with [*XMM-Newton*]{}, including targeted and field sources, to perform a multi-wavelength study of these objects. We have cross-correlated the Verón-Cetty & Verón (2010) catalogue with the [*XMM-Newton*]{} Serendipitous Source Catalogue (2XMMi) and found around 4000 sources. A literature search provided radio, optical, and X-ray data for 403 sources. This poster summarizes the first results of our study.' --- Introduction and Sample ======================= ![Radio loud (green) and radio quiet (blue and red) AGN. Rx is the ratio between 5 GHz and 2-10 keV emission. R is the ratio between 5 GHz and B-band emission. Boundaries from Panessa et al. 2007. Blue points are sources classified as radio quiet by both the R and Rx parameters. Red points are sources classified as radio quiet by the R parameter and radio loud by the Rx parameter.[]{data-label="loudquiet"}](logr_logrx.ps){width="3.4in"} Bianchi et al. (2009a,b) presented the Catalogue of Active Galactic Nuclei (AGN) in the [*XMM-Newton*]{} Archive (CAIXA). They focused on the radio-quiet, X-ray unobscured (NH $< 2\times 10^{22}$ cm$^{-2}$) AGN observed by XMM-Newton in targeted observations. We are carrying out a similar multiwavelength study for both targeted and field radio-loud AGN observed by [*XMM-Newton*]{}. We cross-correlated the Verón-Cetty & Verón (2010) catalogue (Quasars and Active Galactic Nuclei, 13th edition) with the [*XMM-Newton*]{} Serendipitous Source Catalogue (2XMMi; Watson et al. 2009) Third Data Release, and obtained a list of around 4000 sources. However, only 10% of the sources have published optical and radio data. Our sample consists of all AGN (403 total, Figures \[loudquiet\] and \[lxlb\]) with available X-ray (2-10 keV), optical (B-band) and radio (5 GHz) data. ![X-ray versus B-band luminosities. 
For optical luminosities higher than 10$^{-6}$ erg/s/Hz, radio loud (green) AGN are brighter in X-rays than radio quiet ones (red and blue). This effect is stronger at higher luminosities (10$^{-5}$-10$^{-4}$ erg/s/Hz), where radio loud AGN deviate from the low-luminosity correlation. X-ray emission in radio loud sources could have larger contributions from the jet.[]{data-label="lxlb"}](lumin210_luminb.ps){width="3.4in"} First results and ongoing work ============================== Radio loud sources show jet contributions to the optical and X-ray emission, and are brighter in X-rays than radio quiet ones. The optical and X-ray emission is AGN dominated, with a small contribution from the host. For optical luminosities higher than 10$^{-6}$ erg/s/Hz, radio loud AGN are brighter in X-rays than radio quiet ones. This effect increases at higher luminosities ($10^{-5}-10^{-4}$ erg/s/Hz), where radio loud AGN deviate from the low-luminosity correlation. X-rays in radio loud sources could have higher contributions from the jet. The sample seems to be missing faint radio loud AGN, although at this point it is not clear whether this is due to selection or astrophysical effects. While X-rays in radio loud AGN seem to come mainly from jets, other mechanisms of X-ray emission (e.g. ADAF) are also being studied. A complete spectral optical and X-ray analysis, including also the 0.2-2 keV band, will shed light on the origin of the X-rays in these sources. We are currently studying the sample properties according to different classifications (Seyfert and QSO, or FR I and FR II morphologies), and will include IR data when available, with the goal of carrying out a systematic analysis in as many wavelengths as possible. Bianchi, S., Bonilla, N. F., Guainazzi, M., Matt, G., & Ponti, G. 2009, A&A, 501, 915 Bianchi, S., Guainazzi, M., Matt, G., Fonseca Bonilla, N., & Ponti, G. 2009, A&A, 495, 421 Panessa, F., Barcons, X., & Bassani, L., et al. 2007, A&A, 467, 519 Véron-Cetty, M., & Véron, P. 
2010, A&A, 518, A10 Watson, M. G., Schröder, A. C., Fyfe, D., et al. 2009, A&A, 493, 339
--- author: - Tarek Sayed Ahmed title: 'On complete representability of Pinter’s algebras and related structures' --- We answer an implicit question of Ian Hodkinson’s. We show that atomic Pinter’s algebras may not be completely representable; however, the class of completely representable Pinter’s algebras is elementary and finitely axiomatizable. We obtain analogous results for infinite dimensions (replacing finite axiomatizability by finite schema axiomatizability). We show that the class of subdirect products of set algebras is a canonical variety that is locally finite only for finite dimensions, and has the superamalgamation property; the latter for all dimensions. However, the algebras we deal with are expansions of Pinter’s algebras with substitutions corresponding to transpositions. It is true that this makes a lot of the problems addressed harder, but this is an asset, not a liability. Furthermore, the results for Pinter’s algebras readily follow by just discarding the substitution operations corresponding to transpositions. Finally, we show that the multi-dimensional modal logic corresponding to finite dimensional algebras has an $NP$-complete satisfiability problem. [^1] Introduction ============ Suppose we have a class of algebras in front of us. The most pressing need is to try and classify it. Classifying is a kind of defining. Most mathematical classification is by axioms: either first order or, even better, equations. In algebraic logic the typical question is this. We are given a class of concrete set algebras that we know in advance is elementary or is a variety. Furthermore, such algebras consist of sets of sequences (usually with the same length, called the dimension) and the operations are set-theoretic, utilizing the form of elements as sets of sequences. Is there a [*simple*]{} elementary (equational) axiomatization of this class? A harder problem is: Is there a [*finite*]{} elementary (equational) axiomatization of this class? 
The prime examples of such operations, defined on the unit of the algebra, which is of the form $^nU$ ($n\geq 2$), are cylindrifiers and substitutions. For $i<n$ and $t, s\in {}^nU$, define the equivalence relation $s\equiv_i t$ iff $s(j)=t(j)$ for all $j\neq i$. Now fix $i<n$ and $\tau\in {}^nn$; then these operations are defined as follows $$c_iX=\{s\in {}^nU: \exists t\in X, t \equiv_i s\},$$ $$s_{\tau}X=\{s\in {}^nU: s\circ \tau\in X\}.$$ Both are unary operations on $\wp(^nU)$; the $c_i$ is called the $i$th cylindrifier, while the $s_{\tau}$ is called the substitution operation corresponding to the transformation $\tau$, or simply a substitution. For Boolean algebras this question is completely settled by Stone’s representation theorem. Every Boolean algebra is representable; equivalently, the class of Boolean algebras is finitely axiomatizable by a set of equations. This is equivalent to the completeness of propositional logic. When we approach the realm of first order logic, things tend to become much more complicated. The standard algebraisations of first order logic are cylindric algebras (where cylindrifiers are the prominent citizens) and polyadic algebras (where cylindrifiers and substitutions are the prominent citizens). Such algebras, or rather the abstract versions thereof, are defined by a finite set of equations that aim to capture algebraically the properties of cylindrifiers and substitutions (and diagonal elements if present in the signature). Let us concentrate on polyadic algebras of dimension $n$, where $n$ is a finite ordinal. A full set algebra is one whose unit is of the form $^nU$ and whose non-Boolean operations are cylindrifiers and substitutions. The class of representable algebras, defined as the class of subdirect products of full set algebras, is a discriminator variety that is not finitely axiomatizable for $n\geq 3$; thus the set of equations postulated by Halmos is not complete. 
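For a finite base these two operations can be computed directly; the following Python sketch (our own toy illustration, not part of the paper) spells them out and checks, for instance, that a substitution commutes with intersection:

```python
from itertools import product

def cyl(i, X, n, U):
    """The i-th cylindrifier c_i on P(U^n): free the i-th coordinate."""
    return {s for s in product(U, repeat=n)
            if any(all(t[j] == s[j] for j in range(n) if j != i)
                   for t in X)}

def subst(tau, X, n, U):
    """The substitution s_tau: {s in U^n : s o tau in X}, tau coded as a tuple."""
    return {s for s in product(U, repeat=n)
            if tuple(s[tau[j]] for j in range(n)) in X}

# Toy sanity checks on U = {0,1}, n = 2 (our own examples, not from the text):
U, n = (0, 1), 2
X = {(0, 1)}
assert cyl(0, X, n, U) == {(0, 1), (1, 1)}   # coordinate 0 is freed
swap = (1, 0)                                 # tau = the transposition [0,1]
assert subst(swap, X, n, U) == {(1, 0)}
# s_tau is a Boolean endomorphism, e.g. it commutes with intersection:
Y = {(0, 1), (1, 1)}
assert subst(swap, X & Y, n, U) == subst(swap, X, n, U) & subst(swap, Y, n, U)
```

The same functions work for any finite $n$ and $U$, which is all the later finite verifications need.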
Furthermore, when we also have diagonal elements, there is an inevitable degree of complexity in any potential universal axiomatization. There is another type of representation for polyadic algebras, namely [*complete*]{} representations. An algebra is completely representable if it has a representation that preserves arbitrary meets whenever they exist. For Boolean algebras the completely representable algebras are easily characterized; they are simply the atomic ones. In particular, this class is elementary and finitely axiomatizable: one just adds the first order sentence expressing atomicity. For cylindric and polyadic algebras, again, this problem becomes much more involved; this class for $n\geq 3$ is not even elementary. Strongly related to complete representations [@Tarek] is the notion of omitting types for the corresponding multi-dimensional modal logic. Let $W$ be a class of algebras (usually a variety, or at worst a quasi-variety) with a Boolean reduct, having the class $RW$ as the class of representable algebras, so that $RW\subseteq W$, and for $\B\in RW$, $\B$ has top element a set of sequences having the same length, say $n$ (in our case the dimension of the algebra), and the Boolean operations are interpreted as concrete intersections and complementation of $n$-ary relations. We say that $\L_W$, the corresponding multi-dimensional modal logic, has the omitting types theorem if whenever $\A\in W$ is countable, and $(X_i: i\in \omega)$ is a family of non-principal types, meaning that $\prod X_i=0$ for each $i\in \omega$, then there is a $\B\in RW$ with unit $V$, and an injective homomorphism $f:\A\to \wp(V)$ such that $\bigcap_{x\in X_i}f(x)=\emptyset$ for each $i\in \omega$. In this paper we study, among other things, complete representability for cylindrifier free reducts of polyadic algebras, as well as omitting types for the corresponding multi-dimensional modal logic. We answer a question of Hodkinson [@atomic] p. 
by showing that for various such reducts of polyadic algebras, atomic algebras might not be completely representable; however, they can be easily characterized by a finite simple set of first order formulas. Let us describe our algebras in a somewhat more general setting. Let $T$ be a submonoid of $^nn$ and $U$ be a non-empty set. A set $V\subseteq {}^nU$ is called $T$ closed if whenever $s\in V$ and $\tau\in T$, then $s\circ \tau\in V$. (For example, $T$ itself is $T$ closed.) If $V$ is $T$ closed then $\wp(V)$ denotes the set algebra $({\cal P}(V),\cap,\sim, s_{\tau})_{\tau\in T}$. $\wp(^nU)$ is called a full set algebra. Let $GT$ be a set of generators of $T$. One can obtain a variety $V_T$ of Boolean algebras with extra non-Boolean operators $s_{\tau}$, $\tau\in GT$, by translating a presentation of $T$, via the finite set of generators $GT$, to equations, and stipulating that the $s_{\tau}$’s are Boolean endomorphisms. It is known that every monoid, not necessarily finite, has a presentation. For finite monoids, the multiplication table provides one. Encoding a finite presentation in terms of a set of generators of $T$ into a finite set of equations $\Sigma$ enables one to define, for each $\tau\in T$, a substitution unary operation $s_{\tau}$ such that for any algebra $\A$ with $\A\models \Sigma$, $s_{\tau}$ is a Boolean endomorphism of $\A$ and for $\sigma, \tau\in T$, one has $s_{\sigma}^{\A}\circ s_{\tau}^{\A}(a)=s_{\sigma\circ \tau}^{\A}(a)$ for each $a\in A$. The translation of presentations to equations guarantees that if $\A\models \Sigma$ and $a\in \A$ is non zero, then for any Boolean ultrafilter $F$ of $\A$ containing $a$, the map $f:\A\to \wp(T)$ defined via $$x\mapsto \{\tau \in T: s_{\tau}x\in F\}$$ is a homomorphism such that $f(a)\neq 0$. Such a homomorphism determines a (finite) relativized representation, meaning that the unit of the set algebra is possibly only a proper subset of the square $^nn$. 
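The homomorphism $x\mapsto\{\tau\in T: s_{\tau}x\in F\}$ can be tested mechanically in a small finite case. In the Python sketch below (our own toy example, not from the paper) we take $n=2$, $T={}^22$, and a principal ultrafilter, so all the verifications are finite:

```python
from itertools import product

n = 2
U = range(n)                       # base n = {0, 1}, sequences live in ^n n
T = list(product(U, repeat=n))     # all maps n -> n, coded as tuples

def S(tau, X):
    """Substitution on P(^n n): S_tau(X) = {q : q o tau in X}."""
    return frozenset(q for q in product(U, repeat=n)
                     if tuple(q[tau[j]] for j in range(n)) in X)

# A non-zero element a of the full set algebra P(^2 2), and the principal
# ultrafilter F of all sets containing a chosen point of a.
a = frozenset({(1, 0)})
point = (1, 0)

def in_F(x):
    return point in x

def f(x):
    """The map x -> {tau in T : s_tau x in F} from the text."""
    return frozenset(tau for tau in T if in_F(S(tau, x)))

# f is a Boolean homomorphism into P(T) with f(a) != 0:
full = frozenset(product(U, repeat=n))
x, y = frozenset({(0, 0), (0, 1)}), frozenset({(0, 1), (1, 0)})
assert f(x & y) == f(x) & f(y)
assert f(full - x) == frozenset(T) - f(x)
assert f(a) != frozenset()
```

Of course the interesting cases are infinite; the point of the sketch is only to make the definition of $f$ concrete.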
Let $RT_n$ be the class of subdirect products of [*full*]{} set algebras; those set algebras whose units are squares (possibly with infinite base). One can show that $\Sigma$ above axiomatizes the variety generated by $RT_n$, but it is not obvious that $RT_n$ is closed under homomorphic images. Indeed, if $T$ is the monoid of all non-bijective maps (together with the identity), then $RT_n$ is only a quasi-variety. Such algebras are called Pinter’s algebras. Sagi [@sagiphd] studied the representation problem for such algebras. In his recent paper [@atomic], Hodkinson asks whether such atomic algebras are completely representable. In this paper we answer Hodkinson’s question completely; but we deal with the monoid $T={}^nn$, with transpositions and replacements as a set of generators; all our results apply to Pinter’s algebras. In particular, we show that atomic algebras are not necessarily completely representable, but that the class of completely representable algebras is far less complex than in the case when we have cylindrifiers, as with cylindric algebras. It turns out that this class is finitely axiomatizable in first order logic by a very simple set of first order sentences, expressing additivity of the extra non-Boolean operations, namely the substitutions. Taking the appropriate reduct, we answer Hodkinson’s question formulated for Pinter’s algebras. We also show that this variety is locally finite and has the superamalgamation property. All results except for local finiteness are proved to hold for infinite dimensions as well. We shall always deal with a class $K$ of Boolean algebras with operators; we denote its corresponding multi-dimensional modal logic by $\L_K$. Representability ================ Here we deal with algebras where the substitutions are indexed by transpositions and replacements, so that we are dealing with the full monoid $^nn$. 
A transposition that swaps $i$ and $j$ will be denoted by $[i,j]$, and the replacement that takes $i$ to $j$ and leaves everything else fixed will be denoted by $[i|j]$. The treatment closely resembles Sagi’s [@sagiphd], with one major difference: we prove that the class of subdirect products of full set algebras is a variety (this is not the case with Pinter’s algebras). Let $U$ be a set. *The full substitution set algebra with transpositions of dimension* $\alpha$ *with base* $U$ is the algebra $$\langle\mathcal{P}({}^\alpha U); \cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in\alpha},$$ where the $S^i_j$’s and $S_{ij}$’s are unary operations defined by $$S^i_{j}(X)=\{q\in {}^\alpha U:q\circ [i|j]\in X\},$$ and $$S_{ij}(X)=\{q\in {}^{\alpha}U: q\circ [i,j]\in X\}.$$ The class of *Substitution Set Algebras with Transpositions of dimension* $\alpha$ is defined as follows: $$SetSA_\alpha=\mathbf{S}\{\A:\A\text{ is a full substitution set algebra with transpositions}$$ $$\text{of dimension }\alpha \text{ with base }U,\text{ for some set }U\}.$$ The full set algebra $\wp(^{\alpha}U)$ can be viewed as the complex algebra of the atom structure or the modal frame $(^{\alpha}U, S_{ij})_{i,j\in \alpha}$, where each $S_{ij}$ is a binary accessibility relation such that for $s,t\in {}^{\alpha}U$, $(s,t)\in S_{ij}$ iff $s\circ [i,j]=t.$ When we consider arbitrary subsets $D$ of the square $^{\alpha}U$, then from the modal point of view we are restricting or relativizing the states or assignments to $D$. On the other hand, subalgebras of full set algebras can be viewed as [*general*]{} modal frames, which are $BAO$’s and ordinary frames rolled into one. In this context, if one wants to use traditional terminology from modal logic, this means that the assignments are [*not*]{} links between the possible worlds (states) of the model; they [*themselves*]{} are the possible worlds (states). 
The class of [*representable substitution set algebras with transpositions of dimension $\alpha$*]{} is defined to be $$RSA_\alpha=\mathbf{SP}SetSA_\alpha.$$ Let $U$ be a given set, and let $D\subseteq{}^\alpha U.$ We say that $D$ is *locally square* iff it satisfies the following condition: $$(\forall i\neq j\in\alpha)(\forall s\in{}^\alpha U)(s\in D\Longrightarrow s\circ [i|j]\mbox{ and }s\circ [i,j]\in D).$$ The class of *locally square set algebras* of dimension $\alpha$ is defined to be $$WSA_{\alpha}=\mathbf{SP}\{\langle\mathcal{P}(D); \cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in \alpha}: U\text{ \emph{is a set}}, D\subseteq{}^\alpha U\text{ \emph{is locally square}}\}.$$ Here the operations are relativized to $D$, namely $S^i_j(X)=\{q\in D:q\circ [i|j]\in X\}$ and $S_{ij}(X)=\{q\in D:q\circ [i,j]\in X\}$, and $\sim$ is complement w.r.t. $D$.\ If $D$ is a locally square set then the algebra $\wp(D)$ is defined to be $$\wp(D)=\langle\mathcal{P}(D);\cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in n}.$$ It is easy to show: \[relativization\] Let $U$ be a set and suppose $G\subseteq{}^n U$ is locally square. Let $\A=\langle\mathcal{P}({}^n U);\cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in n}$ and let $\mathcal{B}=\langle\mathcal{P}(G);\cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in n}$. Then the following function $h$ is a homomorphism: $$h:\A\longrightarrow\mathcal{B},\quad h(x)=x\cap G.$$ Straightforward from the definitions. For any natural number $k\leq n$ the algebra $\A_{nk}$ is defined to be $$\A_{nk}=\langle\mathcal{P}({}^nk);\cap,\sim,S^i_j,S_{ij}\rangle_{i\neq j\in n}.$$ So $\A_{nk}\in SetSA_n$. $RSA_n=\mathbf{SP}\{\A_{nk}:k\leq n\}.$ The proof is exactly like that in [@sagiphd] for Pinter’s algebras; however, we include it for the sake of completeness. 
Of course, $\{\A_{nk}:k\leq n\}\subseteq RSA_n,$ and since, by definition, $RSA_n$ is closed under the formation of subalgebras and direct products, $RSA_n\supseteq\mathbf{SP}\{\A_{nk}:k\leq n\}.$ To prove the other, slightly more difficult, inclusion, it is enough to show $SetSA_n\subseteq \mathbf{SP}\{\A_{nk}:k\leq n\}.$ Let $\A\in SetSA_n$ and suppose that $U$ is the base of $\A.$ If $U$ is empty, then $\A$ has one element, and one can easily show $\A\cong\A_{n0}.$ Otherwise for every $0^\A\neq a\in A$ we can construct a homomorphism $h_a$ such that $h_a(a)\neq 0$ as follows. If $a\neq 0^\A$ then there is a sequence $q\in a.$ Let $U_0^a=range(q)$. Clearly, $^nU_0^a$ is locally square, and therefore by theorem \[relativization\] relativizing by $^nU_0^a$ is a homomorphism to $\A_{nk_a}$ (where $k_a:=|range(q)|\leq n$). Let $h_a$ be this homomorphism. Since $q\in {}^nU_0^a$ we have $h_a(a)\neq0^{\A_{nk_a}}.$ One readily concludes that $\A\in\mathbf{SP}\{\A_{nk}:k\leq n\}$ as desired. Axiomatizing $RSA_n$. --------------------- We know that the variety generated by $RSA_n$ is finitely axiomatizable, since it is generated by finitely many finite algebras and, having a Boolean reduct, it is congruence distributive; this follows from a famous theorem of Baker. In this section we show that $RSA_n$ is itself a variety, by providing a particular finite set $\Sigma_n$ of equations such that $\mathbf{Mod}(\Sigma_n)=RSA_n$. We consider the similarity type $\{., -, s_i^j, s_{ij}\}$, where $.$ is the Boolean meet, $-$ is complementation and, for $i,j\in n$, $s_i^j$ and $s_{ij}$ are unary operations designating substitutions. We take meet and complementation as the basic operations, and $a+b$ abbreviates $-(-a.-b).$ Our choice of equations is not haphazard; we encode a presentation of the semigroup $^nn$ into the equations, and further stipulate that the substitution operations are Boolean endomorphisms. 
We chose the presentation given in [@semigroup]. \[ax2\] For all natural $n>1$, let $\Sigma_n$ be the following set of equations. For distinct $i,j,k,l$: 1. The Boolean axioms 2. $s_{ij}$ preserves joins and meets 3. $s_{kl}s^j_is_{kl}x=s^j_ix$ 4. $s_{jk}s^j_is_{jk}x=s^k_ix$ 5. $s_{ki}s^j_is_{ki}x=s^j_kx$ 6. $s_{ij}s^j_is_{ij}x=s^i_jx$ 7. $s^j_is^k_lx=s^k_ls^j_ix$ 8. $s^j_is^k_ix=s^k_is^j_ix=s^j_is^k_jx$ 9. $s^j_is^i_kx=s^j_ks_{ij}x$ 10. $s^j_is^j_kx=s^j_kx$ 11. $s^j_is^j_ix=s^j_ix$ 12. $s^j_is^i_jx=s^j_ix$ 13. $s^j_is_{ij}x=s_{ij}x$ Let $SA_n$ be the abstractly defined class $\mathbf{Mod}(\Sigma_n)$. In the above axiomatization it is stipulated that $s_{ij}$ respects meet and join. From this it can easily be inferred that $s_{ij}$ respects $-$, so that it is in fact a Boolean endomorphism. Indeed, if $x=-y$, then $x+y=1$ and $x.y=0$, hence $s_{ij}(x+y)=s_{ij}x+s_{ij}y=1$ and $s_{ij}(x.y)=s_{ij}x.s_{ij}y=0$, hence $s_{ij}x=-s_{ij}y$. We chose not to involve negation in the substitution axioms, to keep them strictly positive. Note that different presentations of $^nn$ give rise to different axiomatizations, but of course they are all definitionally equivalent. Here we are following the conventions of [@HMT2] by distinguishing in notation between operations defined in abstract algebras and those defined in concrete set algebras. For example, for $\A\in {\bf Mod}(\Sigma_n)$, $s_{ij}$ denotes the $i, j$ substitution operator, while in set algebras we denote the (interpretation of this) operation by capital $S_{ij}$; similarly for $s_i^j$. This convention will be followed for all algebras considered in this paper without any further notice. (Notice that the Boolean operations are also distinguished notationally.) 
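The schemas above can be machine-checked against a small full set algebra. The Python sketch below (our own illustration, not from the paper) reads $s^j_i$ as the substitution along the replacement $[j|i]$ taking $j$ to $i$, and $s_{ij}$ as the one along the transposition $[i,j]$; under this convention schemas 4 and 9 hold for every subset of ${}^3 2$:

```python
from itertools import combinations, permutations, product

n, U = 3, (0, 1)
SEQS = list(product(U, repeat=n))

def S(tau, X):
    """Substitution S_tau(X) = {q : q o tau in X} on P(U^n)."""
    return frozenset(q for q in SEQS
                     if tuple(q[tau[j]] for j in range(n)) in X)

def repl(i, j):
    """The replacement [i|j], taking i to j (our assumed convention)."""
    return tuple(j if m == i else m for m in range(n))

def transp(i, j):
    """The transposition [i,j], swapping i and j."""
    return tuple(j if m == i else (i if m == j else m) for m in range(n))

all_subsets = [frozenset(c) for r in range(len(SEQS) + 1)
               for c in combinations(SEQS, r)]
for i, j, k in permutations(range(n), 3):
    for X in all_subsets:
        # schema 9: s^j_i s^i_k x = s^j_k s_{ij} x
        assert S(repl(j, i), S(repl(i, k), X)) == \
               S(repl(j, k), S(transp(i, j), X))
        # schema 4: s_{jk} s^j_i s_{jk} x = s^k_i x
        assert S(transp(j, k), S(repl(j, i), S(transp(j, k), X))) == \
               S(repl(k, i), X)
```

The remaining schemas (those not needing a fourth index $l$) can be checked the same way; this exhausts all $2^8$ subsets, so it is a genuine verification for this particular algebra, not a spot check.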
To prove our main representation theorem, we need a few preparations. Let $R(U)=\{s_{ij}:i\neq j\in U\}\cup\{s^i_j:i\neq j\in U\}$ and let $\hat{}:R(U)^*\longrightarrow {}^UU$ be defined inductively as follows: it maps the empty string to $Id_U$ and, for any string $t$, $$(s_{ij}t)^{\hat{}}=[i,j]\circ t^{\hat{}}\quad\text{and}\quad (s^i_jt)^{\hat{}}=[i|j]\circ t^{\hat{}}.$$ For all $n\in\omega$ the set of (all instances of the) axiom-schemas 3 to 13 of Def.\[ax2\] is a presentation of the semigroup ${}^nn$ via the generators $R(n)$ (see [@semigroup]). That is, for all $t_1,t_2\in R(n)^*$ we have $$\mbox{3 to 13 of Def.\ref{ax2} }\vdash t_1=t_2\text{ iff }t_1^{\hat{}}=t_2^{\hat{}}.$$ Here $\vdash$ denotes derivability using Birkhoff’s calculus for equational logic. This is clear because the mentioned schemas correspond exactly to the set of relations governing the generators of ${}^nn$ (see [@semigroup]). For every $\xi\in {}^nn$ we associate a sequence $s_\xi\in R(U)^*$ (as we did before for $S_n$, now using $^nn$ instead) such that $s_\xi^{\hat{}}=\xi.$ Such an $s_\xi$ exists, since $R(n)$ generates ${}^nn.$ As before, we have: \[lemma\] Let $\A$ be a $BAO$ of the similarity type of $RSA_n$. Suppose $G\subseteq {}^nn$ is a locally square set, and $\langle\mathcal{F}_\xi:\xi\in G\rangle$ is a system of ultrafilters of $\A$ such that for all $\xi\in G,\;i\neq j\in n$ and $a\in\A,$ the following conditions hold: $${S_{ij}}^\A(a)\in\mathcal{F}_\xi\Leftrightarrow a\in \mathcal{F}_{\xi\circ[i,j]}\quad\quad (*),\text{ and}$$ $${S^i_j}^\A(a)\in\mathcal{F}_\xi\Leftrightarrow a\in \mathcal{F}_{\xi\circ[i|j]}\quad\quad (**)$$ Then the following function $h:\A\longrightarrow\wp(G)$ is a homomorphism: $$h(a)=\{\xi\in G:a\in \mathcal{F}_\xi\}.$$ Now we show that, unlike replacement algebras, $RSA_n$ is a variety. \[variety\] For any finite $n\geq 2$, $RSA_n=SA_n$. Clearly, $RSA_n\subseteq SA_n$, because $SetSA_n\models\Sigma_n$ (checking this is a routine computation). Conversely, $RSA_n\supseteq SA_n$. 
To see this, let $\A\in SA_n$ be arbitrary. We may suppose that $\A$ has at least two elements; otherwise it is easy to represent $\A$. For every $0^\A\neq a\in A$ we will construct a homomorphism $h_a$ into $\A_{nn}$ such that $h_a(a)\neq 0^{\A_{nn}}$. Let $0^\A\neq a\in A$ be an arbitrary element. Let $\mathcal{F}$ be an ultrafilter over $\A$ containing $a$, and for every $\xi\in {}^nn$, let $\mathcal{F}_\xi=\{z\in A: s^\A_\xi(z)\in\mathcal{F}\}$ (which is an ultrafilter). (Here we use that all maps in $^nn$ are available, which we could not do before.) Then $h:\A\longrightarrow\A_{nn}$ defined by $h(z)=\{\xi\in{}^nn:z\in\mathcal{F}_{\xi}\}$ is a homomorphism by \[lemma\], as $(*)$ and $(**)$ hold. Simple algebras are finite. Finiteness follows from the previous theorem, since if $\A$ is simple, then the map $h$ defined above is injective. However, not every finite algebra is simple. Indeed, if $V\subseteq {}^nn$ and $s\in V$ is constant, then $\wp(\{s\})$ is a homomorphic image of $\wp(V)$. So if $|V|>2$, then this homomorphism will have a non-trivial kernel. Let $Sir(SA_n)$ denote the class of subdirectly irreducible algebras, and $Sim(SA_n)$ the class of simple algebras. Characterize the simple and subdirectly irreducible elements: is $SA_n$ a discriminator variety? \[super\] $SA_n$ is locally finite, and has the superamalgamation property. For the first part, let $\A\in SA_n$ be generated by $X$ with $|X|=m$. We claim that $|\A|\leq 2^{2^{m\cdot n^n}}.$ Let $Y=\{s_{\tau}x: x\in X, \tau\in {}^nn\}$. Then $\A=\Sg^{\Bl\A}Y$. This follows from the fact that the substitutions are Boolean endomorphisms. Since $|Y|\leq m\cdot n^n,$ the conclusion follows. For the second part, first a piece of notation. For an algebra $\A$ and $X\subseteq A$, $fl^{\Bl\A}X$ denotes the Boolean filter generated by $X$. We show that the following strong form of interpolation holds for the free algebras: Let $X$ be a non-empty set. 
Let $\A=\Fr_XSA_n$, and let $X_1, X_2\subseteq X$ be such that $X_1\cup X_2=X$. Assume that $a\in \Sg^{\A}X_1$ and $c\in \Sg^{\A}X_2$ are such that $a\leq c$. Then there exists an interpolant $b\in \Sg^{\A}(X_1\cap X_2)$ such that $a\leq b\leq c$. Assume that $a\leq c$, but there is no such $b$. We will reach a contradiction. Let $$H_1=fl^{\Bl\Sg^{\A}X_1}\{a\}=\{x: x\geq a\},$$ $$H_2=fl^{\Bl\Sg^{\A}X_2}\{-c\}=\{x: x\geq -c\},$$ and $$H=fl^{\Bl\Sg^{\A}(X_1\cap X_2)}[(H_1\cap \Sg^{\A}(X_1\cap X_2))\cup (H_2\cap \Sg^{\A}(X_1\cap X_2))].$$ We show that $H$ is a proper filter of $\Sg^{\A}(X_1\cap X_2)$. For this, it suffices to show that for any $b_0,b_1\in \Sg^{\A}(X_1\cap X_2)$, for any $x_1\in H_1$ and $x_2\in H_2$, if $a.x_1\leq b_0$ and $-c.x_2\leq b_1$, then $b_0.b_1\neq 0$. Now $a.x_1=a$ and $-c.x_2=-c$. So assume, to the contrary, that $b_0.b_1=0$. Then $a\leq b_0$ and $-c\leq b_1$, and so $a\leq b_0\leq-b_1\leq c$, which is impossible because we assumed that there is no interpolant. Hence $H$ is a proper filter. Let $H^*$ be an ultrafilter of $\Sg^{\A}(X_1\cap X_2)$ containing $H$, and let $F$ be an ultrafilter of $\Sg^{\A}X_1$ and $G$ an ultrafilter of $\Sg^{\A}X_2$ such that $$F\cap \Sg^{\A}(X_1\cap X_2)=H^*=G\cap \Sg^{\A}(X_1\cap X_2).$$ Such ultrafilters exist. For simplicity of notation, let $\A_1=\Sg^{\A}(X_1)$ and $\A_2=\Sg^{\A}(X_2).$ Define $h_1:\A_1\to \wp({}^nn)$ by $$h_1(x)=\{\eta\in {}^nn: s_{\eta}x\in F\},$$ and $h_2:\A_2\to \wp({}^nn)$ by $$h_2(x)=\{\eta\in {}^nn: s_{\eta}x\in G\}.$$ Then $h_1, h_2$ are homomorphisms, and they agree on $\Sg^{\A}(X_1\cap X_2).$ Indeed, let $x\in \Sg^{\A}(X_1\cap X_2)$. Then $\eta\in h_1(x)$ iff $s_{\eta}x\in F$ iff $s_{\eta}x\in F\cap \Sg^{\A}(X_1\cap X_2)=H^*=G\cap \Sg^{\A}(X_1\cap X_2)$ iff $s_{\eta}x\in G$ iff $\eta\in h_2(x)$. Thus $h_1\cup h_2$ is a function. By freeness there is an $h:\A\to \wp({}^nn)$ extending $h_1$ and $h_2$. Now $Id\in h(a)\cap h(-c)$, so $h(a)\cap h(-c)\neq \emptyset$, which contradicts $a\leq c$. 
The result now follows from [@Mak], stating that the superamalgamation property for a variety of $BAO$s follows from the interpolation property in the free algebras. Complete representability for $SA_n$ ==================================== For $SA_n$, the problem of complete representations is delicate, since the substitutions corresponding to replacements may not be completely additive, and a complete representation, as we shall see, forces the complete additivity of the so represented algebra. In fact, as we discover, they are not. We first show that representations may not preserve arbitrary joins, from which we infer that the omitting types theorem fails for the corresponding multi-dimensional modal logic. Throughout this section $n$ is a natural number $\geq 2$. All theorems in this subsection, with the exception of theorem \[additive\], apply to Pinter’s algebras, by simply discarding the substitution operations corresponding to transpositions and modifying the proofs accordingly. \[counter\] There exists a countable $\A\in SA_n$ and $X\subseteq A$, such that $\prod X=0$, but there is no representation $f:\A\to \wp(V)$ such that $\bigcap_{x\in X}f(x)=\emptyset$. We give the example for $n=2$, and then we show how it extends to higher dimensions. It suffices to show that there is an algebra $\A$ and a set $S\subseteq A$ such that $s_0^1$ does not preserve $\sum S$. For if $\A$ had a representation as stated in the theorem, this would mean that $s_0^1$ is completely additive in $\A$. For the latter statement, it clearly suffices to show that if $X\subseteq A$ with $\sum X=1$, and there exists an injection $f:\A\to \wp(V)$ such that $\bigcup_{x\in X}f(x)=V$, then for any $\tau\in {}^nn$ we have $\sum s_{\tau}X=1$. So fix $\tau \in {}^nn$ and assume that this does not happen. Then there is a $y\in \A$, $y<1$, with $s_{\tau}x\leq y$ for all $x\in X$. (Notice that we are not imposing any conditions on the cardinality of $\A$ in this part of the proof.) 
Now $$1=s_{\tau}(\bigcup_{x\in X} f(x))=\bigcup_{x\in X} s_{\tau}f(x)=\bigcup_{x\in X} f(s_{\tau}x).$$ (Here we are using that $s_{\tau}$ distributes over union.) Let $z\in X$. Then $s_{\tau}z\leq y<1$, and so $f(s_{\tau}z)\leq f(y)$; since $f$ is injective, it cannot be the case that $f(y)=1$. Hence we have $$1=\bigcup_{x\in X} f(s_{\tau}x)\leq f(y) <1,$$ which is a contradiction, and we are done. Now we turn to constructing the required counterexample, which is an easy adaptation of a construction due to Andréka et al. in [@AGMNS] to our present situation. We give the detailed construction for the reader’s convenience. Let $\B$ be an atomless Boolean set algebra with unit $U$ that has the following property: for any distinct $u,v\in U$, there is $X\in B$ such that $u\in X$ and $v\in {}\sim X$. For example, $\B$ can be taken to be the Stone representation of some atomless Boolean algebra. The cardinality of our constructed algebra will be the same as $|B|$. Let $$R=\{X\times Y: X,Y\in \B\}$$ and $$A=\{\bigcup S: S\subseteq R,\ |S|<\omega\}.$$ Then indeed we have $|R|=|A|=|B|$. We claim that $\A$ is a subalgebra of $\wp(^2U)$. Closure under union is obvious. To check intersections, we have: $$(X_1\times Y_1)\cap (X_2\times Y_2)=(X_1\cap X_2) \times (Y_1\cap Y_2).$$ Hence, if $S_1$ and $S_2$ are finite subsets of $R$, then $$S_3=\{W\cap Z: W\in S_1, Z\in S_2\}$$ is also a finite subset of $R$ and we have $$(\bigcup S_1)\cap (\bigcup S_2)=\bigcup S_3.$$ For complementation: $$\sim (X\times Y)=[\sim X\times U]\cup [U\times \sim Y].$$ If $S\subseteq R$ is finite, then $$\sim \bigcup S=\bigcap \{\sim Z: Z\in S\}.$$ Since each $\sim Z$ is in $A$, and $A$ is closed under intersections, we conclude that $\sim \bigcup S$ is in $A$.
We now show that it is closed under substitutions: $$S_0^1(X\times Y)=(X\cap Y)\times U, \qquad S_1^0(X\times Y)=U\times (X\cap Y),$$ $$S_{01}(X\times Y)=Y\times X.$$ Let $$D_{01}=\{s\in U\times U: s_0=s_1\}.$$ We claim that the only subset of $D_{01}$ in $\A$ is the empty set. Towards proving this claim, assume that $X\times Y$ is a non-empty subset of $D_{01}$. Then for any $u\in X$ and $v\in Y$ we have $u=v$. Thus $X=Y=\{u\}$ for some $u\in U$. But then $X$ and $Y$ cannot be in $\B$, since the latter is atomless and $X$ and $Y$ are atoms. Let $$S=\{X\times \sim X: X\in B\}.$$ Then $$\bigcup S={}\sim D_{01}.$$ Now we show that $$\sum{}^{\A}S=U\times U.$$ Suppose that $Z$ is an upper bound different from $U\times U$. Then $\bigcup S\subseteq Z$, that is, $\sim D_{01}\subseteq Z$, so $\sim Z\subseteq D_{01}$; by the claim above $\sim Z=\emptyset$, hence $Z=U\times U$, which is a contradiction. Now $$S_{0}^1(U\times U) =(U\cap U)\times U=U\times U.$$ But $$S_0^1(X\times \sim X)=(X\cap \sim X)\times U=\emptyset$$ for every $X\in B$. Thus $$S_0^1(\sum S)=U\times U$$ and $$\sum \{S_{0}^1(Z): Z\in S\}=\emptyset.$$ For $n>2$, one takes $R=\{X_1\times\ldots\times X_n: X_i\in \B\}$ and the definition of $\A$ is the same. Then, in this case, one takes $S=\{X\times \sim X\times U\times\ldots\times U: X\in B\}$. The proof survives verbatim. By taking $\B$ countable, $\A$ can be taken countable, and so it violates the omitting types theorem. \[additive\] Let $\A$ be in $SA_n$. Then $\A$ is completely additive iff $s_0^1$ is completely additive; in particular, if $\A$ is atomic and $s_0^1$ is completely additive, then $\A$ is completely representable. It suffices to show that for $i, j\in n$, $i\neq j$, the operation $s_i^j$ is completely additive. But $[i|j]= [0|1]\circ [i,j]$, and $s_{[i,j]}$ is completely additive. For replacement algebras we do not have transpositions, so the above proof does not work. So in principle we could have an algebra such that $s_0^1$ is completely additive in $\A$, while $s_1^0$ is not.
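On the concrete algebra $\wp({}^2U)$ the substitutions act by $S_\tau Z=\{s: s\circ\tau\in Z\}$, and the displayed identities can be confirmed by brute force on a small finite base. In the Python sketch below (illustrative only; the coding of $s_0^1$, $s_1^0$ and $s_{01}$ as the maps $(0,0)$, $(1,1)$ and $(1,0)$ is our assumption, chosen so as to match the displayed identities) we also check the distributivity of $S_\tau$ over unions used in the first part of the proof.

```python
from itertools import product

U = set(range(4))                  # illustrative finite base
unit = set(product(U, repeat=2))

def rect(X, Y):
    return {(u, v) for u in X for v in Y}

def subst(tau, Z):
    """S_tau(Z) = {s in ^2U : s o tau in Z}; tau is coded as (tau(0), tau(1))."""
    return {s for s in unit if (s[tau[0]], s[tau[1]]) in Z}

rep01, rep10, swap = (0, 0), (1, 1), (1, 0)   # our codings of s_0^1, s_1^0, s_{01}

X, Y = {0, 1}, {1, 2}
assert subst(rep01, rect(X, Y)) == rect(X & Y, U)   # S_0^1(X x Y) = (X meet Y) x U
assert subst(rep10, rect(X, Y)) == rect(U, X & Y)   # S_1^0(X x Y) = U x (X meet Y)
assert subst(swap,  rect(X, Y)) == rect(Y, X)       # S_{01}(X x Y) = Y x X

# S_tau distributes over union, as used in the complete-additivity argument
A, B = rect({0}, {1, 2}), rect({2, 3}, {0})
for tau in (rep01, rep10, swap):
    assert subst(tau, A | B) == subst(tau, A) | subst(tau, B)
```

Of course, the counterexample itself concerns an infinite supremum over an atomless $\B$, which no finite check can reach; the sketch only confirms the algebraic identities the argument rests on.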
Find a Pinter’s algebra for which $s_0^1$ is completely additive but $s_1^0$ is not. However, as for $SA_n$, we also have: For every $n\geq 2$ and all distinct $i,j\in n$, there is an algebra $\B\in SA_n$ such that $s_i^j$ is not completely additive. One modifies the above example by letting $X$ occur in the $i$th place of the product, and $\sim X$ in the $j$th place. Now we turn to the notion of complete representability, which is not remote from that of minimal completions [@complete]. [^2] Let $\A\in SA_n$ and $b\in A$. Then $\Rl_{b}\A=\{x\in A: x\leq b\}$, with operations relativized to $b$. \[atomic\] Let $\A\in SA_n$ be atomic with set of atoms $X$, and assume that $\sum_{x\in X} s_{\tau}x=b$ for all $\tau\in {}^nn$. Then $\Rl_{b}\A$ is completely representable. In particular, if $\sum_{x\in X}s_{\tau}x=1$, then $\A$ is completely representable. Clearly $\B=\Rl_{b}\A$ is atomic. For all $a\neq 0$, $a\leq b$, find an ultrafilter $F$ that contains $a$ and lies outside the nowhere dense sets $S\sim \bigcup_{x\in X} N_{s_{\tau}x}$, $\tau\in {}^nn$. This is possible since $\B$ is atomic, so one just takes the ultrafilter generated by an atom below $a$. Define for each such $F$ and such $a$, $rep_{F,a}(x)=\{\tau \in {}^nn: s_{\tau}x\in F\}$; for each such $a$ let $V_a={}^nn$, and then set $g:\B\to \prod_{a\ne 0} \wp(V_a)$ by $g(x)=(rep_{F,a}(x): a\neq 0)$. Since $SA_n$ can be axiomatized by Sahlqvist equations, it is closed under taking canonical extensions. The next theorem says that canonical extensions have complete (square) representations. The argument used is borrowed from Hirsch and Hodkinson [@step]; it is a first order model-theoretic view of representability, using $\omega$-saturated models. A model is $\omega$-saturated if every type that is finitely realized in it is realized. Every countable consistent theory has an $\omega$-saturated model. \[canonical\] Let $\A\in SA_n$. Then $\A^+$ is completely representable on a square unit.
For a given $\A\in SA_n$, we define a first order language $L(\A)$, which is purely relational; it has one $n$-ary relation symbol for each element of $\A$. Define an $L(\A)$ theory $T(\A)$ as follows: for all $R,S,T\in \A$ and $\tau\in S_n$: $$\sigma_{\lor}(R,S,T)=[R(\bar{x})\longleftrightarrow S(\bar{x})\lor T(\bar{x})], \\ R=S\lor T$$ $$\sigma_{\neg}(R,S)=[1(\bar{x})\to (R(\bar{x})\longleftrightarrow \neg S(\bar{x}))], \\ R=\neg S$$ $$\sigma _{\tau}(R,S)=R(\bar{x})\longleftrightarrow s_{\tau}S(\bar{x}), \\R=s_{\tau}S.$$ $$\sigma_{\neq 0}(R)=\exists \bar{x}R(\bar{x}).$$ Since $\A$ has a representation, $T(\A)$ is a consistent first order theory. Let $M$ be an $\omega$-saturated model of $T({\A})$. Let $1^{M}={}^nM$. Then for each $\bar{x}\in 1^{M}$, the set $f_{\bar{x}}=\{a\in \A: M\models a(\bar{x})\}$ is an ultrafilter of $\A$. Define $h:\A^+\to \wp ({}^nM)$ via $$S\mapsto \{\bar{x}\in 1^M: f_{\bar{x}}\in S\}.$$ Here $S$, an element of $\A^+$, is a set of ultrafilters of $\A$. Clearly, $h(0)=h(\emptyset)=\emptyset$. $h$ respects complementation: for $\bar{x}\in 1^M$ and $S\in \A^+$, $\bar{x}\notin h(S)$ iff $f_{\bar{x}}\notin S$ iff $f_{\bar{x}}\in -S$ iff $\bar{x}\in h(-S).$ It is also straightforward to check that $h$ preserves arbitrary unions. Indeed, we have $\bar{x}\in h(\bigcup S_i)$ iff $f_{\bar{x}}\in \bigcup S_i$ iff $f_{\bar{x}}\in S_i$ for some $i$ iff $\bar{x}\in h(S_i)$ for some $i$. We now check that $h$ is injective. Here is where we use saturation. Let $\mu$ be an ultrafilter in $\A$; we show that $h(\{\mu\})\neq \emptyset$. Take $p(\bar{x})=\{a(\bar{x}): a\in \mu\}$. Then this type is finitely satisfiable in $M$. For if $\{a_0(\bar{x}),\ldots, a_{n-1}(\bar{x})\}$ is an arbitrary finite subset of $p(\bar{x})$, then $a=a_0. a_1\cdots a_{n-1}\in \mu$, so $a>0$. By the axiom $\sigma_{\neq 0}(a)$ in $T(\A)$, we have $M\models \exists\bar{x}a(\bar{x})$.
Since $a\leq a_i$ for each $i<n$, we obtain, using the axiom $\sigma_{\lor}(a_i, a, a_i)$ of $T(\A)$, that $M\models \exists \bar{x}\bigwedge_{i<n}a_i(\bar{x})$, showing that $p(\bar{x})$ is finitely satisfiable, as required. Hence, by $\omega$-saturation, $p$ is realized in $M$ by some $n$-tuple $\bar{x}$. Now $\mu=f_{\bar{x}}$. So $\bar{x}\in h(\{\mu\})$, and we have proved that $h$ is an injection. Preservation of the substitution operations is straightforward. For $\A\in SA_n$, the following two conditions are equivalent: (1) There exists a locally square set $V$ and a complete representation $f:\A\to \wp(V)$. (2) For all non-zero $a\in A$, there exists a homomorphism $f:\A\to \wp(^nn)$ such that $f(a)\neq 0$ and $\bigcup_{x\in \At\A} f(x)={}^nn$. [Proof]{} Having dealt with the other implication before, we prove that (1) implies (2). Let there be given a complete representation $g:\A\to \wp(V)$. Then $\wp(V)\subseteq \prod_{i\in I} \A_i$ for some set $I$, where $\A_i=\wp{}(^nn)$. Assume that $a$ is non-zero; then $g(a)$ is non-zero, hence $g(a)_i$ is non-zero for some $i$. Let $\pi_j$ be the $j$th projection $\pi_j:\prod \A_i\to \A_j$, $\pi_j[(a_i: i\in I)]=a_j$. Define $f:\A\to \A_i$ by $f=\pi_i\circ g.$ Then clearly $f$ is as required. The following theorem is a converse to \[atomic\]. \[converse\] Assume that $\A\in SA_n$ is completely representable. Let $f:\A\to \wp(^nn)$ be a non-zero homomorphism that is a complete representation. Then $\sum_{x\in \At\A} s_{\tau}x=1$ for every $\tau\in {}^nn$. Like the first part of the proof of theorem \[counter\]. Adapting another example in [@AGMNS], constructed for $2$-dimensional quasi-polyadic algebras, we show that atomicity and complete representability do not coincide for $SA_n$. Because we are lucky enough not to have cylindrifiers, the construction works for all $n\geq 2$, and even for infinite dimensions, as we shall see.
Including the details here is not a luxury: we have to do so because we will generalize the example to all higher dimensions. \[counter2\] There exists an atomic complete algebra in $SA_n$ ($2\leq n<\omega$) that is not completely representable. Furthermore, dropping the condition of completeness, the algebra can be atomic and countable. It is enough, in view of the previous theorem, to construct an atomic algebra such that $\sum_{x\in \At\A} s_0^1x\neq 1$. In what follows we produce such an algebra. (This algebra will be uncountable: being infinite and complete, it cannot be countable. In particular, it cannot be used to violate the omitting types theorem; the most it can say is that the omitting types theorem fails for uncountable languages, which is not too much of a surprise.) Let $\mathbb{Z}^+$ denote the set of positive integers. Let $U$ be an infinite set. Let $Q_n$, $n\in \omega$, be a family of $n$-ary relations that form a partition of $^nU$ such that $Q_0=D_{01}=\{s\in {}^nU: s_0=s_1\}$. Assume also that each $Q_n$ is symmetric: for any $i,j\in n$, $S_{ij}Q_n=Q_n$. For example, one can take $U=\omega$ and, for $n\geq 1$, set $$Q_n=\{s\in {}^n\omega: s_0\neq s_1\text { and }\sum s_i=n\}.$$ Now fix a non-principal ultrafilter $F$ on $\mathcal{P}(\mathbb{Z}^+)$. For each $X\subseteq \mathbb{Z}^+$, define $$R_X = \begin{cases} \bigcup \{Q_k: k\in X\} & \text { if }X\notin F, \\ \bigcup \{Q_k: k\in X\cup \{0\}\} & \text { if } X\in F \end{cases}$$ Let $$\A=\{R_X: X\subseteq \mathbb{Z}^+\}.$$ Notice that $\A$ is uncountable. Then $\A$ is an atomic set algebra with unit $R_{\mathbb{Z}^+}$, and its atoms are $R_{\{k\}}=Q_k$ for $k\in \mathbb{Z}^+$. (Since $F$ is non-principal, $\{k\}\notin F$ for every $k$.) We check that $\A$ is indeed closed under the operations. Let $X, Y$ be subsets of $\mathbb{Z}^+$. If either $X$ or $Y$ is in $F$, then so is $X\cup Y$, because $F$ is a filter.
Hence $$R_X\cup R_Y=\bigcup\{Q_k: k\in X\}\cup\bigcup \{Q_k: k\in Y\}\cup Q_0=R_{X\cup Y}.$$ If neither $X$ nor $Y$ is in $F$, then $X\cup Y$ is not in $F$, because $F$ is an ultrafilter, and $$R_X\cup R_Y=\bigcup\{Q_k: k\in X\}\cup\bigcup \{Q_k: k\in Y\}=R_{X\cup Y}.$$ Thus $A$ is closed under finite unions. Now suppose that $X$ is the complement of $Y$ in $\mathbb{Z}^+$. Since $F$ is an ultrafilter, exactly one of them, say $X$, is in $F$. Hence $$\sim R_X=\sim{}\bigcup \{Q_k: k\in X\cup \{0\}\}=\bigcup\{Q_k: k\in Y\}=R_Y,$$ so that $\A$ is closed under complementation (w.r.t. $R_{\mathbb{Z}^+}$). We check substitutions. Transpositions are clear, so we check only replacements. It is not too hard to show that $$S_0^1(R_X)= \begin{cases} \emptyset & \text { if }X\notin F, \\ R_{\mathbb{Z}^+} & \text { if } X\in F \end{cases}$$ Now $$\sum \{S_0^1(R_{\{k\}}): k\in \mathbb{Z}^+\}=\emptyset$$ and $$S_0^1(R_{\mathbb{Z}^+})=R_{\mathbb{Z}^+},$$ while $$\sum \{R_{\{k\}}: k\in \mathbb{Z}^+\}=R_{\mathbb{Z}^+};$$ the supremum of the atoms in $\A$ is the unit, even though $\bigcup \{Q_k:k\in \mathbb{Z}^+\}$ omits $Q_0$. Thus $$S_0^1(\sum\{R_{\{k\}}: k\in \mathbb{Z}^+\})\neq \sum \{S_0^1(R_{\{k\}}): k\in \mathbb{Z}^+\}.$$ For the completeness part, we refer to [@AGMNS]. The countable algebra required is that generated by the countably many atoms. Our next theorem gives a plethora of algebras that are not completely representable. Any algebra which shares the atom structure of the $\A$ constructed above cannot have a complete representation. Formally: Let $\A$ be as in the previous example. Let $\B$ be an atomic algebra in $SA_n$ such that $\At\A\cong \At\B$. Then $\B$ is not completely representable. Let such a $\B$ be given. Let $\psi:\At\A\to \At\B$ be an isomorphism of the atom structures (viewed as first order structures). Assume for contradiction that $\B$ is completely representable, via $f$ say; $f:\B\to \wp(V)$ is an injective homomorphism such that $\bigcup_{x\in \At\B}f(x)=V$. Define $g:\A\to \wp(V)$ by $g(a)=\bigcup_{x\in \At\A, x\leq a} f(\psi(x))$.
Then it can be easily checked that $g$ establishes a complete representation of $\A$. There is a widespread, almost permanently established belief that, like cylindric algebras, any atomic [*polyadic algebra of dimension $2$*]{} is completely representable. This is wrong. The above example shows that this is not the case: if we impose the additional condition that each $Q_n$ has $U$ as its domain and range, then the set algebra constructed above becomes closed under the first two cylindrifiers, and by the same reasoning as above it [*cannot*]{} be completely representable. However, this condition cannot be imposed for higher dimensions, and indeed for $n\geq 3$ the class of completely representable quasi-polyadic algebras is not elementary. When we have diagonal elements, the latter result holds for quasi-polyadic equality algebras, but the former does not. On the other hand, the variety of polyadic algebras of dimension $2$ is conjugated (which is not the case when we drop diagonal elements), hence atomic representable algebras are completely representable. We introduce a certain cardinal that plays an essential role in omitting types theorems [@Tarek]. A Polish space is a complete separable metric space. For a Polish space $X$, $K(X)$ denotes the ideal of meager subsets of $X$. Set $$cov K(X)=min \{|C|: C\subseteq K(X),\ \cup C=X\}.$$ If $X$ is the real line, or the Baire space $^{\omega}\omega$, or the Cantor set $^{\omega}2$, which are the prime examples of Polish spaces, we write $K$ instead of $K(X)$. The above three spaces are sometimes referred to as [*real*]{} spaces, since they are all Baire isomorphic to the real line. Clearly $\omega <covK \leq {}2^{\aleph_0}.$ The cardinal $covK$ is an important cardinal studied extensively in descriptive set theory, and it turns out to be strongly related to the number of types that can be omitted in a consistent countable first order theory, a result of Newelski.
It is known, but not trivial to show, that $covK$ is the least cardinal $\kappa$ such that a real space can be covered by $\kappa$ many closed nowhere dense sets, that is, the least cardinal for which this form of the Baire category theorem fails. It is also the largest cardinal for which Martin’s axiom restricted to countable Boolean algebras holds. Indeed, the full Martin’s axiom implies that $covK=2^{\aleph_0}$, but it is also consistent that $covK=\omega_1<2^{\aleph_0}.$ It is known how to vary the value of $covK$ by (iterated) forcing. For example, $covK<2^{\aleph_0}$ is true in the random real model; it also holds in models constructed by forcings which do not add Cohen reals. Let $\A\in SA_n$ be countable and completely additive, and let $\kappa$ be a cardinal $<covK$. Assume that $(X_i: i<\kappa)$ is a family of non-principal types. Then there exists a countable locally square set $V$ and an injective homomorphism $f:\A\to \wp(V)$ such that $\bigcap_{x\in X_i} f(x)=\emptyset$ for each $i\in \kappa$. Let $a\in A$ be non-zero. For each $\tau\in {}^nn$ and each $i\in \kappa$, let $X_{i,\tau}=\{{\sf s}_{\tau}x: x\in X_i\}.$ Since the algebra is completely additive, $(\forall\tau\in {}^nn)(\forall i\in \kappa)\prod{}^{\A}X_{i,\tau}=0.$ Let $S$ be the Stone space of $\A$, and for $a\in A$, let $N_a$ be the clopen set consisting of all Boolean ultrafilters containing $a$. Let $\bold H_{i,\tau}=\bigcap_{x\in X_i} N_{{\sf s}_{\tau}x}.$ Each $\bold H_{i,\tau}$ is closed and nowhere dense in $S$. Let $\bold H=\bigcup_{i\in \kappa}\bigcup_{\tau\in {}^nn}\bold H_{i,\tau}.$ This is a union of fewer than $covK$ nowhere dense sets, and $S$ is Polish since $\A$ is countable; so, by the defining property of $covK$ applied within each non-empty clopen set, $S\sim \bold H$ is dense in $S$. Let $F$ be an ultrafilter that contains $a$ and lies outside $\bold H$; such an $F$ exists by density, since $S\sim \bold H$ meets the basic clopen set $N_a$.
For each such non-zero $a$, with $F=F_a$ the ultrafilter just chosen, let $h_a(z)=\{\tau \in {}^nn: {\sf s}_{\tau}z\in F_a\}$; then $h_a$ is a homomorphism into $\wp({}^nn)$ such that $h_a(a)\neq 0$. Define $g:\A\to \prod_{a\in A, a\neq 0}\wp({}^nn)$ via $x\mapsto (h_a(x): a\in A, a\neq 0)$. Let $V_{a}={}^nn\times \{a\}$. Since $\prod_{a\in A, a\neq 0}\wp({}^nn)\cong \wp(\bigcup_{a\in A, a\neq 0} V_a)$, we are done, for $g$ is clearly an injection. The statement of omitting $< {}^{\omega}2$ non-principal types is independent of $ZFC$. Martin’s axiom offers solace here, in two respects. The algebras addressed can be uncountable (though still satisfying a smallness condition that is a natural generalization of countability and is, in fact, necessary for Martin’s axiom to apply), and the types omitted can be $< {}^{\omega}2$. More precisely, in $ZFC+MA$ the following can be proved: Let $\A\in SA_n$ be completely additive, and further assume that $\A$ satisfies the countable chain condition (it contains no uncountable antichains). Let $\lambda<{}^{\omega}2$, and let $(X_i: i<\lambda)$ be a family of non-principal types. Then there exists a countable locally square set $V$ and an injective homomorphism $f:\A\to \wp(V)$ such that $\bigcap_{x\in X_i} f(x)=\emptyset$ for each $i\in \lambda$. The idea is exactly like the previous proof, except that now we have a union of $<{}^{\omega}2$ nowhere dense sets; the required ultrafilter used to build the representation lies outside this union. $MA$ offers solace here because it implies that such a union can be written as a countable union, and again the Baire category theorem readily applies. But without $MA$, if the given algebra is countable and completely additive, and if the types are maximal, then we can also omit $<{}^{\omega}2$ types. This [*is indeed*]{} provable in $ZFC$, without any additional assumptions at all. For brevity, when we have an omitting types theorem like the previous one, we say that the hypothesis implies the existence of a representation omitting the given set of non-principal types.
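The passage from an ultrafilter $F$ to a homomorphism $h_a(z)=\{\tau: {\sf s}_{\tau}z\in F\}$, used repeatedly above, can be illustrated concretely on the full finite set algebra $\wp({}^2U)$, where a principal ultrafilter generated by an atom plays the role of $F$. The Python sketch below is a finite illustration under our own coding of substitutions as maps $2\to 2$; it checks by brute force that the resulting map is a Boolean homomorphism and that the images of the atoms cover the whole of $^22$.

```python
from itertools import product, combinations

U = {0, 1}
unit = set(product(U, repeat=2))
taus = list(product(range(2), repeat=2))       # all maps 2 -> 2, coded as pairs

def subst(tau, Z):
    """S_tau(Z) = {s : s o tau in Z}."""
    return {s for s in unit if (s[tau[0]], s[tau[1]]) in Z}

# The atom `base` generates the principal ultrafilter F = {b : base in b}.
base = (0, 1)

def h(z):
    """h(z) = {tau : S_tau(z) in F} = {tau : base in S_tau(z)}."""
    return {tau for tau in taus if base in subst(tau, z)}

def subsets(xs):
    xs = list(xs)
    return (set(c) for r in range(len(xs) + 1) for c in combinations(xs, r))

for b in subsets(unit):
    assert h(unit - b) == set(taus) - h(b)          # h respects complement
    for c in subsets(unit):
        assert h(b | c) == h(b) | h(c)              # h respects join

# the images of the atoms cover the whole target square
assert set().union(*(h({s}) for s in unit)) == set(taus)
```

In the infinite setting the only genuinely non-constructive step is the choice of an ultrafilter outside the relevant nowhere dense sets; once it is fixed, the verification that $h_a$ is a homomorphism is the same finite computation carried out coordinatewise.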
Let $\A\in SA_n$ be countable, let $\lambda<{}^{\omega}2$, and let $\bold F=(F_i: i<\lambda)$ be a family of non-principal ultrafilters. Then there is a single representation that omits these non-principal ultrafilters. One can easily construct two representations that overlap only on principal ultrafilters [@h]. With a pinch of diagonalization this can be pushed to countably many. Even more, using ideas of Shelah [@Shelah], Theorem 5.16, this can be further pushed to $^{\omega}2$ many representations $\B_i$, indexed by a set $I$ with $|I|={}^{\omega}2$. Such representations are not necessarily pairwise distinct, but if $i, j\in I$ are distinct and there is an ultrafilter that is realized in both $\B_i$ and $\B_j$, then it is principal. (Note that $\B_i$ and $\B_j$ can be the same representation.) Now assume for contradiction that there is no representation omitting the given non-principal ultrafilters. Then for every $i\in I$ there exists $F\in \bold F$ such that $F$ is realized in $\B_i$. Let $\psi:I\to \wp(\bold F)$ be defined by $\psi(i)=\{F: F \text { is realized in }\B_i\}$. Then for all $i\in I$, $\psi(i)\neq \emptyset$. Furthermore, for $i\neq j$, $\psi(i)\cap \psi(j)=\emptyset$, for if $F\in \psi(i)\cap \psi(j)$ then it would be realized in $\B_i$ and $\B_j$, and so it would be principal. This implies that $|\bold F|={}^{\omega}2$, which is impossible, since $|\bold F|\leq\lambda<{}^{\omega}2$. In the case of omitting the single special type $\{-x: x\in \At\A\}$ for an atomic algebra, no conditions whatsoever on cardinalities are presupposed. If $\A\in SA_n$ is completely additive and atomic, then $\A$ is completely representable. Let $\B$ be an atomic transposition algebra, let $X$ be the set of atoms, and let $c\in \B$ be non-zero. Let $S$ be the Stone space of $\B$, whose underlying set consists of all Boolean ultrafilters of $\B$. Let $X^*$ be the set of principal ultrafilters of $\B$ (those generated by the atoms).
These are isolated points in the Stone topology, and they form a dense set since $\B$ is atomic. So we have $X^*\cap T=\emptyset$ for every nowhere dense set $T$ (principal ultrafilters, being isolated points, lie outside nowhere dense sets). Recall that for $a\in \B$, $N_a$ denotes the set of all Boolean ultrafilters containing $a$. Now for all $\tau\in S_n$, the set $G_{X, \tau}=S\sim \bigcup_{x\in X}N_{s_{\tau}x}$ is nowhere dense. Let $F$ be a principal ultrafilter containing $c$. This is possible since $\B$ is atomic: there is an atom $x$ below $c$, and one just takes the ultrafilter generated by $x$. Also, $F$ lies outside the $G_{X,\tau}$’s for all $\tau\in S_n.$ Define, as we did before, $f_c$ by $f_c(b)=\{\tau\in S_n: s_{\tau}b\in F\}$. Then clearly for every $\tau\in S_n$ there exists an atom $x$ such that $\tau\in f_c(x)$, so that $S_n=\bigcup_{x\in X} f_c(x)$. For each non-zero $a\in \B$ let $V_a=S_n$, and let $V$ be the disjoint union of the $V_a$’s. Then $\prod_{a\neq 0} \wp(V_a)\cong \wp(V)$; call this isomorphism $g$. Define $f:\B\to \wp(V)$ by $f(x)=g[(f_a(x): a\neq 0)]$. Then $f: \B\to \wp(V)$ is an embedding such that $\bigcup_{x\in X}f(x)=V$. Hence $f$ is a complete representation. Another proof, inspired by modal logic, is taken from Hirsch and Hodkinson [@atomic], with the slight difference that we assume complete additivity rather than conjugation. Let $\A\in SA_n$ be completely additive and atomic, so the first order correspondents of the equations are valid in its atom structure $\At\A$. But $\At\A$ is a bounded image of a disjoint union of square frames $\F_i$; that is, there exists a bounded morphism from $\At\A$ to $\bigcup_{i\in I}\F_i$, where each $\Cm\F_i$ is a full set algebra. Dually, the inverse of this bounded morphism gives an embedding from $\A$ into $\prod_{i\in I}\Cm\F_i$ that preserves all meets. The previous theorem tells us how to capture (in first order logic) complete representability.
We just spell out first order axioms forcing complete additivity of the substitutions corresponding to replacements. The other substitutions, corresponding to transpositions, are easily seen to be completely additive anyway; in fact, they are self-conjugate. \[elementary\] The class $CSA_n$ is elementary and is finitely axiomatizable in first order logic. We assume $n>1$; the other cases degenerate to the Boolean case. Let $\At(x)$ be the first order formula expressing that $x$ is an atom, that is, $\At(x)$ is the formula $x\neq 0\land (\forall y)(y\leq x\to y=0\lor y=x)$. For distinct $i,j<n$ let $\psi_{i,j}$ be the formula: $y\neq 0\to \exists x(\At(x)\land y. s_i^jx\neq 0).$ Let $\Sigma$ be obtained from $\Sigma_n$ by adding $\psi_{i,j}$ for every distinct $i,j\in n$. We show that $CSA_n={\bf Mod}(\Sigma)$. Let $\A\in CSA_n$ and let $X=\At\A$. Then, by theorem \[converse\], we have $\sum_{x\in X} s_i^jx=1$ for all $i,j\in n$. Let $i,j\in n$ be distinct. Let $a$ be non-zero; then $a.\sum_{x \in X}s_i^jx=a\neq 0$, hence there exists $x\in X$ such that $a.s_i^jx\neq 0$, and so $\A\models \psi_{i,j}$. Conversely, let $\A\models \Sigma$ and let $X=\At\A$. Then for all distinct $i,j\in n$, $\sum_{x\in X} s_i^jx=1$. Indeed, assume that for some distinct $i,j\in n$, $\sum_{x\in X}s_i^jx\neq 1$. Let $a=1-\sum_{x\in X} s_i^jx$. Then $a\neq 0$, so by $\psi_{i,j}$ there exists $x\in X$ such that $a.s_i^jx\neq 0$; but $s_i^jx\leq \sum_{x\in X}s_i^jx=-a$, so $a.s_i^jx=0$, which is impossible. But for distinct $i, j\in n$ we have $\sum_{x\in X}s_{[i,j]}x=1$ anyway, and so $\sum_{x\in X} s_{\tau}x=1$ for all $\tau\in {}^nn$, whence it readily follows that $\A\in CSA_n.$ Call a completely representable algebra [*square completely representable*]{} if it has a complete representation on a square. If $\A\in SA_n$ is completely representable, then $\A$ is square completely representable. Assume that $\A$ is completely representable. Then each $s_i^j$ is completely additive, for all $i,j\in n$.
For each $a\neq 0$, choose an ultrafilter $F_a$ containing $a$ and lying outside the nowhere dense sets $S\sim \bigcup_{x\in \At \A}N_{s_{\tau}x}$, $\tau\in {}^nn$, and define $h_a:\A\to \wp(^nn)$ the usual way, that is, $h_a(x)=\{\tau\in {}^nn: s_{\tau}x\in F_a\}.$ Let $V_a={}^nn\times \{a\}.$ Then $h:\A\to \prod_{a\in \A, a\neq 0}\wp(V_a)$ defined via $x\mapsto (h_a(x): a\in \A, a\neq 0)$ is a complete representation. But $\prod_{a\in \A, a\neq 0}\wp(V_a)\cong \wp(\bigcup_{a\in \A, a\neq 0} V_a)$, and the latter is square. A variety $V$ is called completion closed if whenever $\A\in V$ is completely additive, its minimal completion $\A^*$ (which exists) is in $V$. On completions, we have: If $\A\in SA_n$ is completely additive, then $\A$ has a strong completion $\A^*$. Furthermore, $\A^*\in SA_n.$ In other words, $SA_n$ is completion closed. The completion can be constructed because the algebra is completely additive. The second part follows from the fact that the stipulated positive equations axiomatizing $SA_n$ are preserved in completions [@v3]. We could add diagonal elements $d_{ij}$ to our signature and consider $SA_n$ enriched by diagonal elements; call this class $DSA_n$. In set algebras whose unit $V$ is a locally square set, the diagonal $d_{ij}$, $i,j \in n$, will be interpreted as $D_{ij}=\{s\in V: s_i=s_j\}$. All positive results, without a single exception, established for the diagonal-free case, i.e. for $SA_n$, extend to $DSA_n$, as the reader is invited to show. However, the counterexamples constructed above to violate complete representability of atomic algebras, and an omitting types theorem for the corresponding multi-dimensional modal logic, [*do not work*]{} in this new context, because such algebras do not contain the diagonal $D_{01}$, and this part is essential in the proof. We can show, though, that again the class of completely representable algebras is elementary.
We will return to this issue in the infinite dimensional case, where even more interesting results hold; for example, square representations and weak ones form an interesting dichotomy.

The Infinite Dimensional Case
-----------------------------

For $SA$’s, we can lift our results to infinite dimensions. We give what we believe is a reasonable generalization of the above theorems to the infinite dimensional case, by allowing weak sets as units, a weak set being a set of sequences that agree cofinitely with some fixed sequence. That is, a weak set is one of the form $\{s\in {}^{\alpha}U: |\{i\in \alpha: s_i\neq p_i\}|<\omega\}$, where $U$ is a set, $\alpha$ an ordinal, and $p\in {}^{\alpha}U$. This set will be denoted by $^{\alpha}U^{(p)}$. The set $U$ is called the base of the weak set. A set $V\subseteq {}^{\alpha}\alpha^{(Id)}$ is defined to be permutable just as in the finite dimensional case. Altering top elements to be weak sets, rather than squares, turns out to be fruitful and rewarding. We let $WSA_{\alpha}$ be the variety generated by $$\wp(V)=\langle\mathcal{P}(V),\cap,-,S_{ij}, S_i^j\rangle_{i,j\in\alpha},$$ where $V\subseteq {}^\alpha\alpha^{(Id)}$ is locally square. (Recall that $V$ is locally square if, whenever $s\in V$, then $s\circ [i|j]\in V$ and $s\circ [i,j]\in V$ for all $i,j\in \alpha$.) Let $\Sigma_{\alpha}$ be the set of finite schemas obtained from $\Sigma_n$, but now allowing indices from $\alpha$. Obviously $\Sigma_{\alpha}$ is infinite, but it has a finitary feature in a two-sorted sense: one sort for the ordinals $<\alpha$, the other for the first order situation. Indeed, the system $({\bf Mod}(\Sigma_{\alpha}): \alpha\geq \omega)$ is an instance of what is known in the literature as a system of varieties definable by a finite schema, which means that it is enough to specify a strictly finite subset of $\Sigma_{\omega}$ to define $\Sigma_{\alpha}$ for all $\alpha\geq \omega$.
(Strictly speaking, systems of varieties definable by schemes require that we have a unary operation behaving like a cylindrifier, but such a distinction is immaterial in the present context.) More precisely, let $L_{\kappa}$ be the language of $WSA_{\kappa}$. Let $\rho:\alpha\to \beta$ be an injection. One defines for a term $t$ in $L_{\alpha}$ a term $\rho(t)$ in $L_{\beta}$ by recursion as follows: for any variable $v_i$, $\rho(v_i)=v_i$, and for any unary operation $f$, $\rho(f(\tau))=f(\rho(\tau))$. Now let $e$ be a given equation in the language $L_{\alpha}$, say the equation $\sigma=\tau$. Then one defines $\rho(e)$ to be the equation $\rho(\sigma)=\rho(\tau)$. Then there exists a finite set $\Sigma\subseteq \Sigma_{\omega}$ such that $\Sigma_{\alpha}=\{\rho(e): \rho:\omega\to \alpha \text { is an injection},\ e\in \Sigma\}.$ Notice that $\Sigma_{\omega}=\bigcup_{n\geq 2} \Sigma_n.$ Let $SA_{\alpha}={\bf Mod}(\Sigma_{\alpha})$. We give two proofs of the next main representation theorem, but first a definition. Let $\alpha\leq\beta$ be ordinals and let $\rho:\alpha\rightarrow\beta$ be an injection. For any $\beta$-dimensional algebra $\B$ we define an $\alpha$-dimensional algebra $\Rd^\rho\B$, with the same base and Boolean structure as $\B$, where the $(i,j)$th transposition substitution of $\Rd^\rho\B$ is $s_{\rho(i)\rho(j)}$ of $\B$, and similarly for replacements. For a class $K$, $\Rd^{\rho}K=\{\Rd^{\rho}\A: \A\in K\}$. When $\alpha\subseteq \beta$ and $\rho$ is the identity map on $\alpha$, we write $\Rd_{\alpha}\B$ for $\Rd^{\rho}\B$. Our first proof is more general than the present context; it is basically a lifting argument that can be used to transfer results in the finite dimensional case to infinite dimensions. \[r\] For any infinite ordinal $\alpha$, $SA_{\alpha}=WSA_{\alpha}.$ [First proof]{} First, for $\A\models \Sigma_{\alpha}$, $n\in \omega$, and an injection $\rho:n\to \alpha$, we have $\Rd^{\rho}\A\in SA_n$.
For any $n\geq 2$ and $\rho:n\to \alpha$ as above, $SA_n\subseteq\mathbf{S}\Rd^{\rho}WSA_{\alpha}$, as in [@HMT2], theorem 3.1.121. $WSA_{\alpha}$ is, being a variety, closed under ultraproducts. Now we show that if $\A\models \Sigma_{\alpha}$, then $\A$ is representable. First, for any injection $\rho:n\to \alpha$, $\Rd^{\rho}\A\in SA_n$, hence it is in $\mathbf{S}\Rd^{\rho}WSA_{\alpha}$. Let $I$ be the set of all finite one-to-one sequences with range in $\alpha$. For $\rho\in I$, let $M_{\rho}=\{\sigma\in I:\rho\subseteq \sigma\}$. Let $U$ be an ultrafilter of $I$ such that $M_{\rho}\in U$ for every $\rho\in I$; such an ultrafilter exists, since $M_{\rho}\cap M_{\sigma}=M_{\rho\cup \sigma}.$ Then for each $\rho\in I$ there is $\B_{\rho}\in WSA_{\alpha}$ such that $\Rd^{\rho}\A\subseteq \Rd^{\rho}\B_{\rho}$. Let $\C=\prod\B_{\rho}/U$; it is in $\mathbf{Up}WSA_{\alpha}=WSA_{\alpha}$. Define $f:\A\to \prod\B_{\rho}$ by $f(a)_{\rho}=a$, and finally define $g:\A\to \C$ by $g(a)=f(a)/U$. Then $g$ is an embedding. The **second proof** follows from the next lemma, whose proof is identical to the finite dimensional case with obvious modifications. Here, for $\xi\in {}^\alpha\alpha^{(Id)},$ the operator $S_\xi$ works as $S_{\xi\upharpoonright J}$ (which can be defined as in the finite dimensional case because we have finite support), where $J=\{i\in\alpha:\xi(i)\neq i\}$ (in case $J$ is empty, i.e., $\xi=Id_\alpha,$ $S_\xi$ is the identity operator). \[f\] Let $\A$ be an $SA_\alpha$-type $BAO$ and $G\subseteq{}^\alpha\alpha^{(Id)}$ permutable.
Let $\langle\mathcal{F}_\xi:\xi\in G\rangle$ be a system of ultrafilters of $\A$ such that for all $\xi\in G,\;i\neq j\in \alpha$ and $a\in\A$ the following condition holds:$$S_{ij}^\A(a)\in\mathcal{F}_\xi\Leftrightarrow a\in \mathcal{F}_{\xi\circ[i,j]}\quad\quad (*).$$ Then the following function $h:\A\longrightarrow\wp(G)$ is a homomorphism $$h(a)=\{\xi\in G: a\in \mathcal{F}_\xi\}.$$ We let $WSA_{\alpha}$ be the variety generated by the algebras $$\wp(V)=\langle\mathcal{P}(V),\cap,\sim,S^i_j,S_{ij}\rangle_{i,j\in\alpha},$$ where $V\subseteq{}^\alpha\alpha^{(Id)}$ is locally square. Let $\Sigma_{\alpha}'$ be the set of finite schemas obtained from the $\Sigma_n$ but now allowing indices from $\alpha$, and let $SA_{\alpha}={\bf Mod}(\Sigma_{\alpha}')$. Then, as before, we can prove completeness and interpolation (for the corresponding multi dimensional modal logic): \[1\] Let $\alpha$ be an infinite ordinal. Then, we have: $WSA_{\alpha}=SA_{\alpha}$ $SA_{\alpha}$ has the superamalgamation property The proof is like before, undergoing the obvious modifications. In particular, from the first item it readily follows that if $\A\subseteq \wp(^{\alpha}U)$ and $a\in A$ is non-zero, then there exists a homomorphism $f:\A\to \wp(V)$ for some permutable $V$ such that $f(a)\neq 0$. We shall prove a somewhat deep converse of this result, which will later enable us to verify that the quasi-variety of subdirect products of full set algebras is a variety. But first a definition and a result on the number of non-isomorphic models. Let $\A$ and $\B$ be set algebras with bases $U$ and $W$ respectively. Then $\A$ and $\B$ are *base isomorphic* if there exists a bijection $f:U\to W$ such that $\bar{f}:\A\to \B$, defined by ${\bar f}(X)=\{y\in {}^{\alpha}W: f^{-1}\circ y\in X\}$, is an isomorphism from $\A$ to $\B$. An algebra $\A$ is *hereditary atomic* if each of its subalgebras is atomic.
Finite Boolean algebras are of course hereditary atomic, but there are infinite hereditary atomic Boolean algebras; for example, any Boolean algebra generated by its atoms is necessarily hereditary atomic, like the finite-cofinite Boolean algebra. An algebra that is infinite and complete, like that in our example violating complete representability, is not hereditary atomic, whether atomic or not. Hereditary atomic algebras arise naturally as the Tarski-Lindenbaum algebras of certain countable first order theories, which abound. If $T$ is a countable complete first order theory which has an $\omega$-saturated model, then for each $n\in \omega$, the Tarski-Lindenbaum Boolean algebra $\Fm_n/T$ is hereditary atomic. Here $\Fm_n$ is the set of formulas using only $n$ variables. For example, $Th(\Q,<)$ is such a theory, with $(\Q,<)$ an $\omega$-saturated model. A well known model-theoretic result is that $T$ has an $\omega$-saturated model iff $T$ has countably many $n$-types for all $n$. Algebraically, $n$-types are just ultrafilters in $\Fm_n/T$. And indeed, what characterizes hereditary atomic algebras is that the base of their Stone space, that is, the set of all ultrafilters, is at most countable. \[b\] Let $\B$ be a countable Boolean algebra. If $\B$ is hereditary atomic then the number of ultrafilters is at most countable; of course they are finite if $\B$ is finite. If $\B$ is not hereditary atomic then it has exactly $2^{\omega}$ ultrafilters. See [@HMT1] pp. 364-365 for a detailed discussion. A famous conjecture of Vaught says that the number of non-isomorphic countable models of a complete theory is either $\leq \omega$ or exactly $^{\omega}2$. We show that this is the case for the multi (infinite) dimensional modal logic corresponding to $SA_{\alpha}$. Morley's famous theorem excluded all possible cardinals in between except for $\omega_1$. \[2\] Let $\A\in SA_{\omega}$ be countable and simple.
Then the number of non base isomorphic representations of $\A$ is either $\leq \omega$ or $^{\omega}2$. Furthermore, if $\A$ is assumed completely additive, and $(X_i: i< covK)$ is a family of non-principal types, then the number of models omitting these types is the same. For the first part: if $\A$ is hereditary atomic, then the number of models is at most the number of ultrafilters, hence at most countable. Otherwise $\A$ is not hereditary atomic, and then it has $^{\omega}2$ ultrafilters. For an ultrafilter $F$, let $h_F(a)=\{\tau \in V: s_{\tau}a\in F\}$, $a\in \A$. Then $h_F\neq 0$; indeed $Id\in h_F(a)$ for any $a\in F$. Hence $h_F$ is an injection, by simplicity of $\A$. Now $h_F:\A\to \wp(V)$; all the $h_F$'s have the same target algebra. We claim that $h_F(\A)$ is base isomorphic to $h_G(\A)$ iff there exists a finite bijection $\sigma\in V$ such that $s_{\sigma}F=G$. We set out to confirm our claim. Let $\sigma:\alpha\to \alpha$ be a finite bijection such that $s_{\sigma}F=G$. Define $\Psi:h_F(\A)\to \wp(V)$ by $\Psi(X)=\{\tau\in V:\sigma^{-1}\circ \tau\in X\}$. Then, by definition, $\Psi$ is a base isomorphism. We show that $\Psi(h_F(a))=h_G(a)$ for all $a\in \A$. Let $a\in A$. Let $X=\{\tau\in V: s_{\tau}a\in F\}$. Let $Z=\Psi(X).$ Then $$\begin{split} &Z=\{\tau\in V: \sigma^{-1}\circ \tau\in X\}\\ &=\{\tau\in V: s_{\sigma^{-1}\circ \tau}(a)\in F\}\\ &=\{\tau\in V: s_{\tau}a\in s_{\sigma}F\}\\ &=\{\tau\in V: s_{\tau}a\in G\}\\ &=h_G(a).\\ \end{split}$$ Conversely, assume that $\bar{\sigma}$ establishes a base isomorphism between $h_F(\A)$ and $h_G(\A)$. Then $\bar{\sigma}\circ h_F=h_G$. We show that if $a\in F$, then $s_{\sigma}a\in G$. Let $a\in F$, and let $X=h_{F}(a)$. Then, we have $$\begin{split} &(\bar{\sigma}\circ h_{F})(a)=\bar{\sigma}(X)\\ &=\{y\in V: \sigma^{-1}\circ y\in X\}\\ &=\{y\in V: s_{\sigma^{-1}\circ y}a\in F\}\\ &=h_G(a).\\ \end{split}$$ Now we have $h_G(a)=\{y\in V: s_{y}a\in G\}.$ But $a\in F$.
Hence $\sigma^{-1}\in h_G(a)$, so $s_{\sigma^{-1}}a\in G$, and hence $a\in s_{\sigma}G$. Define the equivalence relation $\sim$ on the set of ultrafilters by $F\sim G$ if there exists a finite permutation $\sigma$ such that $F=s_{\sigma}G$. Then any equivalence class is countable (there are only countably many finite permutations of $\omega$), and so we have $^{\omega}2$ many orbits, which correspond to the non base isomorphic representations of $\A$. For the second part, suppose we want to count the number of representations omitting a family $\bold X=\{X_i:i<\lambda\}$ ($\lambda<covK$) of non-isolated types of $T$. We assume, without any loss of generality, that the dimension is $\omega$. Let $X$ be the Stone space of $\A$. Then $$\mathbb{H}=X\sim \bigcup_{i\in\lambda,\tau\in W}\bigcap_{a\in X_i}N_{s_\tau a}$$ (where $W=\{\tau\in{}^\omega\omega:|\{i\in \omega:\tau(i)\neq i\}|<\omega\}$) is clearly (by the above discussion) the space of ultrafilters corresponding to representations omitting $\bold X.$ Note that $\mathbb{H}$ is the intersection of two dense sets. But then, by properties of $covK$, the union $\bigcup_{i\in\lambda}$ can be reduced to a countable union. We then have that $\mathbb{H}$ is a $G_\delta$ subset of a Polish space, namely the Stone space $X$. So $\mathbb{H}$ is Polish and moreover, $\mathcal{E}'=\sim \cap (\mathbb{H}\times \mathbb{H})$ is a Borel equivalence relation on $\mathbb{H}.$ It follows then that the number of representations omitting $\bold X$ is either countable or else $^{\omega}2.$ The above theorem is not as deep as it might appear on first reading. The relatively simple proof is an instance of the obvious fact that if a countable Polish group acts on an uncountable Polish space, then the number of induced orbits has the cardinality of the continuum, because it factors out an uncountable set by a countable one.
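The cardinal computation behind the last remark can be spelled out as follows (a routine argument, recorded only for the reader's convenience): if a countable group $\Gamma$ acts on a set $Z$ with $|Z|=2^{\aleph_0}$, then every orbit has size at most $\aleph_0$, so

```latex
\[
2^{\aleph_0}=|Z|=\Big|\bigcup_{O\in Z/\Gamma}O\Big|
\;\leq\; |Z/\Gamma|\cdot\aleph_0
\;=\;\max(|Z/\Gamma|,\aleph_0),
\]
```

which forces $|Z/\Gamma|=2^{\aleph_0}$. In the proof above, $\Gamma$ is the countable group of finite permutations of $\omega$ and $Z$ is the set of ultrafilters under consideration.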
When the Polish group is uncountable, finding the number of orbits is still an open question, of which Vaught's conjecture is an instance (when the group is the symmetric group on $\omega$, acting on the Polish space of pairwise non-isomorphic models). We shall prove that weak set algebras are strongly isomorphic to set algebras in the sense of the following definition. This will enable us to show that $RSA_{\alpha}$, like the finite dimensional case, is also a variety. Let $\A$ and $\B$ be set algebras with units $V_0$ and $V_1$ and bases $U_0$ and $U_1,$ respectively, and let $F$ be an isomorphism from $\B$ to $\A$. Then $F$ is a *strong ext-isomorphism* if $F=(X\cap V_0: X\in B)$. In this case $F^{-1}$ is called a *strong subisomorphism*. An isomorphism $F$ from $\A$ to $\B$ is a *strong ext base isomorphism* if $F=g\circ h$ for some base isomorphism $h$ and some strong ext-isomorphism $g$. In this case $F^{-1}$ is called a [*strong sub base isomorphism.*]{} The following, this time deep, theorem uses ideas of Andréka and Németi, reported in [@HMT2], theorem 3.1.103, on how to square units of so-called weak cylindric set algebras (cylindric set algebras whose units are weak spaces): \[weak\] If $\B$ is a subalgebra of $ \wp(^{\alpha}\alpha^{(Id)})$ then there exists a set algebra $\C$ with unit $^{\alpha}U$ such that $\B\cong \C$. Furthermore, the isomorphism is a strong sub-base isomorphism. We square the unit using ultraproducts. We prove the theorem for $\alpha=\omega$. We warn the reader that the proof uses heavy machinery involving properties of ultraproducts of algebras consisting of infinitary relations. Let $F$ be a non-principal ultrafilter over $\omega$. (For $\alpha>\omega$, one takes an $\alpha^+$-regular ultrafilter on $\alpha$.) Then there exists a function $h: \omega\to \{\Gamma: \Gamma\subseteq_{\omega} \omega\}$ such that $\{i\in \omega: \kappa\in h(i)\}\in F$ for all $\kappa<\omega$. Let $M={}^{\omega}U/F$.
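For instance, for $\alpha=\omega$ one concrete choice of such a function $h$ (an illustrative choice; any function with the stated property will do) is:

```latex
\[
h(i)=\{0,1,\ldots,i\},\qquad\text{so that}\qquad
\{i\in\omega:\kappa\in h(i)\}=\{i\in\omega: i\geq\kappa\},
\]
```

and the latter set is cofinite, hence belongs to the non-principal ultrafilter $F$.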
$M$ will be the base of our desired algebra, that is, $\C$ will have unit $^{\omega}M.$ Define $\epsilon: U\to {}^{\omega}U/F$ by $$\epsilon(u)=\langle u: i\in \omega\rangle/F.$$ Then it is clear that $\epsilon$ is one to one. For $Y\subseteq {}^{\omega}U$, let $$\bar{\epsilon}(Y)=\{y\in {}^{\omega}(^{\omega}U/F): \epsilon^{-1}\circ y\in Y\}.$$ By an $(F, (U:i\in \omega), \omega)$ choice function we mean a function $c$ mapping $\omega\times {}^{\omega}U/F$ into $^{\omega}U$ such that for all $\kappa<\omega$ and all $y\in {}^{\omega}U/F$, we have $c(\kappa,y)\in y.$ Let $c$ be an $(F, (U:i\in \omega), \omega)$ choice function satisfying the following condition: for all $\kappa, i<\omega$ and all $y\in {}^{\omega}U/F$: if $\kappa\notin h(i)$ then $c(\kappa,y)_i=\kappa$; if $\kappa\in h(i)$ and $y=\epsilon u$ with $u\in U$, then $c(\kappa,y)_i=u$. Let $\delta: \B\to {}^{\omega}\B/F$ be the following monomorphism $$\delta(b)=\langle b: i\in \omega\rangle/F.$$ Let $t$ be the unique homomorphism mapping $^{\omega}\B/F$ into $\wp{}^{\omega}(^{\omega}U/F)$ such that for any $a\in {}^{\omega}B$ $$t(a/F)=\{q\in {}^{\omega}(^{\omega}U/F): \{i\in \omega: (c^+q)_i\in a_i\}\in F\}.$$ Here $(c^+q)_i=\langle c(\kappa,q_\kappa)_i: \kappa<\omega\rangle.$ It is easy to show that $t$ is well defined. Assume that $J=\{i\in \omega: a_i=b_i\}\in F$. If $\{i\in \omega: (c^+q)_i\in a_i\}\in F$, then $\{i\in \omega: (c^+q)_i\in b_i\}\in F$. The converse implication is the same, and we are done. Now we check that the map preserves the operations. That the Boolean operations are preserved is obvious. So let us check substitutions. It is enough to consider transpositions and replacements. Let $i,j\in \omega.$ Then $s_{[i,j]}g(a)=g(s_{[i,j]}a)$ follows from the simple observation that $(c^+q\circ [i,j])_k\in a$ iff $(c^+q)_k\in s_{[i,j]}a$. The case of replacements is the same; $(c^+q\circ [i|j])_k\in a$ iff $(c^+q)_k\in s_{[i|j]}a.$ Let $g=t\circ \delta$.
Then for $a\in B$, we have $$g(a)=\{q\in {}^{\omega}(^{\omega}U/F): \{i\in \omega: (c^+q)_i\in a\}\in F\}.$$ Let $\C=g(\B)$. Then $g:\B\to \C$. We show that $g$ is an isomorphism onto a set algebra. First, it is clear that $g$ is a monomorphism. Indeed if $a\neq 0$, then $g(a)\neq \emptyset$. Now $g$ maps $\B$ into an algebra with unit $g(V)$. Recall that $M={}^{\omega}U/F$. Evidently $g(V)\subseteq {}^{\omega}M$. We show the other inclusion. Let $q\in {}^{\omega}M$. It suffices to show that $(c^+q)_i\in V$ for all $i\in\omega$. So, let $i\in \omega$. Note that $(c^+q)_i\in {}^{\omega}U$. If $\kappa\notin h(i)$ then we have $$(c^+q)_i\kappa=c(\kappa, q\kappa)_i=\kappa.$$ Since $h(i)$ is finite, the conclusion follows. We now prove that for $a\in B$ $$(*) \ \ \ g(a)\cap \bar{\epsilon}V=\{\epsilon\circ s: s\in a\}.$$ Let $\tau\in V$. Then there is a finite $\Gamma\subseteq \omega$ such that $$\tau\upharpoonright (\omega\sim \Gamma)= Id\upharpoonright (\omega\sim \Gamma).$$ Let $Z=\{i\in \omega: \Gamma\subseteq h(i)\}$. By the choice of $h$ we have $Z\in F$. Let $\kappa<\omega$ and $i\in Z$. We show that $c(\kappa,\epsilon\tau \kappa)_i=\tau \kappa$. If $\kappa\in \Gamma,$ then $\kappa\in h(i)$ and so $c(\kappa,\epsilon \tau \kappa)_i=\tau \kappa$. If $\kappa\notin \Gamma,$ then $\tau \kappa=\kappa$ and $c(\kappa,\epsilon \tau \kappa)_i=\tau\kappa.$ We now prove $(*)$. Let us suppose that $q\in g(a)\cap {\bar{\epsilon}}V$. Since $q\in \bar{\epsilon}V$ there is an $s\in V$ such that $q=\epsilon\circ s$. Choose $Z\in F$ such that $$c(\kappa, \epsilon(s\kappa))\supseteq\langle s\kappa: i\in Z\rangle$$ for all $\kappa<\omega$. This is possible by the above. Let $H=\{i\in \omega: (c^+q)_i\in a\}$. Then $H\in F$. Since $H\cap Z$ is in $F$ we can choose $i\in H\cap Z$. Then we have $$s=\langle s\kappa: \kappa<\omega\rangle= \langle c(\kappa, \epsilon(s\kappa))_i:\kappa<\omega\rangle= \langle c(\kappa,q\kappa)_i:\kappa<\omega\rangle=(c^+q)_i\in a.$$ Thus $q\in \{\epsilon\circ s: s\in a\}$.
Now suppose that $q=\epsilon\circ s$ with $s\in a$. Since $a\subseteq V$ we have $q\in \bar{\epsilon}V$. Again let $Z\in F$ be such that for all $\kappa<\omega$ $$c(\kappa, \epsilon s \kappa)\supseteq \langle s\kappa: i\in Z\rangle.$$ Then $(c^+q)_i=s\in a$ for all $i\in Z.$ So $q\in g(a).$ Note that $\bar{\epsilon}V\subseteq {}^{\omega}(^{\omega}U/F)$. Let $rl_{\bar{\epsilon}V}^{\C}$ be the function with domain $\C$ (onto $\bar{\epsilon}(\B)$) such that $$rl_{\bar{\epsilon}V}^{\C}Y=Y\cap \bar{\epsilon}V.$$ Then we have proved that $$\bar{\epsilon}=rl_{\bar{\epsilon}V}^{\C}\circ g.$$ It follows that $g$ is a strong sub-base-isomorphism of $\B$ onto $\C$. Like the finite dimensional case, we get: \[v2\] $\mathbf{SP}\{ \wp(^{\alpha}U): \text {$U$ a set }\}$ is a variety. Let $\A\in SA_\alpha$. Then for $a\neq 0$ there exists a weak set algebra $\B$ and $f:\A\to \B$ such that $f(a)\neq 0$. By the previous theorem there is a set algebra $\C$ such that $\B\cong \C$, via $g$ say. Then $g\circ f(a)\neq 0$, and we are done. We then readily obtain: \[3\] Let $\alpha$ be infinite. Then $$WSA_{\alpha}= {\bf Mod}(\Sigma_{\alpha})={\bf HSP}\{{}\wp(^{\alpha}\alpha^{(Id)})\}={\bf SP}\{\wp(^{\alpha}U): U \text { a set }\}.$$ Here we show that the class of subdirect products of Pinter's algebras is not a variety; this is not proved by Sági. \[notvariety\] For infinite ordinals $\alpha$, $RPA_{\alpha}$ is not a variety. Assume to the contrary that $RPA_{\alpha}$ is a variety and that $RPA_{\alpha}={\bf Mod}(\Sigma_{\alpha})$ for some (countable) schema $\Sigma_{\alpha}.$ Fix $n\geq 2.$ We show that for any set $U$ and any ideal $I$ of $\A=\wp(^nU)$, we have $\A/I\in RPA_n$, which is not possible, since we know that there are set algebras relativized to permutable sets that are not in $RPA_n$. Define $f:\A\to \wp(^{\alpha}U)$ by $f(X)=\{s\in {}^{\alpha}U: s\upharpoonright n\in X\}$.
Then $f$ is an embedding of $\A$ into $\Rd_n(\wp({}^{\alpha}U))$, so that we can assume that $\A\subseteq \Rd_n\B$, for some $\B\in RPA_{\alpha}.$ Let $I$ be an ideal of $\A$, and let $J=\Ig^{\B}I$. Then we claim that $J\cap \A=I$. One inclusion is trivial; we need to show $J\cap \A\subseteq I$. Let $y\in A\cap J$. Then $y\in \Ig^{\B}I$, and so there is a term $\tau$ and $x_1,\ldots, x_n\in I$ such that $y\leq \tau(x_1,\ldots, x_n)$. But $\tau(x_1,\ldots, x_n)\in I$ and $y\in A$, hence $y\in I$, since ideals are closed downwards. It follows that $\A/I$ embeds into $\Rd_n(\B/J)$ via $x/I\mapsto x/J$. The map is well defined since $I\subseteq J$, and it is one to one, because if $x,y\in A$ are such that $x\delta y\in J$, then $x\delta y\in I$, where $\delta$ denotes symmetric difference. We have $\B/J\models \Sigma_{\alpha}$. For $\beta$ an ordinal, let $K_{\beta}$ denote the class of all full set algebras of dimension $\beta$. Then ${\bf SP}\Rd_nK_{\alpha}\subseteq {\bf SP}K_n$. It is enough to show that $\Rd_nK_{\alpha}\subseteq {\bf SP}K_n$, and for that it suffices to show that if $\A\subseteq \Rd_n(\wp({}^{\alpha}U))$, then $\A$ is embeddable in $\wp(^nW)$, for some set $W$. Let $\B=\wp({}^{\alpha}U)$. Just take $W=U$ and define $g:\B\to \wp(^nU)$ by $g(X)=\{f\upharpoonright n: f\in X\}$. Then $g\upharpoonright \A$ is the desired embedding. Now let $\B'=\B/J$; then $\B'\in {\bf SP}K_{\alpha}$, so $\Rd_n\B'\in \Rd_n{\bf SP}K_{\alpha}={\bf SP}\Rd_nK_{\alpha}\subseteq {\bf SP}K_n$. Hence $\A/I\in RPA_n$. But this cannot happen for all $\A\in K_n$, and we are done. Next we approach the issue of representations preserving infinitary joins and meets. But first a lemma. Let $\A\in SA_{\alpha}$. If $X\subseteq \A$ is such that $\sum X=0$ and there exists a representation $f:\A\to \wp(V)$ such that $\bigcap_{x\in X}f(x)=\emptyset$, then for all $\tau\in {}^{\alpha}{\alpha}^{(Id)}$, $\sum_{x\in X} s_{\tau}x=0$.
In particular, if $\A$ is completely representable, then for every $\tau\in {}^{\alpha}\alpha^{(Id)}$, $s_{\tau}$ is completely additive. Like the finite dimensional case. \[counterinfinite\] For any $\alpha\geq \omega$, there is an $\A\in SA_{\alpha}$ and $S\subseteq \A$ such that $\sum S$ is not preserved by $s_0^1$. In particular, the omitting types theorem fails for our multi-modal logic. The second part follows from the previous lemma. Now we prove the first part. Let $\B$ be the Stone representation of some atomless Boolean algebra, with unit $U$ in the Stone representation. Let $$R=\{\times_{i\in \alpha} X_i: X_i\in \B \text { and $X_i=U$ for all but finitely many $i$} \}$$ and $$A=\{\bigcup S: S\subseteq R,\ |S|<\omega\}$$ $$S=\{X\times \sim X\times \times_{i\geq 2} U: X\in B\}.$$ Then one proceeds exactly like the finite dimensional case, theorem \[counter\], showing that the sum $\sum S$ is not preserved under $s_0^1$. Like the finite dimensional case, adapting the counterexample to infinite dimensions, we have: \[counterinfinite2\] There is an atomic $\A\in SA_{\alpha}$ such that $\A$ is not completely representable. First it is clear that if $V$ is any weak space, then $\wp(V)\models \Sigma_{\alpha}$. Let $(Q_n: n\in \omega)$ be a sequence of $\alpha$-ary relations such that $(Q_n: n\in \omega)$ is a partition of $$V={}^{\alpha}\alpha^{(\bold 0)}=\{s\in {}^{\alpha}\alpha: |\{i: s_i\neq 0\}|<\omega\}.$$ Take $Q_0=\{s\in V: s_0=s_1\}$, and for each $n\in \omega\sim \{0\}$, take $Q_n=\{s\in {}^{\alpha}\omega^{({\bold 0})}: s_0\neq s_1, \sum s_i=n\}.$ (Note that this is a finite sum.) Clearly for $n\neq m$, we have $Q_n\cap Q_m=\emptyset$, and $\bigcup Q_n=V.$ Furthermore, each $Q_n$ is symmetric, that is, $S_{[i,j]}Q_n=Q_n$ for all $i,j\in \alpha$. Now fix $F$, a non-principal ultrafilter on $\mathbb{Z}^+$.
For each $X\subseteq \mathbb{Z}^+$, define $$R_X = \begin{cases} \bigcup \{Q_n: n\in X\} & \text { if }X\notin F, \\ \bigcup \{Q_n: n\in X\cup \{0\}\} & \text { if } X\in F \end{cases}$$ Let $$\A=\{R_X: X\subseteq \mathbb{Z}^+\}.$$ Then $\A$ is an atomic set algebra, and its atoms are $R_{\{n\}}=Q_n$ for $n\in \mathbb{Z}^+$. (Since $F$ is non-principal, $\{n\}\notin F$ for every $n$.) Then one proceeds exactly as in the finite dimensional case, theorem \[counter2\]. Let $CRSA_{\alpha}$ be the class of completely representable algebras of dimension $\alpha$. Then we have: For $\alpha\geq \omega$, $CRSA_{\alpha}$ is elementary; indeed, it is axiomatized by a finite schema. Let $\At(x)$ be the formula $x\neq 0\land (\forall y)(y\leq x\to y=0\lor y=x)$. For distinct $i,j<\alpha$ let $\psi_{i,j}$ be the formula: $y\neq 0\to \exists x(\At(x)\land s_i^jx\neq 0\land s_i^jx\leq y).$ Let $\Sigma$ be obtained from $\Sigma_{\alpha}$ by adding $\psi_{i,j}$ for every distinct $i,j\in \alpha$. These axioms force additivity of the operations $s_i^j$ for every $i,j\in \alpha$. The rest is like the finite dimensional case. The following theorem can be easily distilled from the literature. $SA_{\alpha}$ is a Sahlqvist variety, hence it is canonical, $\Str SA_{\alpha}$ is elementary and ${\bf S}\Cm(\Str SA_{\alpha})=SA_{\alpha}.$ We know that if $\A$ is representable on a weak unit, then it is representable on a square one. But for complete representability this is not at all clear, because the isomorphism defined in \[weak\] might not preserve arbitrary joins. For canonical extensions, we guarantee complete representations. Let $\A\in SA_{\alpha}$. Then $\A^+$ is completely representable on a weak unit. Let $S$ be the Stone space of $\A$, and for $a\in \A$, let $N_a$ denote the clopen set consisting of all ultrafilters containing $a$. The idea is that the operations are completely additive in the canonical extension.
Indeed, for $\tau\in {}^{\alpha}\alpha^{(Id)}$, we have $$s_{\tau}\sum X=s_{\tau}\bigcup X=\bigcup s_{\tau}X=\sum s_{\tau}X.$$ (Indeed this is true for any full complex algebra of an atom structure, and $\A^+=\Cm\Uf \A$.) In particular, since $\sum \At\A=1$, because $\A$ is atomic, we have $\sum s_{\tau}\At\A=1$, for each $\tau$. Then we proceed as in the finite dimensional case for transposition algebras. Given any such $\tau$, let $G(\At \A, \tau)$ be the following nowhere dense subset of the Stone space of $\A$: $$G(\At \A, \tau)=S\sim \bigcup_{x\in \At\A} N_{s_{\tau}x}.$$ Now given non-zero $a$, let $F$ be a principal ultrafilter generated by an atom below $a$. Then $F\notin \bigcup_{\tau\in {}^{\alpha}\alpha^{(Id)}} G(\At\A, \tau)$, and the map $h$ defined via $x\mapsto \{\tau\in {}^{\alpha}\alpha^{(Id)}: s_{\tau}x\in F\}$, as can easily be checked, establishes the complete representation. We do not know whether canonical extensions are completely representable on square units. For any finite $\beta$, $\Fr_{\beta}SA_{\alpha}$ is infinite. Furthermore, if $\beta$ is infinite, then $\Fr_{\beta}SA_{\alpha}$ is atomless. In particular, $SA_{\alpha}$ is not locally finite. For the first part, we consider the case when $\beta=1$. Assume that $b$ is the free generator. First we show that for any finite permutation $\tau$ that is not the identity, $s_{\tau}b\neq b$. Let such a $\tau$ be given. Let $\A=\wp(^{\alpha}U)$, and let $X\in \A$ be such that $s_{\tau}X\neq X.$ Such an $X$ obviously exists. Assume for contradiction that $s_{\tau}b=b$. Let $\B=\Sg^{\A}\{X\}$. Then, by freeness, there exists a surjective homomorphism $f:\Fr_{\beta}SA_{\alpha}\to \B$ such that $f(b)=X$. Hence $$s_{\tau}X=s_{\tau}f(b)=f(s_{\tau}b)=f(b)=X,$$ which is impossible. We have proved our claim. Now consider the following subset of $\Fr_{\beta}SA_{\alpha}$: $S=\{s_{[i,j]}b: i,j\in \alpha\}$.
Then for $i,j,k, l\in \alpha$ with $\{i,j\}\neq \{k,l\}$, we have $s_{[i,j]}b\neq s_{[k,l]}b$, for otherwise we would get $s_{[i,j]}s_{[k,l]}b=s_{\sigma}b=b$ with $\sigma=[i,j]\circ [k,l]\neq Id$, contradicting the claim just proved. It follows that $S$ is infinite, and so is $\Fr_{\beta}SA_{\alpha}.$ The proof for $\beta>1$ is the same. For the second part, let $X$ be the infinite generating set. Let $a\in A$ be non-zero. Then there is a finite set $Y\subseteq X$ such that $a\in \Sg^{\A} Y$. Let $y\in X\sim Y$. Then by freeness, there exist homomorphisms $f:\A\to \A$ and $h:\A\to \A$ such that $f(\mu)=h(\mu)=\mu$ for all $\mu\in Y$, while $f(y)=1$ and $h(y)=0$. Then $f(a)=h(a)=a$. Hence $f(a.y)=h(a.-y)=a\neq 0$, and so $a.y\neq 0$ and $a.-y\neq 0$. Thus $a$ cannot be an atom. For Pinter's algebras the second part applies equally well. For the first part one takes distinct $i,j,k,l$ such that $\{i,j\}\cap \{k,l\}=\emptyset$ and a relation $X$ in the full set algebra such that $s_i^jX\neq s_k^lX$; then the set $\{s_i^jb: i,j\in \alpha\}$ will be infinite as well. Adding Diagonals ================ We now show that adding equality to our infinite dimensional modal logic, algebraically reflected by adding diagonals, does not affect the positive representability results obtained for $SA_{\alpha}$ so far. Also, in this context, atomicity does not imply complete representability. However, we lose elementarity of the class of square completely representable algebras, which is an interesting twist. We start by defining the concrete algebras; then we provide the finite schema axiomatization. The class of *Representable Diagonal Set Algebras* is defined to be $$RDSA_{\alpha}=\mathbf{SP}\{\langle\mathcal{P}(^{\alpha}U); \cap,\sim,S^i_j,S_{ij}, D_{ij}\rangle_{i,j\in\alpha}: U\text{ \emph{is a set}}\}$$ where $S_j^i$ and $S_{ij}$ are as before and $D_{ij}=\{q\in {}^{\alpha}U: q_i=q_j\}$. We show that $RDSA_{\alpha}$ is a variety that can be axiomatized by a finite schema.
Let $L_{\alpha}$ be the language of $SA_{\alpha}$ enriched by constants $\{d_{ij}: i,j\in \alpha\}.$ Let $\Sigma'^d_{\alpha}$ be the axiomatization in $L_{\alpha}$ obtained by adding to $\Sigma'_{\alpha}$ the following equations, for all $i,j,k<\alpha$: 1. $d_{ii}=1$ 2. $d_{ij}=d_{ji}$ 3. $d_{ik}.d_{kj}\leq d_{ij}$ 4. $s_{\tau}d_{ij}=d_{\tau(i), \tau(j)}$, $\tau\in \{[i,j], [i|j]\}$. \[infinite\] For any infinite ordinal $\alpha$, we have ${\bf Mod}(\Sigma'^d_{\alpha})=RDSA_{\alpha}$. Let $\A\in\mathbf{Mod}(\Sigma'^d_{\alpha})$ and let $0^\A\neq a\in A$. We construct a homomorphism $h:\A\longrightarrow\wp (^{\alpha}\alpha^{(Id)})$ such that $h(a)\neq 0$. Like before, choose an ultrafilter $\mathcal{F}\subset A$ containing $a$. Let $h:\A\longrightarrow \wp(^{\alpha}\alpha^{(Id)})$ be the following function: $h(z)=\{\xi\in {}^{\alpha}\alpha^{(Id)}:S_{\xi}^\A(z)\in\mathcal{F}\}.$ The function $h$ respects substitutions but it may not respect the newly added diagonal elements. To ensure that it does, we factor out $\alpha$, the base of the set algebra, by a congruence relation. Define the following equivalence relation $\sim$ on $\alpha$: $i\sim j$ iff $d_{ij}\in \mathcal{F}$. Using the axioms for diagonals, $\sim$ is indeed an equivalence relation: reflexivity follows from $d_{ii}=1$, symmetry from $d_{ij}=d_{ji}$, and transitivity from $d_{ik}.d_{kj}\leq d_{ij}$ together with the fact that $\mathcal{F}$ is a filter. Let $V={}^{\alpha}\alpha^{(Id)},$ and $M=V/\sim$. For $h\in V$ we write $h=\bar{\tau}$ if $h(i)=\tau(i)/\sim$ for all $i\in \alpha$. Of course $\tau$ may not be unique. Now define $f(z)=\{\bar{\xi}\in M: S_{\xi}^{\A}(z)\in \mathcal{F}\}$. We first check that $f$ is well defined. We use extensively the property $(s_{\tau}\circ s_{\sigma})x=s_{\tau\circ \sigma}x$ for all $\tau,\sigma\in {}^{\alpha}\alpha^{(Id)}$, a property that can be inferred from our axiomatization. We show that $f$ is well defined by induction on the cardinality of $$J=\{i\in \alpha: \sigma (i)\neq \tau (i)\}.$$ Of course $J$ is finite. If $J$ is empty, the result is obvious. Otherwise assume that $k\in J$. We introduce a piece of notation.
For $\eta\in V$ and $k,l<\alpha$, write $\eta(k\mapsto l)$ for the $\eta'\in V$ that is the same as $\eta$ except that $\eta'(k)=l.$ Now take any $$\lambda\in \{\eta\in \alpha: \sigma^{-1}\{\eta\}= \tau^{-1}\{\eta\}=\{\eta\}\}.$$ We have (a) $$s_{\sigma}x=s_{\sigma k}^{\lambda}s_{\sigma (k\mapsto \lambda)}x.$$ Also we have (b) $$s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}. s_{\sigma} x) =d_{\tau k, \sigma k}. s_{\sigma} x,$$ and (c) $$s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{\sigma(k\mapsto \lambda)}x) = d_{\tau k, \sigma k}.s_{\sigma(k\mapsto \tau k)}x,$$ and (d) $$d_{\lambda, \sigma k}.s_{\sigma k}^{\lambda}s_{{\sigma}(k\mapsto \lambda)}x= d_{\lambda, \sigma k}.s_{{\sigma}(k\mapsto \lambda)}x.$$ Then by (b), (a), (d) and (c), we get $$d_{\tau k, \sigma k}.s_{\sigma} x= s_{\tau k}^{\lambda}(d_{\lambda,\sigma k}.s_{\sigma}x)$$ $$=s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{\sigma k}^{\lambda} s_{{\sigma}(k\mapsto \lambda)}x)$$ $$=s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{{\sigma}(k\mapsto \lambda)}x)$$ $$= d_{\tau k, \sigma k}.s_{\sigma(k\mapsto \tau k)}x.$$ The conclusion follows from the induction hypothesis. Clearly $f$ respects diagonal elements. Now using exactly the technique in theorem \[weak\] (one can easily check that the isomorphism defined there respects diagonal elements), we can square the weak unit, obtaining the desired result. All positive representation theorems, \[1\], \[2\], \[weak\], \[v2\], proved for the diagonal free case, hold here. But the negative ones do not, because our counterexamples [*do not*]{} contain diagonal elements. Is it the case that, for $\alpha\geq \omega$, $RDSA_{\alpha}$ is conjugated, hence completely additive? If the answer is affirmative, then we would get all positive results formulated for transposition algebras, given in the second subsection of the next section (for infinite dimensions).
In the finite dimensional case, we could capture [*square*]{} complete representability by stipulating that all the operations are completely additive. However, when we have one single diagonal element, this does not suffice. Indeed, using a simple cardinality argument of Hirsch and Hodkinson, which fits perfectly here, we get the following slightly surprising result: \[hh\] For $\alpha\geq \omega$, the class of square completely representable algebras is not elementary. In particular, there is an algebra that is completely representable, but not square completely representable. [@Hirsh]. Let $\C\in RDSA_{\alpha}$ be such that $\C\models d_{01}<1$. Such algebras exist; for example one can take $\C$ to be $\wp(^{\alpha}2).$ Assume that $f: \C\to \wp(^{\alpha}X)$ is a square complete representation. Since $\C\models d_{01}<1$, there is $s\in f(-d_{01})$, so that if $x=s_0$ and $y=s_1$, we have $x\neq y$. For any $S\subseteq \alpha$ such that $0\in S$, set $a_S$ to be the sequence whose $i$th coordinate is $x$ if $i\in S$, and $y$ if $i\in \alpha\sim S$. By complete representability every $a_S$ is in $f(1)$ and so in $f(\mu)$ for some unique atom $\mu$. Let $S, S'\subseteq \alpha$ be distinct and assume each contains $0$. Then there exists $i<\alpha$ such that $i\in S$ and $i\notin S'$, so $a_S$ and $a_{S'}$ lie in complementary elements of the representation (the substitution instance of $d_{01}$ comparing the $0$th and $i$th coordinates, and its complement). Therefore atoms corresponding to different $a_S$'s are distinct. Hence the number of atoms is equal to the number of subsets of $\alpha$ that contain $0$, so it is at least $^{|\alpha|}2$. Now using the downward Löwenheim-Skolem-Tarski theorem, take an elementary substructure $\B$ of $\C$ with $|\B|\leq |\alpha|.$ Then in $\B$ we have $\B\models d_{01}<1$. But $\B$ has at most $|\alpha|$ atoms, and so $\B$ cannot be [*square*]{} completely representable (though it is completely representable on a weak unit). Axiomatizing the quasi-varieties =================================== We start this section by proving a somewhat general result.
It is a non-trivial generalisation of Sági's result providing an axiomatization for the quasivariety of full replacement algebras [@sagiphd]; the latter result is obtained by taking $T$ to be the semigroup of all non-bijective maps on $n$. Throughout, $T$ denotes a submonoid of $^nn$. Let $$G=\{\xi\in S_n: \xi\circ \sigma\in T,\text{ for all }\sigma\in T\}.$$ Let $RT_n$ be the class of subdirect products of full set algebras, in the similarity type of $T$, and let $\Sigma_n$ be the axiomatization of the variety generated by $RT_n$, obtained from any presentation of $T$. (We know that every finite monoid has a presentation.) We now give an axiomatization of the quasivariety $RT_n$, which may not be a variety. For all $n\in\omega,$ $n\geq 2$, the set of quasiequations $\Sigma^q_n$ is defined to be $$\Sigma^q_n=\Sigma_n\cup\{\bigwedge_{\sigma\in T}s_{\sigma}(x_{\sigma})=0\Rightarrow \bigwedge_{\sigma\in T} s_{i\circ \sigma}(x_{\sigma})=0: i \in G\}.$$ Let $\A$ be an $RT_n$ like algebra. Let $\xi\in {}^nn$ and let $F$ be an ultrafilter over $\A.$ Then $F_\xi$ denotes the following subset of $A.$ $$F_\xi = \begin{cases} \{t\in A:(\forall \sigma\in T)(\exists a_{\sigma}\in A)\, S_{\xi\circ\sigma}(a_{\sigma})\in F\\\mbox{ and } t\geq\bigwedge_{\sigma\in T}S_{\sigma} (a_{\sigma})\} & \text{ if }\xi\in G\\ \{a\in A: S^\A_\xi(a)\in F\} & \text{otherwise } \end{cases}$$ The proof of the following theorem is the same as Sági's corresponding proof for Pinter's algebras [@sagiphd], modulo the obvious replacements, and therefore it will be omitted. Let $\A\in {\bf Mod}(\Sigma^q_n).$ Let $\xi\in {}^nn$ and let $F$ be an ultrafilter over $\A.$ Then, $F_\xi$ is a proper filter over $\A$. Let $\A\in {\bf Mod}(\Sigma^q_n)$ and let $F$ be an ultrafilter over $\A.$ Then, $F_{Id}\subseteq F$. Let $\A\in {\bf Mod}(\Sigma^q_n)$ and let $F$ be an ultrafilter over $\A.$ For every $\xi\in {}^nn$ let us choose an ultrafilter $F^*_\xi$ containing $F_\xi$ such that $F^*_{Id}=F$.
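To illustrate the definition of $G$ at the start of this section in the two cases of interest (a straightforward verification, recorded for the reader's convenience):

```latex
% If T is the semigroup of all non-bijective maps on n (Sagi's case),
% then for any bijection \xi \in S_n and any non-bijective \sigma \in T,
% the composite \xi \circ \sigma is again non-bijective, so
%     G = S_n.
% Likewise, if T = S_n (the transposition case treated below), then
% \xi \circ \sigma is a bijection whenever \xi and \sigma are, so again
%     G = S_n.
```

Thus in both specializations every permutation of $n$ belongs to $G$, and the extra quasiequations in $\Sigma^q_n$ range over all of $S_n$.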
Then the following condition holds for this system of ultrafilters: $$(\forall \xi\in {}^nn)(\forall \sigma\in T)(\forall a\in A)(s_{\sigma}^\A(a)\in F^*_\xi\Leftrightarrow a\in F^*_{\xi\circ\sigma})$$ For finite $n$, we have ${\bf Mod}(\Sigma^q_n)=RT_n.$ Soundness is immediate [@sagiphd]. Now we prove completeness. Let $\eta\in {}^nn$, and let $a\in A$ be arbitrary. If $\eta=Id$ or $\eta\notin G$, then $F_{\eta}$ is the inverse image of $F$ with respect to $s_{\eta}$, and so we are done. Else $\eta\in G$, and so for all $\sigma\in T$, $F_{\eta\circ \sigma}$ is an ultrafilter. Let $a_{\sigma}=a$ and $a_f=1$ for $f\in T$ with $f\neq \sigma$. Now $a_{\tau}\in F$ for all $\tau\in T$. Hence $s_{\eta\circ \tau}(a_{\tau})\in F$; but $s_{\eta}(a)\geq \prod s_{\tau}(a_{\tau})$, and we are done. Now, with the availability of $F_{\eta}$ for every $\eta\in {}^nn,$ we can represent our algebra on square units by $f(x)=\{\tau\in {}^nn: x\in F_{\tau}\}.$ Transpositions only -------------------- Consider the case when $T=S_n$, so that we have substitutions corresponding to transpositions. This turns out to be an interesting case with a plethora of positive results, the sole exception being that the class of subdirect products of set algebras is [*not*]{} a variety; it is only a quasi-variety. We can proceed exactly as before, obtaining a finite equational axiomatization for the variety generated by full set algebras by translating a presentation of $S_n$. In this case all the operations (corresponding to transpositions) are self-conjugate (because a transposition is its own inverse), so that our variety, call it $V$, is conjugated, hence completely additive. Now, the following can be proved exactly as before, with the obvious modifications. $V$ is finitely axiomatizable. $V$ is locally finite. $V$ has the superamalgamation property. $V$ is canonical and atom-canonical, hence $\At V=\Str V$ is elementary, and finitely axiomatizable.
$V$ is closed under canonical extensions and completions. $\L_V$ enjoys an omitting types theorem. Atomic algebras are completely representable. Here we give a different proof (inspired by the duality theory of modal logic) that $V$ has the superamalgamation property. The proof works verbatim for any submonoid of $^nn$; indeed, so does the other proof, implemented for $SA_{\alpha}$. Recall that a frame of type $TA_n$ is a first order structure $\F=(V, S_{ij})_{i,j\in \alpha}$ where $V$ is an arbitrary set and $S_{ij}$ is a binary relation on $V$ for all $i, j\in \alpha$. Given a frame $\F$, its complex algebra will be denoted by $\F^+$; $\F^+$ is the algebra $(\wp(\F), s_{ij})_{i,j}$ where for $X\subseteq V$, $s_{ij}(X)=\{s\in V: \exists t\in X, (t, s)\in S_{ij} \}$. For $K\subseteq TA_n,$ we let $\Str K=\{\F: \F^+\in K\}.$ For a variety $V$, it is always the case that $\Str V\subseteq \At V$, and equality holds if the variety is atom-canonical. If $V$ is canonical, then $\Str V$ generates $V$ in the strong sense, that is, $V= {\bf S}\Cm \Str V$. For Sahlqvist varieties, as is our case, $\Str V$ is elementary. Given a family $(\F_i)_{i\in I}$ of frames, a [*zigzag product*]{} of these frames is a substructure $S$ of $\prod_{i\in I}\F_i$ such that the projection maps restricted to $S$ are onto. Let $\F, \G, \H$ be frames, and let $f:\G\to \F$ and $h:\H\to \F$. Set $INSEP=\{(x,y)\in \G\times \H: f(x)=h(y)\}$. The frame $INSEP \upharpoonright \G\times \H$ is a zigzag product of $\G$ and $\H$, such that $f\circ \pi_0=h\circ \pi_1$, where $\pi_0$ and $\pi_1$ are the projection maps. [@Marx] 5.2.4 For $h:\A\to \B$, $h_+$ denotes the function from $\Uf\B$ to $\Uf\A$ defined by $h_+(u)=h^{-1}[u]$, where the latter is $\{x\in A: h(x)\in u\}.$ For an algebra $\C$, Marx denotes its ultrafilter frame $\Uf\C$ by $\C_+,$ and proves: ([@Marx] Lemma 5.2.6) Assume that $K$ is a canonical variety and $\Str K$ is closed under finite zigzag products.
Then $K$ has the superamalgamation property. [Sketch of proof]{} Let $\A, \B, \C\in K$ and let $f:\A\to \B$ and $h:\A\to \C$ be given monomorphisms. Then $f_+:\B_+\to \A_+$ and $h_+:\C_+\to \A_+$. We have that $INSEP=\{(x,y): f_+(x)=h_+(y)\}$ is a zigzag connection. Let $\F$ be the zigzag product $INSEP\upharpoonright \B_+\times \C_+$. Then $\F^+$ is a superamalgam. The variety $TA_n$ has $SUPAP$. Since $TA_n$ can be defined by positive equations, it is canonical. The first order correspondents of the positive equations, translated to the class of frames, are Horn formulas, hence clausifiable ([@Marx] Theorem 5.3.5), and so $\Str K$ is closed under finite zigzag products. Marx's theorem finishes the proof. The following example is joint work with Mohammed Assem (personal communication). \[not\] For $n\geq 2$, $RTA_n$ is not a variety. Let us denote by $\sigma$ the quasi-equation $$s_f(x)=-x\longrightarrow 0=1,$$ where $f$ is any permutation. We claim that for all $k\leq n,$ $\sigma$ holds in the small algebra $\A_{nk}$ (or, more generally, in any set algebra with square unit). This can be seen using a constant map in $^nk.$ More precisely, let $q\in {}^nk$ be an arbitrary constant map and let $X$ be any subset of $^nk.$ There are two cases for $q$: either $q\in X$ or $q\in -X$. In either case, noticing that $q\in X\Leftrightarrow q\in S_f(X),$ it cannot be the case that $S_f(X)=-X.$ Thus the implication $\sigma$ holds in $\A_{nk}.$ It follows that $RTA_n\models\sigma$ (because the operators $\mathbf{S}$ and $\mathbf{P}$ preserve quasi-equations). Now we show that there is some element $\B\in PTA_n$ and a specific permutation $f$ such that $\B\nvDash\sigma.$ Let $G\subseteq {}^n2$ be the following permutable set $$G=\{s\in {}^n2:|\{i:s(i)=0\}|=1\}.$$ Let $\B=\wp(G)$; then $\wp(G)\in PTA_n.$ Let $f$ be the permutation defined as follows. For $n=2,3,$ $f$ is simply the transposition $[0,1]$.
For larger $n$: $$f = \begin{cases} [0,1]\circ[2,3]\circ\ldots\circ[n-2,n-1] & \text{if $n$ is even}, \\ [0,1]\circ[2,3]\circ\ldots\circ[n-3,n-2] & \text{if $n$ is odd.} \end{cases}$$ Notice that $f$ is the composition of disjoint transpositions. Let $X$ be the following subset of $G$: $$X=\{e_i:i\mbox{ is odd, }i<n\},$$ where $e_i$ denotes the map that sends every element to $1$ except that the $i$th element is mapped to $0$. It is easy to see that, for all odd $i<n,$ $e_i\circ f=e_{i-1}.$ This clearly implies that $$S_f^\B(X)=-X=\{e_i:i\mbox{ is even, }i<n\}.$$ Since $0^\B\neq 1^\B,$ $X$ falsifies $\sigma$ in $\B.$ Since $\B\in {\bf H}\{{\wp(^nn)}\}$, we are done. Let $Sir(K)$ denote the class of subdirectly indecomposable algebras in $K$. The variety $PTA_n$ is not a discriminator variety. If it were, there would be a discriminator term on $Sir(RTA_n)$, forcing $RTA_n$ to be a variety, which is not the case. All of the above positive results extend to the infinite dimensional case, by using units of the form $V=\{t\in {}^{\alpha}\alpha^{(Id)}: t\upharpoonright {\sf sup}\, t\text { is a bijection }\},$ where ${\sf sup}\, t=\{i\in \alpha: t(i)\neq i\}$, and defining $s_{\tau}$ for $\tau\in V$ in the obvious way. This is well defined, because the schema axiomatizing the variety generated by the square set algebras is obtained by lifting the finite axiomatization for $n\geq 5$ (resulting from a presentation of $S_n$) to allow indices ranging over $\alpha$. Also, $RTA_{\alpha}$ will [*not*]{} be a variety, using the previous example together with the same lifting argument implemented for Pinter's algebras; and finally, $TA_{\alpha}$ is not locally finite. Decidability ============ The decidability of the studied $n$ dimensional multimodal logic can be proved easily by filtration (since the corresponding varieties are locally finite, such logics are finitely based), or can be inferred from the decidability of the word problem for finite semigroups.
But much more can be proved. In fact we shall prove a much stronger result, concerning $NP$ completeness. The $NP$ completeness of our multi dimensional modal logics (for all three cases: Pinter's algebras, transposition algebras, and substitution algebras) is proved by the so-called [*selection method*]{}, which gives a (polynomial) bound on the size of a model satisfying a given formula in terms of the formula's length. This follows from the simple observation that the accessibility relations are not only partial functions, but are actually total functions, so the selection method works. This is, for example, [*not*]{} the case for accessibility relations corresponding to cylindrifiers; indeed, cylindric modal logic of dimension $>2$ is highly [*undecidable*]{}, a result of Maddux. We should also mention that the equational theories of the variety and quasi-varieties (in case of non-closure under homomorphic images) are also decidable. This is proved exactly as in [@sagiphd], so we omit the proof. (Basically, the idea is to reduce the problem to decidability in the finite dimensional case using finite reducts.) Our proof of $NP$ completeness is fairly standard. We prepare with some well-known definitions [@modal]. Let $\L$ be a normal modal logic, $\M$ a family of finitely based models (based on a $\tau$-frame of finite character). $\L$ *has the polysize model property* with respect to $\M$ if there is a polynomial function $f$ such that any consistent formula $\phi$ is satisfiable in a model in $\M$ containing at most $f(|\phi|)$ states. Let $\tau$ be a finite similarity type. Let $\L$ be a consistent normal modal logic over $\tau$ with the polysize model property with respect to some class of models $\M$. If the problem of deciding whether $M\in\M$ is computable in time polynomial in $|M|$, then $\L$ has an NP-complete satisfiability problem. See Lemma 6.35 in [@modal].
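The selection method used in the proof below can be made concrete with a small sketch. This is purely illustrative: the tuple encoding of formulas and the names `select` and `size` are our own inventions, not part of the text. Formulas are nested tuples, substitutions are total functions on states, and the selected submodel satisfies the polysize bound.

```python
# Toy sketch of the "selection method" (illustrative encoding, not from the text).
# Formula syntax: ('p', name), ('not', phi), ('and', phi, psi), ('sub', tau, phi),
# where tau is a total function on states (a substitution accessibility function).

def select(phi, w):
    """Return the set of states needed to evaluate phi at state w."""
    op = phi[0]
    if op == 'p':
        return {w}
    if op == 'not':
        return select(phi[1], w)
    if op == 'and':
        return select(phi[1], w) | select(phi[2], w)
    if op == 'sub':                        # s_tau psi: follow the (total) function
        tau = phi[1]
        return {w} | select(phi[2], tau(w))
    raise ValueError(op)

def size(phi):
    """Crude formula length: number of atoms and connectives."""
    op = phi[0]
    if op == 'p':
        return 1
    if op == 'not':
        return 1 + size(phi[1])
    if op == 'and':
        return 1 + size(phi[1]) + size(phi[2])
    return 1 + size(phi[2])               # 'sub'

# Example: states are pairs over {0, 1}; swap acts like the transposition [0, 1].
swap = lambda s: (s[1], s[0])
phi = ('and', ('sub', swap, ('p', 'x')), ('not', ('p', 'y')))
needed = select(phi, (0, 1))
assert len(needed) <= size(phi) + 1       # the polysize bound
```

Because every accessibility relation is a total function, each modality contributes at most one new state, which is exactly why the bound $|\phi|+1$ comes out.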
If $\F$ is a class of frames definable by a first order sentence, then the problem of deciding whether $F$ belongs to $\F$ is decidable in time polynomial in the size of $F$. See Lemma 6.36 in [@modal]. The same theorem can be stated for models based on elements of $\F$. More precisely, replace $\F$ by $\M$ (the class of models based on members of $\F$), and $F$ by $M.$ This is because models are roughly frames with valuations. We prove our theorem for any submonoid $T\subseteq {}^nn$. $V_T$ has an NP-complete satisfiability problem. By the two theorems above, it remains to show that $V_T$ has the polysize model property. We use the [*selection method.*]{} Suppose $M$ is a model. We define a selection function as follows (intuitively, it selects the states needed when evaluating a formula in $M$ at $w$): $$s(p,w)=\{w\}$$ $$s(\neg\psi,w)=s(\psi,w)$$ $$s(\theta\wedge\psi,w)=s(\theta,w)\cup s(\psi,w)$$ $$s(s_\tau\psi,w)=\{w\}\cup s(\psi,\tau(w)).$$ It follows by induction on the complexity of $\phi$ that for all nodes $w$, $$M,w\Vdash\phi\mbox{ iff }M\upharpoonright s(\phi,w),w\Vdash\phi.$$ The new model $M\upharpoonright s(\phi,w)$ has size $|s(\phi,w)|\leq 1+ \mbox{ the number of modalities in }\phi$. This is less than or equal to $|\phi|+1,$ and we are done. Andréka, H., [*Complexity of equations valid in algebras of relations*]{}. Annals of Pure and Applied Logic, [**89**]{} (1997), p. 149-209. Andréka, H., Givant, S., Mikulás, S., Németi, I., Simon, A., [*Notions of density that imply representability in algebraic logic.*]{} Annals of Pure and Applied Logic, [**91**]{} (1998), p. 93-190. Andréka, H., Németi, I., [*Reducing first order logic to $Df_3$ free algebras.*]{} In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, Hajnal; Ferenczi, Miklós; Németi, István (Eds.) (2013). Blackburn, P., de Rijke, M., Venema, Y., [*Modal Logic*]{}. Cambridge Tracts in Theoretical Computer Science, third printing, 2008. Burris, S., Sankappanavar, H. P., [*A Course in Universal Algebra*]{}. Graduate Texts in Mathematics, Springer-Verlag, New York, 1981. Ferenczi, M., [*The polyadic generalization of the Boolean axiomatization of fields of sets*]{}. Trans. Amer. Math. Soc. 364 (2012), p. 867-886. Givant, S., Venema, Y., [*The preservation of Sahlqvist equations in completions of Boolean algebras with operators*]{}. Algebra Universalis, 41 (1999), p. 47-48. Hirsch, R., Hodkinson, I., [*Step-by-step building representations in algebraic logic*]{}. Journal of Symbolic Logic, 62(1) (1997), p. 225-279. Hirsch, R., Hodkinson, I., [*Complete representations in algebraic logic*]{}. Journal of Symbolic Logic, 62(3) (1997), p. 816-847. Hirsch, R., Hodkinson, I., [*Completions and complete representations*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, Hajnal; Ferenczi, Miklós; Németi, István (Eds.) (2013). Henkin, L., Monk, J. D., Tarski, A., [*Cylindric Algebras Part I*]{}. North Holland, 1971. Henkin, L., Monk, J. D., Tarski, A., [*Cylindric Algebras Part II*]{}. North Holland, 1985. Hodges, W., [*Model Theory*]{}. Cambridge, Encyclopedia of Mathematics. Hodkinson, I., [*A construction of cylindric algebras and polyadic algebras from atomic relation algebras*]{}. Algebra Universalis, 68 (2012), p. 257-285. Maksimova, L., [*Amalgamation and interpolation in normal modal logics*]{}. Studia Logica, [**50**]{} (1991), p. 457-471. Marx, M., [*Algebraic relativization and arrow logic*]{}. ILLC Dissertation Series 1995-3, University of Amsterdam, 1995. Kurucz, A., [*Representable cylindric algebras and many-dimensional modal logic*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, Hajnal; Ferenczi, Miklós; Németi, István (Eds.) (2013). Ganyushkin, O., Mazorchuk, V., [*Classical Finite Transformation Semigroups: An Introduction*]{}. Springer, 2009. Sági, G., [*A note on algebras of substitutions*]{}. Studia Logica, 72(2) (2002), p. 265-284. Sági, G., [*Polyadic algebras*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, Hajnal; Ferenczi, Miklós; Németi, István (Eds.) (2013). Shelah, S., [*Classification Theory and the Number of Non-Isomorphic Models*]{}. North Holland, Studies in Logic and the Foundations of Mathematics, 1978. Sayed Ahmed, T., [*Complete representations, completions and omitting types*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, Hajnal; Ferenczi, Miklós; Németi, István (Eds.) (2013). Sayed Ahmed, T., [*Classes of algebras without the amalgamation property*]{}. Logic Journal of IGPL, 19(2) (2011). Sayed Ahmed, T., Khaled, M., [*On complete representations in algebras of logic*]{}. Logic Journal of IGPL, [**17**]{}(3) (2009), p. 267-272. Venema, Y., [*Cylindric modal logic*]{}. Journal of Symbolic Logic, 60(2) (1995), p. 112-198. Venema, Y., [*Cylindric modal logic*]{}. In Cylindric-like Algebras and Algebraic Logic, Bolyai Society Mathematical Studies, Vol. 22, Andréka, Hajnal; Ferenczi, Miklós; Németi, István (Eds.) (2013). Venema, Y., [*Atom structures and Sahlqvist equations*]{}. Algebra Universalis, 38 (1997), p. 185-199. [^1]: Mathematics Subject Classification: 03G15; 06E25. Key words: multimodal logic, substitution algebras, interpolation. [^2]: One way to show that varieties of representable algebras, like cylindric algebras, are not closed under completions is to construct an atom structure $\F$ such that $\Cm\F$ is not representable, while $\Tm\F$, the subalgebra of $\Cm\F$ generated by the atoms, is representable. Such an algebra cannot be completely representable, because a complete representation induces a representation of the full complex algebra.
--- abstract: | Background : Measurements of $\beta$ decay provide important nuclear structure information that can be used to probe isospin asymmetries and inform nuclear astrophysics studies. Purpose : To measure the $\beta$-delayed $\gamma$ decay of $^{26}$P and compare the results with previous experimental results and shell-model calculations. Method : A $^{26}$P fast beam produced using nuclear fragmentation was implanted into a planar germanium detector. Its $\beta$-delayed $\gamma$-ray emission was measured with an array of 16 high-purity germanium detectors. Positrons emitted in the decay were detected in coincidence to reduce the background. Results : The absolute intensities of $^{26}$P $\beta$-delayed $\gamma$-rays were determined. A total of six new $\beta$-decay branches and 15 new $\gamma$-ray lines have been observed for the first time in $^{26}$P $\beta$-decay. A complete $\beta$-decay scheme was built for the allowed transitions to bound excited states of $^{26}$Si. $ft$ values and Gamow-Teller strengths were also determined for these transitions and compared with shell model calculations and the mirror $\beta$-decay of $^{26}$Na, revealing significant mirror asymmetries. Conclusions : A very good agreement with theoretical predictions based on the USDB shell model is observed. The significant mirror asymmetry observed for the transition to the first excited state ($\delta=51(10)\%$) may be evidence for a proton halo in $^{26}$P. author: - 'D. Pérez-Loureiro' - 'C. Wrede' - 'M. B. Bennett' - 'S. N. Liddick' - 'A. Bowe' - 'B. A. Brown' - 'A. A. Chen' - 'K. A. Chipps' - 'N. Cooper' - 'D. Irvine' - 'E. McNeice' - 'F. Montes' - 'F. Naqvi' - 'R. Ortez' - 'S. D. Pain' - 'J. Pereira' - 'C. J. Prokop' - 'J. Quaglia' - 'S. J. Quinn' - 'J. Sakstrup' - 'M. Santia' - 'S. B. Schwartz' - 'S. Shanab' - 'A. Simon' - 'A. Spyrou' - 'E. 
Thiagalingam' title: '${\bm \beta}$-delayed $\gamma$ decay of $\bm{^{26}\mathrm{P}}$: Possible evidence of a proton halo' --- Introduction\[sec:intro\] ========================= The detailed study of unstable nuclei has been a major subject in nuclear physics in recent decades. $\beta$-decay measurements provide not only important information on the structure of the daughter and parent nuclei, but can also be used to inform nuclear astrophysics studies and to probe fundamental subatomic symmetries [@Hardy2015]. The link between experimental results and theory is given by the reduced transition probabilities, $ft$. Experimental $ft$ values involve three measured quantities: the half-life, $t_{1/2}$; the $Q$ value of the transition, which determines the statistical phase-space factor $f$; and the branching ratio associated with the transition, $BR$. In the standard $\mathcal{V\!\!-\!\!A}$ description of $\beta$ decay, $ft$ values are related to the fundamental constants of the weak interaction and to the nuclear matrix elements through the equation $$ft=\frac{\mathcal{K}}{g_V^2|\langle f|\tau|i\rangle|^2+g_A^2|\langle f|\sigma\tau|i\rangle|^2} , \label{eq:theo_ft}$$ where $\mathcal{K}$ is a constant and $g_{V(A)}$ are the vector (axial) coupling constants of the weak interaction; $\sigma$ and $\tau$ are the spin and isospin operators, respectively. Thus, a comparison of the experimental $ft$ values with the theoretical ones obtained from the calculated matrix elements is a good test of the nuclear wave functions given by model calculations. However, to reproduce the $ft$ values measured experimentally, the axial-vector coupling constant $g_A$ involved in Gamow-Teller transitions has to be renormalized [@Wilkinson1973; @WILKINSON1973_2].
The effective coupling constant $g'_A=q\times g_A$ is deduced empirically from experimental results and depends on the mass of the nucleus: the quenching factor is $q=0.820(15)$ in the $p$ shell [@Chou1993], $q=0.77(2)$ in the $sd$ shell [@Wildenthal1983], and $q=0.744(15)$ in the $pf$ shell [@Martinez1996]. Despite several theoretical approaches attempting to reveal the origin of the quenching factor, it is still not fully understood [@Brown2005]. Another phenomenon which shows the limitations of our theoretical models is the so-called *$\beta$-decay mirror asymmetry*. If we assume that the nuclear interaction is independent of isospin, the theoretical description of $\beta$ decay is identical for the decay of a proton ($\beta^+$) or a neutron ($\beta^-$) inside a nucleus. Therefore, the $ft$ values corresponding to analog transitions should be identical. Any potential asymmetries are quantified by the asymmetry parameter $\delta=ft^+/ft^--1$, where $ft^\pm$ refers to the $\beta^\pm$ decays of the mirror nuclei. The average value of this parameter is $(4.8\pm0.4)\%$ for $p$- and $sd$-shell nuclei [@Thomas2004]. From a theoretical point of view, the mirror asymmetry can have two origins: (a) the possible existence of exotic *second-class currents* [@Wilkinson1970447; @PhysRevLett.38.321; @WilkinsonEPJ], which are not allowed within the framework of the standard $\mathcal{V\!\!-\!\!A}$ model of the weak interaction, and (b) the breaking of isospin symmetry between the initial or final nuclear states. Shell-model calculations were performed to test the isospin non-conserving part of the interaction in $\beta$ decay [@Smirnova2003441]. The main nuclear-structure contribution to the mirror asymmetry was found to be the difference in the matrix elements of the Gamow-Teller operator ($|\langle f|\sigma\tau|i\rangle|^2$), because of isospin mixing and/or differences in the radial wave functions.
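The asymmetry parameter and its uncertainty follow directly from a pair of mirror $ft$ values. The short sketch below is purely illustrative: the $ft$ values are invented for the example, and the error is propagated assuming the two measurements are independent.

```python
# Illustrative computation of the mirror-asymmetry parameter
# delta = ft(+)/ft(-) - 1 with simple error propagation.
# The ft values below are made up for the example.
from math import sqrt

def mirror_asymmetry(ft_plus, dft_plus, ft_minus, dft_minus):
    """Return (delta, ddelta) for a pair of mirror ft values."""
    delta = ft_plus / ft_minus - 1.0
    # relative uncertainties add in quadrature for the ratio
    ddelta = (ft_plus / ft_minus) * sqrt((dft_plus / ft_plus) ** 2
                                         + (dft_minus / ft_minus) ** 2)
    return delta, ddelta

# Hypothetical mirror pair: ft+ = 5200(150) s, ft- = 5000(100) s
delta, ddelta = mirror_asymmetry(5200.0, 150.0, 5000.0, 100.0)
# delta ≈ 0.04, i.e. a 4% asymmetry
```

A result of a few percent would be typical of the $sd$-shell average quoted above, while the halo candidates discussed next show asymmetries an order of magnitude larger.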
Large mirror asymmetries have been reported for transitions involving halo states [@Tanihata2013]. For example, the asymmetry parameter for the $A=17$ mirror decays $^{17}$Ne$\rightarrow^{17}$F and $^{17}$N$\rightarrow^{17}$O to the first excited states of the respective daughters was measured to be $\delta=(-55\pm9)\%$ and $\delta=(-60\pm1)\%$ in two independent experiments [@Borge1993; @Ozawa1998]. This result was interpreted as evidence for a proton halo in the first excited state of $^{17}$F, assuming that the fraction of the $2s_{1/2}$ component of the valence nucleons remains the same in $^{17}$Ne and $^{17}$N. However, a different interpretation was also given in terms of charge-dependent effects which increase the $2s_{1/2}$ fraction in $^{17}$Ne by about 50% [@PhysRevC.55.R1633]. The latter result is also consistent with the high cross section obtained in the fragmentation of $^{17}$Ne [@Ozawa199418; @Ozawa199663], suggesting the existence of a halo in $^{17}$Ne. More recently, Kanungo *et al.* reported the possibility of a two-proton halo in $^{17}$Ne [@Kanungo200321]. An extremely large mirror asymmetry was also observed in the mirror decays of the $A=9$ isobars $^{9}$Li$\rightarrow^{9}$Be and $^{9}$C$\rightarrow^{9}$B. A value of $\delta=(340\pm70)\%$ was reported for the $^{9}$Li and $^{9}$C $\beta$-decay transitions to the 11.8 and 12.2 MeV levels of their respective daughters, which is the largest ever measured [@Bergmann2001427; @Prezado2003]. Despite the low experimental interaction cross sections measured with various targets in attempts to establish the halo nature of $^{9}$C [@Ozawa199663; @Blank1997242], recent results at intermediate energies [@Nishimura2006], together with the anomalous magnetic moment [@Matsuta1995c153] and theoretical predictions [@0256-307X-27-9-092101; @PhysRevC.52.3013; @Gupta2002], make $^{9}$C a proton halo candidate. The potential relationship between large mirror asymmetries and halos is therefore clear.
Precision measurements of mirror asymmetries in states involved in strong, isolated $\beta$-decay transitions might provide a technique to probe halo nuclei that is complementary to total interaction cross section and momentum distribution measurements in knockout reactions [@Tanihata2013]. Moreover, the $\beta$ decay of proton-rich nuclei can be used for nuclear astrophysics studies. The large $Q_\beta$ values of these nuclei not only allow the population of the bound excited states of the daughter, but also open particle emission channels. Some of these levels correspond to astrophysically significant resonances which cannot be measured directly because of limited radioactive beam intensities. For example, the $^{25}\mathrm{Al}(p,\gamma)^{26}\mathrm{Si}$ reaction [@Wrede_2009] plays an important role in the abundance of the cosmic $\gamma$-ray emitter $^{26}\mathrm{Al}$. The effect of this reaction is to reduce the amount of ground-state $^{26}\mathrm{Al}$, which is bypassed by the sequence $^{25}\mathrm{Al}(p,\gamma)^{26}\mathrm{Si}(\beta\nu)^{26m}\mathrm{Al}$, reducing therefore the intensity of the 1809-keV $\gamma$-ray line characteristic of the $^{26}\mathrm{Al}$ $\beta$ decay [@Iliadis_96]. Thus it is important to constrain the $^{25}\mathrm{Al}(p,\gamma)^{26}\mathrm{Si}$ reaction rate. $^{26}$P is the most proton-rich bound phosphorus isotope. With a half-life of $43.7(6)$ ms and a $Q_{EC}$ value of $18258(90)$ keV [@Thomas2004], its $\beta$ decay can be studied over a wide energy interval. $\beta$-delayed $\gamma$-rays and protons from excited levels of $^{26}$Si below and above the proton separation energy of $5513.8(5)$ keV [@AME2012] were observed directly in previous experiments [@Thomas2004; @Cable_83; @Cable_84] and, more recently, indirectly from the Doppler broadening of peaks in the $\beta$-delayed proton-$\gamma$ spectrum [@Schwartz2015].
The contribution of novae to the abundance of $^{26}\mathrm{Al}$ in the galaxy was recently constrained by using experimental data on the $\beta$ decay of $^{26}$P [@Bennett2013]. In addition, $^{26}\mathrm{P}$ is a candidate to have a proton halo [@Brown1996; @Ren1996; @Gupta2002; @Liang2009]. Phosphorus isotopes are the lightest nuclei expected to have a ground state with a dominant contribution of a $\pi s_{1/2}$ orbital. Low orbital angular momentum orbitals enhance the halo effect, because higher $\ell$-values give rise to a confining centrifugal barrier. The low separation energy of $^{26}$P (143(200) keV [@AME2012], 0(90) keV[@Thomas2004]), together with the narrow momentum distribution and enhanced cross section observed in proton-knockout reactions [@Navin1998] give some experimental evidence for the existence of a proton halo in $^{26}$P. In this paper, we present a comprehensive summary of the $\beta$-delayed $\gamma$ decay of $^{26}$P measured at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University during a fruitful experiment for which selected results have already been reported in two separate shorter papers [@Bennett2013; @Schwartz2015]. In the present work, the Gamow-Teller strength, $B(GT)$, and the experimental $ft$ values are compared to theoretical calculations and to the decay of the mirror nucleus $^{26}$Na to investigate the Gamow-Teller strength and mirror asymmetry, respectively. A potential relationship between the mirror asymmetry and the existence of a proton halo in $^{26}$P is also discussed. Finally, in the last section, the calculated thermonuclear $^{25}$Al$(p,\gamma)^{26}$Si reaction rate, which was used in Ref. [@Bennett2013] to estimate the contribution of novae to the abundance of galactic $^{26}$Al, is tabulated for completeness. Experimental procedure\[sec:experiment\] ======================================== ![\[fig:Setup\] Schematic view of the experimental setup. 
The thick arrow indicates the beam direction. One of the 16 SeGA detectors was removed to show the placement of the GeDSSD.](Fig1.eps){width="45.00000%"} The experiment was carried out at the National Superconducting Cyclotron Laboratory (NSCL). A 150-MeV/u, 75-pnA primary beam of $^{36}\mathrm{Ar}$ was delivered by the Coupled Cyclotron Facility and impinged upon a 1.55-g/cm$^2$ Be target. The $^{26}\mathrm{P}$ ions were separated in flight from other fragmentation products according to their magnetic rigidity by the A1900 fragment separator [@Morrissey200390]. The Radio-Frequency Fragment Separator (RFFS) [@Bazin2009314] provided a further increase in beam purity before the beam was implanted into a 9-cm diameter, 1-cm thick planar germanium double-sided strip detector (GeDSSD) [@Larson201359]. To detect signals produced by both the implanted ions and the $\beta$ particles emitted during the decay, the GeDSSD was connected to two parallel amplification chains. This allowed the different amounts of energy deposited by implantations (low gain) and decays (high gain) to be detected in the GeDSSD. The GeDSSD was surrounded by the high-purity germanium detector array SeGA [@Mueller2001492] in its barrel configuration, which was used to measure the $\beta$-delayed $\gamma$ rays (see Fig. \[fig:Setup\]). ![\[fig:PID\] Particle identification plot obtained for a selection of runs during the early portion of the experiment, before the beam tune was fully optimized. The energy loss was obtained from one of the PIN detectors, and the time of flight was measured between the same detector and the scintillator placed at the focal plane of the A1900 separator. A low-gain energy signal in the GeDSSD was required. The color scale corresponds to the number of ions.](Fig2.eps){width=".5\textwidth"} The identification of the incoming beam ions was accomplished using time-of-flight and energy-loss signals.
The energy-loss signals were provided by a pair of silicon PIN detectors placed slightly upstream of the decay station. The time of flight was measured between one of these PINs and a plastic scintillator placed 25 m upstream, at the A1900 focal plane. Figure \[fig:PID\] shows a two-dimensional cluster plot of the energy loss versus the time of flight for the incoming beam, taken prior to a re-tune that improved the beam purity substantially for the majority of the experiment. A coincidence condition requiring a low-gain signal in the GeDSSD was applied to ensure the ions were implanted in the detector. It shows that the main contaminant in our beam was the radioactive isotone $^{24}\mathrm{Al}$ ($\sim$13%). During the early portion of the experiment, a small component of $^{25}\mathrm{Si}$ was also present in the beam. Its fraction was on average 2.1% during that period, diluted to 0.5% once the data acquired after the re-tune were included. Small traces of lighter isotones like $^{22}\mathrm{Na}$ and $^{20}\mathrm{F}$ were also present ($\sim$2.5%). The total secondary beam rate was on average 80 ions/s, and the overall purity of the implanted beam was 84%. This value of the beam purity differs from the previously reported value in Ref. [@Bennett2013], in which the implant condition was not applied. The $^{26}\mathrm{P}$ component was composed of the ground state and the known 164.4(1)-keV isomeric state [@Nishimura2014; @DPL2016]. Because of the short half-life of the isomer \[120(9) ns\] [@Nishimura2014] and the fact that it decays completely to the ground state of $^{26}\mathrm{P}$, our $\beta$-decay measurements were not affected by it. The data were collected event-by-event using the NSCL digital acquisition system [@Prokop2014]. Each channel provided its own time-stamp signal, which allowed coincidence gates to be built between the different detectors.
To select $\beta$-$\gamma$ coincidence events, the high-gain energy signals from the GeDSSD were used to indicate that a $\beta$ decay occurred. The subsequent $\gamma$ rays emitted from excited states of the daughter nuclei were selected by setting a 1.5-$\mu$s coincidence window. The 16 spectra obtained by each of the elements of SeGA were then added together after they were gain matched run-by-run to account for possible gain drifts during the course of the experiment. ![image](Fig3.eps){width=".85\textwidth"} Data Analysis and Experimental Results \[sec:data\] =================================================== As mentioned in Sec. \[sec:intro\], the data presented in this paper are from the same experiment described in Refs. [@Bennett2013; @Schwartz2015], but independent sorting and analysis routines were developed and employed. The values extracted are therefore slightly different, but consistent within uncertainties. New values derived in the present work are not intended to supersede those from Refs. [@Bennett2013; @Schwartz2015], but rather to complement them. In this section, the analysis procedure is described in detail and the experimental results are presented. Figure \[fig:spec\] shows the cumulative $\gamma$-ray spectrum observed in all the detectors of the SeGA array in coincidence with a $\beta$-decay signal in the GeDSSD. We have identified 48 photopeaks, of which 30 are directly related to the decay of $^{26}$P. Most of the other peaks were assigned to the $\beta$ decay of the main contaminant of the beam, $^{24}$Al. Peaks in the spectrum have been labeled by the $\gamma$-ray emitting nuclide. Twenty-two of the peaks correspond to $^{26}$Si, while eight of them correspond to $\beta$-delayed proton decays to excited states of $^{25}$Al followed by $\gamma$-ray emission. In this work we will focus on the decay to levels of $^{26}$Si as the $^{25}$Al levels have already been discussed in Ref. [@Schwartz2015]. 
![\[fig:calibr\](Upper panel) Energy calibration of SeGA $\gamma$-ray spectra using the $\beta$-delayed $\gamma$ rays emitted by $^{24}\mathrm{Al}$. The solid line is the result of a second degree polynomial fit. Energies and uncertainties are taken from [@Firestone20072319]. (Lower panel) Residuals of the calibration points with respect to the calibration line.](Fig4.eps){width=".5\textwidth"} $\bm{\gamma}$-ray Energy Calibration ------------------------------------ The energies of the $\gamma$ rays emitted during the experiment were determined from a calibration of the SeGA array. As mentioned in Sec. \[sec:experiment\] and in Refs. [@Schwartz2015; @Bennett2013], a gain-matching procedure was performed to align all the signals coming from the 16 detectors comprising the array. This alignment was done with the strongest background peaks, namely the 1460.8-keV line (from $^{40}\mathrm{K}$ decay) and the 2614.5-keV line (from $^{208}\mathrm{Tl}$ decay). The gain-matched cumulative spectrum was then absolutely calibrated *in situ* using the well-known energies of the $^{24}$Al $\beta$-delayed $\gamma$ rays emitted by $^{24}\mathrm{Mg}$, which cover a wide range in energy from 511 keV to almost 10 MeV [@Firestone20072319]. To account for possible non-linearities in the response of the germanium detectors, a second degree polynomial was used as the calibration function. Results of the calibration are shown in Fig. \[fig:calibr\]. The standard deviation of this fit is 0.3 keV, which includes the literature uncertainties of the $^{24}\mathrm{Mg}$ $\gamma$-ray energies. The systematic uncertainty was estimated from the residuals of room-background peaks not included in the fit. The lower panel of Fig. \[fig:calibr\] shows that these deviations are below 0.6 keV, with an average of 0.2 keV. Based on this, the systematic uncertainty was estimated to be 0.3 keV. ![\[fig:eff\]SeGA photopeak efficiency.
(Top panel) Results of a [Geant4]{} simulation \[solid line (red)\] compared to the efficiency measured with absolutely calibrated sources (black circles) and the known $^{24}\mathrm{Mg}$ lines (empty squares). The simulation and the $^{24}\mathrm{Mg}$ data have been scaled to match the source measurements. (Bottom panel) Ratio between the simulation and the experimental data. The shaded area (yellow) shows the adopted uncertainties.](Fig5.eps){width=".5\textwidth"} Efficiencies ------------ ### $\beta$-particle Efficiency \[sec:betaeff\] The $\beta$-particle detection efficiency of the GeDSSD can be determined by taking the ratio between the number of counts under a given photopeak in the $\beta$-gated $\gamma$-ray singles spectrum and in the ungated one. In principle, the $\beta$ efficiency depends on $Q_\beta$. To investigate this effect, we calculated the ratios between the gated and the ungated spectra for all the $^{24}\mathrm{Mg}$ peaks, which correspond to different combinations of $Q_\beta$, and found the ratio to be independent of the end-point energy of the $\beta$ particles, with an average of $\varepsilon_\beta(^{24}\mathrm{Mg})=(38.6\pm0.9)\%$. Because of the different implantation depths of $^{24}\mathrm{Al}$ and $^{26}\mathrm{P}$ ($^{24}\mathrm{Al}$ barely penetrates into the GeDSSD), we also calculated the gated-to-ungated ratios of the strongest peaks of $^{26}\mathrm{Si}$ (1797 keV) and its daughter $^{26}\mathrm{Al}$ (829 keV), obtaining a consistent average value for the efficiency of $\varepsilon_\beta=(65.2\pm0.7)\%$. The single value shared by $^{26}\mathrm{Si}$ and $^{26}\mathrm{Al}$ is explained by their common decay location in the GeDSSD. ### $\gamma$-ray Efficiency To obtain precise measurements of the $\gamma$-ray intensities, we determined the photopeak efficiency of SeGA. The photopeak efficiency was studied over a wide energy range between 400 keV and 8 MeV.
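The gated-to-ungated ratio above amounts to a simple division with propagated counting uncertainties. A minimal sketch (the photopeak counts are hypothetical, not values from this experiment; treating the two counts as independent is an approximation, since the gated spectrum is a subset of the ungated one):

```python
import math

def beta_efficiency(n_gated, n_ungated):
    """Beta-detection efficiency from the ratio of beta-gated to ungated
    photopeak counts; Poisson uncertainties propagated in quadrature."""
    eps = n_gated / n_ungated
    rel = math.sqrt(1.0 / n_gated + 1.0 / n_ungated)
    return eps, eps * rel

# hypothetical counts for a single photopeak
eps, deps = beta_efficiency(3860.0, 10000.0)
```

A binomial treatment of the subset relation would give slightly smaller error bars; the quadrature form above is conservative.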
The results of a [Geant4]{} [@Agostinelli2003250] Monte-Carlo simulation were compared with the relative intensities of the well-known $^{24}\mathrm{Mg}$ lines also used in the energy calibration. The high-energy lines of this beam contaminant made it possible to benchmark the simulation at energies higher than those reachable with standard sources. In addition, the comparison of the simulation to data taken offline with absolutely-calibrated $^{154,155}\mathrm{Eu}$ and $^{56}\mathrm{Co}$ sources allowed us to scale the simulation to determine the efficiency at any energy. The scaling factor was 0.91. The statistical uncertainty of this scaling factor was inflated by $\sqrt{\chi^2/\nu}$, yielding an uncertainty of 1.5%, which was propagated into the efficiency. The magnitude of this factor is consistent with [Geant4]{} simulations of the scatter associated with coincidence summing effects [@Semkow1990]. Figure \[fig:eff\] shows the adopted efficiency curve compared to the source data and the $^{24}\mathrm{Mg}$ peak intensities. The accuracy of this photopeak efficiency was estimated to be $\delta\varepsilon/\varepsilon=1.5\%$ for energies below 2800 keV and 5% above that energy. $\bm{\gamma}$-ray intensities \[subsec:intensities\] ---------------------------------------------------- ![\[fig:fit\] (Top panel) Example of a typical fit to the 1960-keV peak, using the function of Eq. (\[eq:EMG\]). The dashed line corresponds to the background component of the fit. (Bottom panel) Residuals of the fit in terms of the standard deviation $\sigma$.](Fig6.eps){width=".5\textwidth"} The intensities of the $\gamma$ rays emitted in the $\beta$ decay of $^{26}\mathrm{P}$ were obtained from the areas of the photopeaks shown in the spectrum of Fig. \[fig:spec\].
We used an exponentially modified Gaussian (EMG) function to describe the peak shape together with a linear function to model the local background: $$F=B+\frac{N}{2\tau}e^{\frac{1}{2\tau}\left (2\mu+\frac{\sigma^2}{\tau} -2x \right )}\mathrm{erfc}\!\left[\frac{ \sigma^2+\tau(\mu-x)}{\sqrt{2}\sigma\tau}\right ], \label{eq:EMG}$$ where $B$ is a linear background, $N$ is the area below the curve, $\mu$ and $\sigma$ are the centroid and the width of the Gaussian, respectively, and $\tau$ is the decay constant of the exponential; erfc is the complementary error function. The parameters describing the width of the Gaussian ($\sigma$) and the exponential constant ($\tau$) were determined by fitting narrow isolated peaks at various energies. The centroids and the areas below the peaks were obtained from the fits. When several peaks lay close together, a multi-peak fitting function was applied using the same values of $\tau$ and $\sigma$ for all the peaks in the region. In general the fits were very good, with reduced chi-squared ($\chi^2/\nu$) close to unity. In those cases where $\chi^2/\nu$ was greater than one, the statistical uncertainties were inflated by multiplying them by $\sqrt{\chi^2/\nu}$. Fig. \[fig:fit\] shows an example of the fit to the 1960-keV peak. ### Absolute normalization The total number of $^{26}\mathrm{P}$ ions implanted and subsequently decaying in the GeDSSD is, in principle, needed to obtain an absolute normalization of the $\gamma$-ray intensities, and hence the $\beta$ branchings of $^{26}\mathrm{Si}$ levels.
The number of $\gamma$ rays observed at energy $E$ is: $$N_\gamma(E)=N_0 \times \varepsilon_{\gamma}(E)\times \varepsilon_{\beta}(E) \times I_{\gamma}(E), \label{eq:abs_intensity}$$ where $N_0$ is the total number of ions decaying, $\varepsilon_{\gamma(\beta)}$ are the efficiencies to detect $\gamma$ rays ($\beta$ particles), and $I_{\gamma}$ is the absolute $\gamma$-ray intensity.

Table \[tab:levels\]: Energies and $\beta$ feedings of the $^{26}\mathrm{Si}$ levels populated in the $\beta$ decay of $^{26}\mathrm{P}$, together with the energies and absolute intensities of the observed $\gamma$ rays (energies in keV; feedings and intensities in %).

  $E_x$       BR        $J_i^\pi$   $J_f^\pi$   $E_\gamma$   $I_\gamma$
  ----------  --------  ----------  ----------  -----------  ----------
  1797.1(3)             $2_1^+$     $0_1^+$     1797.1(3)
  2786.4(3)   <0.39     $2_2^+$     $2_1^+$     989.0(3)     5.7(3)
                                    $0_1^+$     2786.5(4)    3.4(2)
  3756.8(3)   1.9(2)    $3_1^+$     $2_2^+$     970.3(3)     1.15(9)
                                    $2_1^+$     1959.8(4)    1.7(1)
  4138.6(4)   6.2(4)    $2_3^+$     $2_2^+$     1352.2(4)    0.48(7)
                                    $2_1^+$     2341.2(4)    4.7(3)
                                    $0_1^+$     4138.0(5)    1.0(1)
  4187.6(4)   4.4(3)    $3_2^+$     $2_2^+$     1401.3(3)    3.8(2)
                                    $2_1^+$     2390.1(4)    2.2(1)
  4445.1(4)   0.8(2)    $4_1^+$     $2_2^+$                  0.08(6)
                                    $2_1^+$     2647.7(5)    1.7(1)
  4796.4(5)   0.56(9)   $4_2^+$     $2_2^+$     2999.1(5)    0.56(9)
  4810.4(4)   3.1(2)    $2_4^+$     $2_2^+$     2023.9(3)    3.1(2)
  5146.5(6)   0.18(5)   $2_5^+$     $2_2^+$     2360.0(6)    0.18(5)
  5288.9(4)   0.76(7)   $4_3^+$     $4_1^+$     842.9(3)     0.33(7)
                                    $3_1^+$     1532.1(5)    0.43(7)
                                    $2_1^+$                  <0.12
  5517.3(3)   2.7(2)    $4_4^+$     $4_1^+$     1072.1(5)    0.69(9)
                                    $3_2^+$     1329.9(3)    1.4(1)
                                    $3_1^+$     1759.7(5)    0.47(6)
                                    $2_2^+$     2729.9(5)    0.29(5)
  5929.3(6)   0.15(5)   $3_3^+$     $3_2^+$     1741.7(9)    0.15(5)
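Returning to the line-shape analysis: the EMG model of Eq. (\[eq:EMG\]) can be fitted with standard least-squares tools. A minimal sketch on synthetic data (all parameter values below are illustrative, not results from this experiment):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(x, area, mu, sigma, tau, b0, b1):
    """Exponentially modified Gaussian of Eq. (eq:EMG) on a linear background."""
    gauss_exp = np.exp((2.0 * mu + sigma**2 / tau - 2.0 * x) / (2.0 * tau))
    tail = erfc((sigma**2 + tau * (mu - x)) / (np.sqrt(2.0) * sigma * tau))
    return b0 + b1 * x + (area / (2.0 * tau)) * gauss_exp * tail

# synthetic peak near 1960 keV with Gaussian noise (illustrative values)
rng = np.random.default_rng(0)
x = np.arange(1940.0, 1980.0, 0.5)
y = emg(x, 5000.0, 1960.0, 1.2, 0.8, 20.0, 0.0) + rng.normal(0.0, 3.0, x.size)

# bounds keep sigma and tau away from zero, where the model is singular
popt, pcov = curve_fit(
    emg, x, y, p0=[4000.0, 1959.0, 1.0, 1.0, 10.0, 0.0],
    bounds=([0.0, 1955.0, 0.5, 0.3, -100.0, -10.0],
            [1e6, 1965.0, 5.0, 5.0, 100.0, 10.0]),
)
centroid = popt[1]  # fitted Gaussian centroid mu, in keV
```

In a multi-peak region, the same idea applies with one EMG term per peak sharing common $\sigma$ and $\tau$ parameters, as described in the text.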
To circumvent the uncertainty associated with the total number of ions decaying, we used the ratio of the number of $\beta$ decays of $^{26}\mathrm{P}$ proceeding to its daughter $^{26}\mathrm{Si}$ \[$61(2)\%$\] [@Thomas2004], together with the absolute intensity of the 829-keV $\gamma$ rays emitted in the $\beta$ decay of $^{26}\mathrm{Si}$ \[$21.9(5)\%$\] [@Endt19981], to calculate the intensity of the 1797-keV line, which is the most intense $\gamma$ ray emitted in the decay of $^{26}\mathrm{P}$ (see Table \[tab:levels\]). To do so, we applied Eq. (\[eq:abs\_intensity\]) to these two $\gamma$ rays: $$\label{eq:intensity_Al} N_\gamma(829) = N_{^{26}\mathrm{Si}} \varepsilon_{\gamma}(829) \varepsilon_{\beta}(829) I_{\gamma}(829)$$ $$N_\gamma(1797) = N_{^{26}\mathrm{P}} \varepsilon_{\gamma}(1797) \varepsilon_{\beta}(1797) I_{\gamma}(1797) \label{eq:intensity_Si}$$ In the ratio of Eqs. (\[eq:intensity\_Al\]) and (\[eq:intensity\_Si\]), the only unknown is the intensity of the 1797-keV $\gamma$ ray, because the $\beta$ efficiencies can be obtained from the $\beta$-gated to ungated ratios discussed in Sec. \[sec:experiment\]. The value obtained for the intensity of the 1797-keV $\gamma$ ray is thus 58(3)%, which agrees with, and is more precise than, the value 52(11)% reported in Ref. [@Thomas2004]. The rest of the $\gamma$-ray intensities were determined relative to this value by employing the efficiency curve, and they are presented in Table \[tab:levels\]. We also report an upper limit on the intensity of one $\gamma$ ray which was expected to be near the threshold of our sensitivity given the intensity predicted by theory. $\bm{\beta}$-$\bm{\gamma}$-$\bm{\gamma}$ coincidences \[subsec:coincidences\] ----------------------------------------------------------------------------- ![\[fig:Coincidences2\] (Color online) $\beta$-$\gamma$-$\gamma$ coincidence spectrum gating on the 1797-keV $\gamma$ rays (blue).
The hatched histogram (green) shows coincidences with the continuum background in a relatively broad region above the peak gate. The background bins are 16 keV wide and are normalized to the expected background per 2 keV from random coincidences. The strongest peaks corresponding to $\gamma$ rays emitted in coincidence are indicated.](Fig7.eps){width=".5\textwidth"} The 16-fold granularity of SeGA allowed us to obtain $\beta$-$\gamma$-$\gamma$ coincidence spectra, which helped to interpret the $^{26}\mathrm{P}$ decay scheme. Fig. \[fig:Coincidences2\] shows the $\gamma$-ray coincidence spectrum gated on the 1797-keV peak, where we can see several peaks corresponding to $\gamma$ rays detected in coincidence. To estimate the background from random coincidences, we created another histogram gated on the background close to the peak and normalized it to the number of counts within the gated regions. At some energies the background estimate is too high. This is because of a contribution from real $\gamma$-$\gamma$ coincidences involving Compton background, which should not be normalized according to the random-coincidence assumption. ![image](Fig8.eps){width=".75\textwidth"} Fig. \[fig:Coincidences\] presents a sample of peaks observed in coincidence when gating on some of the other intense $\gamma$ rays observed. From this sample we can see that the coincidence technique helps to cross-check the decay scheme. For example, Fig. \[fig:Coincidences\](a) shows clearly that the 1401-keV $\gamma$ ray is emitted in coincidence with the 989-keV $\gamma$ ray, indicating that the former comes from a higher-lying level. In the same way, we can see in Fig. \[fig:Coincidences\](b) that the 1330-keV $\gamma$ ray is emitted from a level above the 4187-keV level. From the gated spectra, some information can also be extracted from the missing peaks. As Fig.
\[fig:Coincidences\](c) shows, by gating on the 2024-keV $\gamma$ ray the 970-keV peak disappears, leaving only the 989-keV peak, which means that the 970-keV $\gamma$ ray comes from a level that is not connected to these two levels by any $\gamma$-ray cascade. Fig. \[fig:Coincidences\](d) clearly shows the coincidence between the $\gamma$ ray emitted from the first $2^+$ state at 1797 keV to the ground state of $^{26}$Si and the 2341-keV $\gamma$ ray from the third $2^+$ state to the first excited state. These coincidence procedures were systematically analyzed for all possible combinations of $\gamma$ rays and the results are summarized in Table \[tab:coincidence\] in the form of a 2D matrix, where a checkmark (✓) means the $\gamma$ rays were detected in coincidence. The condition for a $\gamma$ ray to be listed in coincidence with another is that its peak be at least 3$\sigma$ above the estimated random-coincidence background. It is worth noting that this background estimate is somewhat conservative; therefore, the significance of some of the peaks is underestimated.
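The time-stamped, channel-by-channel data stream described in Sec. \[sec:experiment\] reduces coincidence building to a window search over event times. A schematic sketch (the 1.5-$\mu$s window is from the text; the event lists and energies below are purely illustrative):

```python
def build_coincidences(beta_times, gamma_events, window=1.5e-6):
    """Pair each beta timestamp (s) with every gamma event (time, energy)
    arriving within `window` seconds after it."""
    pairs = []
    for tb in beta_times:
        for tg, eg in gamma_events:
            if 0.0 <= tg - tb <= window:
                pairs.append((tb, eg))
    return pairs

# toy event lists (times in s, energies in keV)
betas = [1.0, 2.0]
gammas = [(1.0000005, 1797.0), (1.0000012, 989.0), (1.8, 511.0), (2.0000003, 829.0)]
pairs = build_coincidences(betas, gammas)
```

A production sort would use sorted timestamps and a sliding window rather than this quadratic loop, but the gating logic is the same.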
843 970 989 1072 1330 1352 1401 1532 1660 1742 1760 1797 1960 2024 2341 2360 2390 2648 2730 2787 2999 4138 ------ ----- ----- ----- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ 843 - - - - - - - - - - - - - - - - - 970 - - - - - - - - - - - - - - - - - 989 - - - - - - - 1072 - - - - - - - - - - - - - - - - 1330 - - - - - - - - - - - - - - - - - 1352 - - - - - - - - - - - - - - - - - - 1401 - - - - - - - - - - - - - 1532 – - - - - - - - - - - - - - - - 1660 - - - - - - - - - - - - - - - - - 1742 - - - - - - - - - - - - - - - - - - - - 1760 - - - - - - - - - - - - - - - - - 1797 - - - - - - - - - - - - - - 1960 - - - - - - - - - - - - - - - - 2024 - - - - - - - - - - - - - - - - - 2341 - - - - - - - - - - - - - - - - - - - 2360 - - - - - - - - - - - - - - - - - - - 2390 - - - - - - - - - - - - - - - - - - - 2648 - - - - - - - - - - - - - - - - - - 2730 - - - - - - - - - - - - - - - - - - - 2787 - - - - - - - - - - - - - - 2999 - - - - - - - - - - - - - - - - - - - - 4138 - - - - - - - - - - - - - - - - - - - - - Decay scheme of $\bm{^{26}\mathrm{P}}$ -------------------------------------- Fig. \[fig:decay\] displays the $^{26}\mathrm{P}$ $\beta$-decay scheme deduced from the results obtained in this experiment. Only those levels populated in the $\beta$ decay are represented. This level scheme was built in a self-consistent way by taking into account the $\gamma$-ray energies and intensities observed in the singles spectrum of Fig. \[fig:spec\] and the $\beta$-$\gamma$-$\gamma$ coincidence spectra described in Sec. \[subsec:coincidences\]. ![image](Fig9.eps){width="\textwidth"} The excitation energies of $^{26}\mathrm{Si}$ bound levels, their $\beta$-feedings, the energies of the $\gamma$ rays, and the absolute intensities measured in this work are shown in Table \[tab:levels\]. 
### $^{26}\mathrm{Si}$ level energies, spins and parities Level energies of $^{26}\mathrm{Si}$ populated in the $\beta$-delayed $\gamma$ decay of $^{26}\mathrm{P}$ were obtained from the measured $\gamma$-ray energies, including a correction for the nuclear recoil. The excitation energy values of the levels listed in Table \[tab:levels\] were obtained from the weighted average of all the possible $\gamma$-ray cascades de-exciting each level. To assign spins and parities we compared the deduced level scheme with USDB shell-model calculations and took into account $\beta$-decay angular momentum selection rules, finding a one-to-one correspondence for all the levels populated by allowed transitions, with fair agreement in the level energies within theoretical uncertainties of a few hundred keV (see Fig. \[fig:decay\]). ### $\beta$-feedings The $\beta$ branching ratio to the $i$-th excited energy level can be determined from the $\gamma$-ray intensities: $$\label{eq:BR} BR_i = I_{i,\text{out}}-I_{i,\text{in}},$$ where $I_{i,\text{out}}$ ($I_{i,\text{in}}$) represents the total $\gamma$-ray intensity observed decaying out of (into) the $i$-th level. The $\beta$-decay branches deduced from this experiment are given in Table \[tab:BR\], where they are also compared to previous measurements of $^{26}\mathrm{P}$ $\beta$ decay [@Thomas2004]. To investigate possible missing intensity from the Pandemonium effect [@Hardy1977], we used a shell-model calculation to estimate the $\gamma$-ray intensities of all possible transitions from bound states feeding each particular level, and found them to be on the order of the uncertainty or (usually) much lower.
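Equation (\[eq:BR\]) is a simple bookkeeping exercise once the $\gamma$-ray intensities are tabulated by initial and final level. A sketch on a toy level scheme (the intensities below are illustrative, not values from this experiment):

```python
def beta_feedings(transitions):
    """BR_i = I_out(i) - I_in(i), Eq. (eq:BR).
    transitions: list of (E_initial, E_final, intensity in % of decays)."""
    out_sum, in_sum = {}, {}
    for ei, ef, inten in transitions:
        out_sum[ei] = out_sum.get(ei, 0.0) + inten
        in_sum[ef] = in_sum.get(ef, 0.0) + inten
    levels = set(out_sum) | set(in_sum)
    return {e: out_sum.get(e, 0.0) - in_sum.get(e, 0.0) for e in levels}

# toy scheme: a 3000-keV level feeds an 1800-keV level;
# both also decay directly to the ground state (0 keV)
toy = [(3000.0, 1800.0, 4.0), (3000.0, 0.0, 6.0), (1800.0, 0.0, 50.0)]
br = beta_feedings(toy)
```

The ground-state entry of the result is not a feeding and is ignored; negative net intensity for an excited level signals missing feeding or an upper limit, as for the $2_2^+$ state in the text.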
Table \[tab:BR\]: $\beta$-decay branching ratios (in %) and $\log ft$ values for transitions to $^{26}\mathrm{Si}$ levels deduced in this work, compared with the previous results of Ref. [@Thomas2004] and with shell-model calculations.

  $E_x$ (keV)   BR        BR [@Thomas2004]   BR (theory)   $\log ft$   $\log ft$ [@Thomas2004]   $\log ft$ (theory)
  -----------   -------   ----------------   -----------   ---------   -----------------------   ------------------
  1797          41(3)     44(12)             47.22         4.89(3)     4.89(17)                  4.81
  2786          <0.39     3.3(20)            0.37                      5.87(72)                  6.77
  3757          1.9(2)    2.68(68)           1.17          5.94(4)     5.81(15)                  6.135
  3842                    1.68(47)                                     6.00(17)
  4139          6.2(4)    1.78(75)           2.97          5.37(3)     5.93(32)                  5.634
  4188          4.4(3)    2.91(71)           8.88          5.51(3)     5.71(14)                  5.182
  4445          0.8(2)                       1.11          6.23(8)                               6.071
  4796          0.56(9)                      0.06          6.31(7)                               7.274
  4810          3.1(2)                       4.45          5.57(3)                               5.934
  5147          0.18(5)                      0.03          6.7(1)                                7.474
  5289          0.76(7)                      0.60          6.09(6)                               6.158
  5517          2.7(2)                       3.96          5.51(4)                               5.262
  5929          0.15(5)   17.96(90)          10.08         6.7(1)      4.60(3)                   4.810

Discussion =========== Comparison to previous values of $\bm{^{26}\mathrm{Si}}$ level energies ----------------------------------------------------------------------- We compare in Table \[tab:energies\] the energies, spins, and parities deduced in this work with previous values available in the literature [@Thomas2004; @PhysRevC.75.062801; @Komatsubara2014; @Doherty2015]. The results of Ref. [@Thomas2004] correspond to $\beta$ decay, thus the same levels are expected to be populated. We observed six levels of $^{26}\mathrm{Si}$ for the first time in the $\beta$ decay of $^{26}\mathrm{P}$. These six levels had previously been reported in experiments using nuclear reactions to populate them [@PhysRevC.75.062801; @Komatsubara2014; @Doherty2015]. The previously reported energies for these levels are in good agreement with the results obtained in this work. However, it is worth mentioning a significant discrepancy (up to 6 keV) with the energies obtained in Refs. [@PhysRevC.75.062801; @Doherty2015] for the two $\gamma$ rays emitted from the $4_4^+$ state to the $3_1^+$ and $2_2^+$ states (1759.7 and 2729.9 keV, respectively). Despite these discrepancies in the $\gamma$-ray energies, the reported excitation energy of the level is in excellent agreement with our results.
However, it should be noted that the $\gamma$-ray branching ratios are inconsistent for the 1759.7-keV transition. The 3842-keV level reported in [@Thomas2004] was not observed in the present work. In agreement with [@PhysRevC.75.062801; @Komatsubara2014; @Doherty2015], we conclude that this level does not exist, as the 2045-keV $\gamma$ ray that would be emitted from this level to the first excited state is seen neither in the spectrum of Fig. \[fig:spec\] nor in the coincidence spectrum with the 1797-keV peak (Fig. \[fig:Coincidences2\]). The 4810-keV level was previously tentatively assigned as a $2^+$ state, but this assignment was not firm because of the proximity of another level at 4830 keV assigned as a $0^+$. The fact that the 2024-keV line appears in the spectrum confirms that the spin and parity is $2^+$, $3^+$, or $4^+$: if this level were $0^+$, the $\beta$-decay transition populating it would be second forbidden ($\Delta J=3$, $\Delta\pi=0$) and highly suppressed. We also observed the two levels located just above the proton separation energy ($S_p=5513.8$ keV). The first one corresponds to a $4^+$ state with an energy of 5517 keV. This level was also reported in Refs. [@PhysRevC.75.062801; @Komatsubara2014]. The second level, at 5929 keV, was previously observed in $\beta$-delayed proton emission by Thomas *et al.* [@Thomas2004] and more recently reported in our previous paper describing the present experiment [@Bennett2013]. The results presented here, based on the same data set but an independent analysis, confirm the evidence for the observation of a $\gamma$ ray emitted from that level in the present experiment.
Table \[tab:energies\]: Spins, parities, and excitation energies (keV) of $^{26}\mathrm{Si}$ levels from this work, compared with previous literature values [@Thomas2004; @PhysRevC.75.062801; @Komatsubara2014; @Doherty2015] and with the USDB shell-model calculation.

  This work               [@Thomas2004]             [@PhysRevC.75.062801]    [@Komatsubara2014]       [@Doherty2015]           USDB
  $J_n^\pi$  $E_x$        $J_n^\pi$  $E_x$          $J_n^\pi$  $E_x$         $J_n^\pi$  $E_x$         $J_n^\pi$  $E_x$         $J_n^\pi$  $E_x$
  $2_1^+$    1797.1(3)    $2_1^+$    1795.9(2)      $2_1^+$    1797.3(1)     $2_1^+$    1797.4(4)     $2_1^+$    1797.3(1)     $2_1^+$    1887
  $2_2^+$    2786.4(3)    $2_2^+$    2783.5(4)      $2_2^+$    2786.4(2)     $2_2^+$    2786.8(6)     $2_2^+$    2786.4(2)     $2_2^+$    2948
                                                    $0_2^+$    3336.4(6)     $0_2^+$    3335.3(4)     $0_2^+$    3336.4(2)
  $3_1^+$    3756.8(3)    $(3_1^+)$  3756(2)        $3_1^+$    3756.9(2)     $3_1^+$    3756.9(4)     $3_1^+$    3757.1(3)     $3_1^+$    3784
                          $(4_1^+)$  3842(2)
  $2_3^+$    4138.6(4)    $2_3^+$    4138(1)        $2_3^+$    4139.3(7)     $2_3^+$    4138.6(4)     $2_3^+$    4138.8(13)    $2_3^+$    4401
  $3_2^+$    4187.6(4)    $3_2^+$    4184(1)        $3_2^+$    4187.1(3)     $3_2^+$    4187.4(4)     $3_2^+$    4187.2(4)     $3_2^+$    4256
  $4_1^+$    4445.1(4)                              $4_1^+$    4446.2(4)     $4_1^+$    4445.2(4)     $4_1^+$    4445.5(12)    $4_1^+$    4346
  $4_2^+$    4796.4(5)                              $4_2^+$    4798.5(5)     $4_2^+$    4795.6(4)     $4_2^+$    4796.7(4)     $4_2^+$    4893
  $2_4^+$    4810.4(4)                              $(2_4^+)$  4810.7(6)     $(2_4^+)$  4808.8(4)     $2_4^+$    4811.9(4)     $2_4^+$    4853
                                                    $(0_3^+)$  4831.4(10)    $(0_3^+)$  4830.5(7)     $0_3^+$    4832.1(4)
  $2_5^+$    5146.5(6)                              $2_5^+$    5146.7(9)     $2_5^+$    5144.5(4)     $2_5^+$    5147.4(8)     $2_5^+$    5303
  $4_3^+$    5288.9(4)                              $4_3^+$    5288.2(5)     $4_3^+$    5285.4(7)     $4_3^+$    5288.5(7)     $4_3^+$    5418
  $4_4^+$    5517.3(3)                              $4_4^+$    5517.2(5)     $4_4^+$    5517.8(11)    $4_4^+$    5517.0(5)     $4_4^+$    5837
                                                    $1_1^+$    5677.0(17)    $1_1^+$    5673.6(10)    $1_1^+$    5675.9(11)
                                                                             $0_4^+$    5890.0(10)    $0_4^+$    5890.1(6)
  $3_3^+$    5929.3(6)    $3_1^+$    5929(5)[^1]                                                                               $3_3^+$    6083

$\bm{ft}$ values and Gamow-Teller strength ------------------------------------------ As mentioned in Sec.
\[sec:intro\], the calculation of the experimental $ft$ values requires the measurement of three fundamental quantities: (a) the half-life, (b) the branching ratio, and (c) the $Q$ value of the decay. The experimental value of the half-life and the semiempirical $Q$ value are $t_{1/2}=43.7(6)$ ms and $Q_{EC}=18250(90)$ keV, respectively. Both values were taken from Ref. [@Thomas2004]. The branching ratios from the present work are listed in Table \[tab:levels\]. The partial half-lives $t_i$ are thus calculated as: $$\label{eq:partial_half_life} t_i = \frac{t_{1/2}}{BR_i}(1+P_{EC}),$$ where $BR_i$ is the $\beta$-branching ratio of the $i$-th level and $P_{EC}$ is the fraction of electron capture, which can be neglected for the light nuclide $^{26}$P. The statistical phase-space factors $f$ were calculated with the parametrization reported in [@Wilkinson197458], including additional radiative [@WILKINSON1973_2] and diffuseness corrections [@PhysRevC.18.401]. The uncertainty associated with this calculation is 0.1%, which is added in quadrature to the uncertainty derived from the 0.5% uncertainty of the $Q_{EC}$ value. Table \[tab:BR\] shows the $\beta$ branches and $\log\!ft$ values for the transitions to excited levels of $^{26}$Si compared to the previous values reported in [@Thomas2004]. For the first excited state, our estimate of the $\beta$ feeding is consistent with the previous result. In the case of the second excited state, the previous value is one order of magnitude larger than our upper limit. This is a consequence of the new levels we observed. The large branching ratios observed for the $2_3^+$ and the $3_2^+$ states compared to previous results, 6.2(4)% and 4.4(3)%, respectively, are noteworthy. The reason for this difference is the observation of new $\gamma$ rays emitted by those levels, which have now been accounted for.
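Equation (\[eq:partial\_half\_life\]) combined with a phase-space factor $f$ gives $\log ft$ directly. A minimal sketch ($t_{1/2}$ and the 41% feeding of the first excited state are quoted in the text; the factor $f$ below is purely illustrative, since computing it from the parametrization of [@Wilkinson197458] is beyond this sketch):

```python
import math

def log_ft(t_half_s, branching, f, p_ec=0.0):
    """log10(f * t_i) with t_i = (t_1/2 / BR) * (1 + P_EC),
    Eq. (eq:partial_half_life); P_EC is negligible for 26P."""
    t_partial = t_half_s / branching * (1.0 + p_ec)
    return math.log10(f * t_partial)

# t1/2 = 43.7 ms, BR = 41%; hypothetical phase-space factor f
value = log_ft(43.7e-3, 0.41, 7.3e5)
```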
The new levels, together with the unobserved state at 3842 keV, explain all the discrepancies between the results reported here and the literature values [@Thomas2004]. As far as the $\log\! ft$ values are concerned, the agreement for the first excited state is very good; at higher energies, the discrepancies in the $\log\! ft$ values are directly related to those in the branching ratios. ### Comparison to theory Theoretical calculations were also performed with a shell-model code: wave functions were calculated in the full $sd$ shell with the USDB interaction, and the corresponding $\beta$-decay transitions of $^{26}$P to $^{26}$Si levels were deduced. Fig. \[fig:decay\] shows the comparison of the $^{26}\mathrm{Si}$ level energies deduced in this $^{26}$P $\beta$-decay work with the same levels predicted by the calculation. We observe a fair agreement in the level energies, but the theoretical values are systematically higher. The r.m.s. and maximum deviations between theory and experiment are 109 and 320 keV, respectively. From a direct comparison we also see that we have measured all the states populated by the allowed transitions predicted by the shell-model calculation. The experimental $\log\! ft$ values presented in Table \[tab:BR\] were determined from the measured branching ratios combined with the known values of $Q_{EC}$ and the half-life [@Thomas2004]. Theoretical Gamow-Teller strengths were obtained from the matrix elements of the transitions to states of $^{26}$Si populated in the $\beta$ decay of $^{26}$P.
To compare them to the experimental results, the experimental $B(GT)$ values were calculated from the $ft$ values through the expression $$\label{eq:BGT} B(GT)=\frac{2\mathcal{F}t}{\lambda^2 ft},$$ where $\mathcal{F}t= 3072.27\pm0.62$ s [@Hardy2015] is the average corrected $ft$ value from $T=1$ $0^+\rightarrow 0^+$ superallowed Fermi $\beta$ decays and $\lambda=g_A/g_V$ is the ratio of the axial-vector and vector coupling constants.

Table \[tab:bgt\]: Experimental $B(GT)$ values compared with the quenched USDB shell-model predictions.

  $E_x^{\text{exp}}$ (keV)   $B(GT)_{\text{exp}}$   $J_n^\pi$   $E_x^{\text{th}}$ (keV)   $B(GT)_{\text{th}}$
  ------------------------   --------------------   ---------   -----------------------   -------------------
  1797                       0.048(3)               $2_1^+$     1887                      0.0606
  2786                       <0.0007                $2_2^+$     2948                      0.0007
  3757                       0.0044(4)              $3_1^+$     3784                      0.0029
  4139                       0.016(1)               $2_3^+$     4401                      0.009
  4188                       0.0117(1)              $3_2^+$     4256                      0.0256
  4445                       0.0023(4)              $4_1^+$     4346                      0.0033
  4796                       0.0018(3)              $4_2^+$     4893                      0.0002
  4810                       0.0103(7)              $2_4^+$     4853                      0.0161
  5147                       0.0007(2)              $2_5^+$     5303                      0.0001
  5289                       0.0031(4)              $4_3^+$     5418                      0.0027
  5517                       0.012(1)               $4_4^+$     5837                      0.0213

Table \[tab:bgt\] shows the comparison between the experimental and theoretical $B(GT)$ values. A quenching factor $q=0.77$ ($q^2=0.6$) was applied to the shell-model calculation [@Wildenthal1983]. Theoretical predictions overestimate the experimental values for the transitions to the $2^+_1$, $3^+_2$, $4^+_1$, $2^+_4$, and $4^+_4$ states, and slightly underestimate the experimental $B(GT)$ values for the rest of the states up to 5.9 MeV. The most significant differences are for the $4_2^+$ and the $2_5^+$ levels, for which the predicted $B(GT)$ values differ from the experimental ones by almost one order of magnitude. A possible explanation for this difference is mixing between nearby levels. ![\[fig:BGT\] Summed Gamow-Teller strength distribution of the $\beta$ decay of $^{26}$P up to 5.9 MeV excitation energy. The results of the present experiment are compared to previous results [@Thomas2004] and shell-model calculations.
A quenching factor $q^2=0.6$ was used in the theoretical calculation.](Fig10.eps){width=".45\textwidth"}

Table \[tab:mirror\]: $ft$ values for the $\beta^+$ decay of $^{26}\mathrm{P}$ and for the mirror $\beta^-$ decay of $^{26}\mathrm{Na}$ [@PhysRevC.71.044309], together with the corresponding mirror-asymmetry parameter $\delta$, compared with the previous results of Ref. [@Thomas2004].

  $E_x(^{26}\mathrm{Si})$ (keV)   $ft(^{26}\mathrm{P})$ (s)   $J_n^\pi$   $E_x(^{26}\mathrm{Mg})$ (keV)   $ft(^{26}\mathrm{Na})$ (s)   $\delta$ (%)   $\delta$ (%) [@Thomas2004]
  1797   7.9(5)$\times 10^4$    $2_1^+$   1809   5.23(2)$\times 10^4$   51(10)    50(60)
  3757   8.7(8)$\times 10^5$    $3_1^+$   3941   7.5(2)$\times 10^5$    16(11)    10(40)
  4139   2.4(2)$\times 10^5$    $2_3^+$   4332   4.22(9)$\times 10^5$   -43(5)    110(160)
  4188   3.2(2)$\times 10^5$    $3_2^+$   4350   2.16(4)$\times 10^5$   50(10)    110(70)
  4445   1.7(7)$\times 10^6$    $4_1^+$   4319   1.43(3)$\times 10^6$   20(50)
  4796   2.1(3)$\times 10^6$    $4_2^+$   4901   1.63(7)$\times 10^6$   29(18)
  4810   3.7(3)$\times 10^5$    $2_4^+$   4835   1.85(2)$\times 10^5$   100(16)
  5147   5.6(20)$\times 10^6$   $2_5^+$   5291   2.0(3)$\times 10^7$    -72(11)
  5289   1.2(2)$\times 10^6$    $4_3^+$   5476   7.9(40)$\times 10^7$   -98(1)
  5517   3.2(3)$\times 10^5$    $4_4^+$   5716   1.71(3)$\times 10^5$   87(18)

Fig. \[fig:BGT\] shows the summed Gamow-Teller strength distribution of the decay of $^{26}$P for bound levels up to 5517 keV. In this figure we compare the results obtained in this work with the previous results and with the shell-model calculation. The agreement with the previous experimental results is good for the first excited state, with a small difference that is consistent within uncertainties. As the energy increases the differences become more significant, with our results slightly below the previous ones until the contribution of the new levels is added. For energies above 4.1 MeV, the results from the previous experiment are clearly below our results. If we compare the present data with the theoretical prediction using the typical quenching factor of $q^2=0.6$, we see that the theory overestimates the summed Gamow-Teller strength in the excitation energy region below 5.9 MeV.
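The experimental $B(GT)$ values in Table \[tab:bgt\] follow from the tabulated $ft$ values once the $(g_A/g_V)^2$ coupling-ratio factor is included. A sketch for the 1797-keV transition ($ft = 7.9\times 10^4$ s from Table \[tab:mirror\]; the free-nucleon value $g_A/g_V \simeq 1.27$ is an assumption of this sketch):

```python
FT_SUPERALLOWED = 3072.27  # s, average corrected Ft value [Hardy2015]
LAMBDA = 1.27              # g_A/g_V; free-nucleon value, an assumption here

def b_gt(ft_s):
    """Gamow-Teller strength of a pure GT transition from its ft value."""
    return 2.0 * FT_SUPERALLOWED / (LAMBDA**2 * ft_s)

bgt_1797 = b_gt(7.9e4)  # ft of the 2_1+ transition, Table tab:mirror
```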
If a quenching factor of 0.47 were applied to the shell-model calculations instead, the agreement would be almost perfect in this energy region. However, this does not necessarily imply that the value of $q^2=0.6$ is inapplicable, because only a small energy range was considered for the normalization. In fact, most of the Gamow-Teller strength proceeds to unbound states, which have not been measured in the present work. Furthermore, according to shell-model calculations, only $\sim$21% of the total Gamow-Teller strength lies within the $Q$-value window. Mirror asymmetry and $\bm{^{26}\mathrm{P}}$ proton halo ------------------------------------------------------- The high-precision data on the $\beta$ decay of the mirror nucleus $^{26}$Na from Ref. [@PhysRevC.71.044309], together with the results obtained in the present work, made it possible to calculate finite values of the mirror asymmetry for $\beta$-decay transitions from the $A=26$, $T_z=\pm2$ mirror nuclei to low-lying states of their respective daughters. Table \[tab:mirror\] shows the $ft$ values obtained for the $\beta$ decay of $^{26}$P and of its mirror nucleus, and the corresponding asymmetry parameter $\delta=ft^+/ft^--1$, compared with the previous experimental results reported in Ref. [@Thomas2004]. For the low-lying states, the agreement between the previous data and our results is good, but our results are more precise, yielding the first finite values for this system. For the higher-energy states, we report the first values of the mirror asymmetry. We observe large and significant mirror asymmetries, with values ranging from $-98\%$ up to $+100\%$. As mentioned in Sec. \[sec:intro\], mirror asymmetries can be related to isospin mixing and/or differences in the radial wave functions. It has also been shown that halo states produce significant mirror asymmetries. The $51(10)\%$ asymmetry observed for the transition to the first excited state could be further evidence for a proton halo in $^{26}$P [@Navin1998].
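With the common definition $\delta = ft^+/ft^- - 1$, which is consistent with the numbers in Table \[tab:mirror\], the quoted asymmetry for the first excited state is reproduced directly from the two $ft$ values:

```python
def mirror_asymmetry(ft_beta_plus, ft_beta_minus):
    """Mirror-asymmetry parameter delta = ft+/ft- - 1, in percent."""
    return 100.0 * (ft_beta_plus / ft_beta_minus - 1.0)

# ft values of the A = 26 mirror transitions to the 2_1+ states (Table tab:mirror)
delta = mirror_asymmetry(7.9e4, 5.23e4)
```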
Higher-lying states are not as useful because of possible mixing between nearby states. To investigate this effect more quantitatively, we performed two different shell-model calculations, with the USDA and USDB interactions. For the transition to the first excited state, these two interactions predict mirror asymmetries of 3% and 2.5%, respectively, far from the experimental result. If we lower the energy of the $2s_{1/2}$ proton orbital by 1 MeV to account for the low proton separation energy of $^{26}$P, the mirror asymmetries we obtain for the first excited state are 60% and 50% for the USDA and USDB interactions, respectively, in agreement with the experimental result and supporting the hypothesis of a halo state [@Brown1996]. Before firm conclusions can be made, however, more detailed calculations are needed to evaluate the contributions of the other effects that may produce mirror asymmetries. $\bm{^{25}\mathrm{Al}(\mathrm{p},\gamma)^{26}\mathrm{Si}}$ Reaction rate calculation ===================================================================================== As reported in Ref. [@Wrede_2009], the $\beta$ decay of $^{26}$P to $^{26}$Si provides a convenient means for determining parameters of the astrophysically relevant reaction $^{25}$Al$(p,\gamma)^{26}$Si in novae. In these stellar environments, the nuclei are assumed to have a Maxwell-Boltzmann distribution of energies characterized by the temperature $T$, from which the resonant reaction rate can be described by a sum over the different resonances: $$\label{eq:Reaction rate} \langle \sigma v\rangle=\left (\frac{2\pi}{\mu kT}\right )^{3/2}\hbar^2\sum_r(\omega\gamma)_re^{-E_r/kT},$$ where $\hbar$ is the reduced Planck constant, $k$ is the Boltzmann constant, $\mu$ is the reduced mass, and $E_r$ is the energy of the resonance in the center-of-mass frame.
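Equation (\[eq:Reaction rate\]) is usually evaluated in the equivalent per-mole form $N_A\langle\sigma v\rangle = 1.5399\times 10^{11}\,(\mu T_9)^{-3/2}\sum_r(\omega\gamma)_r\,e^{-11.605\,E_r/T_9}$ cm$^3$ s$^{-1}$ mol$^{-1}$, with $\mu$ in amu and energies in MeV. A sketch with purely illustrative resonance parameters (not the $^{26}\mathrm{Si}$ values):

```python
import math

def narrow_resonance_rate(t9, mu_amu, resonances):
    """N_A<sigma*v> in cm^3 s^-1 mol^-1 for narrow resonances:
    1.5399e11 * (mu*T9)^(-3/2) * sum(wg * exp(-11.605*Er/T9)),
    with Er and the strengths wg in MeV and T9 in GK."""
    pref = 1.5399e11 / (mu_amu * t9) ** 1.5
    return pref * sum(wg * math.exp(-11.605 * er / t9) for er, wg in resonances)

mu = 25.0 / 26.0  # reduced mass of p + 25Al in amu (approximate)
toy = [(0.150, 1.0e-11), (0.400, 5.0e-8)]  # hypothetical (Er, omega-gamma) pairs
rate_cold = narrow_resonance_rate(0.1, mu, toy)
rate_hot = narrow_resonance_rate(0.4, mu, toy)
```

The exponential makes the rate extremely sensitive to the resonance energies, which is why precise level energies such as those measured here matter for the astrophysical application.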
$(\omega\gamma)_r$ is the resonance strength, which is defined as $$\label{eq:Res_stength} (\omega\gamma)_r=\frac{(2J_r+1)}{(2J_p+1)(2J_{\text{Al}}+1)}\left(\frac{\Gamma_p\Gamma_\gamma}{\Gamma} \right )_r.$$ $T$ (GK) & Lower limit & Central rate & Upper limit\
0.01 & 1.10E-37 & 1.57E-37 & 2.04E-37\
0.015 & 7.00E-32 & 1.00E-31 & 1.30E-31\
0.02 & 3.19E-28 & 4.56E-28 & 5.93E-28\
0.03 & 1.23E-23 & 1.75E-23 & 2.28E-23\
0.04 & 9.42E-21 & 1.34E-20 & 1.75E-20\
0.05 & 1.40E-18 & 1.93E-18 & 2.88E-18\
0.06 & 1.16E-16 & 2.42E-16 & 6.17E-16\
0.07 & 5.64E-15 & 1.50E-14 & 4.30E-14\
0.08 & 1.27E-13 & 3.59E-13 & 1.06E-12\
0.09 & 1.46E-12 & 4.23E-12 & 1.25E-11\
0.1 & 1.03E-11 & 3.01E-11 & 8.95E-11\
0.11 & 5.06E-11 & 1.48E-10 & 4.40E-10\
0.12 & 1.99E-10 & 5.53E-10 & 1.64E-09\
0.13 & 5.80E-10 & 1.68E-09 & 4.98E-09\
0.14 & 1.55E-09 & 4.36E-09 & 1.28E-08\
0.15 & 4.04E-09 & 1.03E-08 & 2.92E-08\
0.16 & 1.14E-08 & 2.43E-08 & 6.24E-08\
0.17 & 3.46E-08 & 6.23E-08 & 1.34E-07\
0.18 & 1.02E-07 & 1.79E-07 & 3.14E-07\
0.19 & 2.84E-07 & 5.41E-07 & 8.44E-07\
0.2 & 7.80E-07 & 1.60E-06 & 2.42E-06\
0.21 & 2.07E-06 & 4.47E-06 & 6.75E-06\
0.22 & 5.21E-06 & 1.15E-05 & 1.75E-05\
0.23 & 1.23E-05 & 2.76E-05 & 4.21E-05\
0.24 & 2.72E-05 & 6.17E-05 & 9.40E-05\
0.25 & 5.67E-05 & 1.29E-04 & 1.97E-04\
0.26 & 1.12E-04 & 2.55E-04 & 3.89E-04\
0.27 & 2.09E-04 & 4.78E-04 & 7.30E-04\
0.28 & 3.74E-04 & 8.55E-04 & 1.31E-03\
0.29 & 6.42E-04 & 1.47E-03 & 2.24E-03\
0.3 & 1.06E-03 & 2.43E-03 & 3.71E-03\
0.31 & 1.70E-03 & 3.88E-03 & 5.93E-03\
0.32 & 2.63E-03 & 6.01E-03 & 9.19E-03\
0.33 & 3.96E-03 & 9.06E-03 & 1.39E-02\
0.34 & 5.82E-03 & 1.33E-02 & 2.04E-02\
0.35 & 8.36E-03 & 1.91E-02 & 2.92E-02\
0.36 & 1.18E-02 & 2.69E-02 & 4.10E-02\
0.37 & 1.62E-02 & 3.70E-02 & 5.66E-02\
0.38 & 2.19E-02 & 5.01E-02 & 7.66E-02\
0.39 & 2.92E-02 & 6.67E-02 & 1.02E-01\
0.4 & 3.83E-02 & 8.75E-02 & 1.34E-01\
0.42 & 6.32E-02 & 1.44E-01 & 2.21E-01\
0.44 & 9.94E-02 & 2.27E-01 & 3.47E-01\
0.46 & 1.50E-01 & 3.42E-01 & 5.22E-01\
0.48 & 2.17E-01 & 4.96E-01 
& 7.58E-01\ 0.5 & 3.06E-01 & 6.97E-01 & 1.06E+00\ $J_r$, $J_p$, and $J_{\mathrm{Al}}$ are the spins of the resonance, the proton, and $^{25}$Al, respectively; $\Gamma_p$ and $\Gamma_\gamma$ are the proton and $\gamma$-ray partial widths of the resonance, and $\Gamma=\Gamma_p+\Gamma_\gamma$ is the total width. It was previously predicted [@Iliadis_96] that the levels corresponding to significant resonances at nova temperatures in the $^{25}$Al$(p,\gamma)^{26}$Si reaction are the $J^\pi = 1_1^+,4_4^+,0_4^+$, and $3_3^+$ levels. In our previous work [@Bennett2013] we reported the first evidence for the observation of $\gamma$ rays emitted from the $3_3^+$ level. The determination of the strength of the $3_3^+$ resonance in $^{25}$Al$(p,\gamma)^{26}$Si, based on the experimental measurements of the partial proton width ($\Gamma_p$) [@Peplowski2009] and the $\gamma$-ray branching ratio ($\Gamma_\gamma/\Gamma$) [@Bennett2013], was also performed and used to determine the amount of $^{26}$Al ejected in novae. In this work, we have confirmed the evidence for the 1742-keV $\gamma$ ray emitted from the $3_3^+$ level to the $3_2^+$ level in $^{26}$Si, with an intensity of $0.15(5)\%$. The present paper is, to some extent, a follow-up to our previous work; we therefore present here (see Table \[tab:rate\]) for completeness the results of the full reaction rate calculation used to obtain the astrophysical results published in [@Bennett2013]. The table shows the total thermonuclear $^{25}$Al$(p,\gamma)^{26}$Si reaction rate as a function of temperature, including contributions from the relevant resonances, namely $1_1^+,0_4^+$, and $3_3^+$, and the direct capture. For the $1^+$ and $0^+$ resonances and the direct capture, values are adopted from Ref. [@Wrede_2009]. Our table includes the rate limits calculated from a 1 standard deviation variation of the parameters. 
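As an illustration of Eqs. (\[eq:Reaction rate\]) and (\[eq:Res\_stength\]), the sketch below evaluates the narrow-resonance rate in its standard numerical form, $N_A\langle\sigma v\rangle = 1.5399\times10^{11}\,(\mu T_9)^{-3/2}\sum_r(\omega\gamma)_r\exp(-11.605\,E_r/T_9)$ cm$^3$ s$^{-1}$ mol$^{-1}$ with $\omega\gamma$ and $E_r$ in MeV. The resonance parameters used are hypothetical placeholders, not the evaluated inputs behind Table \[tab:rate\]:

```python
import math

def omega_gamma(j_r, gamma_p, gamma_gamma, j_p=0.5, j_al=2.5):
    """Resonance strength: the spin statistical factor
    (2J_r+1)/((2J_p+1)(2J_Al+1)) times Gamma_p*Gamma_gamma/Gamma."""
    spin = (2 * j_r + 1) / ((2 * j_p + 1) * (2 * j_al + 1))
    return spin * gamma_p * gamma_gamma / (gamma_p + gamma_gamma)

def rate(t9, resonances, mu=0.969):
    """N_A<sigma v> in cm^3 s^-1 mol^-1 summed over narrow resonances.
    resonances: list of (E_r [MeV], omega_gamma [MeV]); t9: T in GK;
    mu: reduced mass in amu (~0.969 for p + 25Al)."""
    return 1.5399e11 / (mu * t9) ** 1.5 * sum(
        wg * math.exp(-11.605 * e_r / t9) for e_r, wg in resonances)

# Hypothetical 3+ resonance: E_r = 0.4 MeV, Gamma_p = 3e-9 MeV,
# Gamma_gamma = 1e-8 MeV (placeholder values only).
wg = omega_gamma(3, 3e-9, 1e-8)
print(rate(0.3, [(0.4, wg)]))   # rate at T = 0.3 GK
```

The steep temperature dependence of the tabulated rate comes almost entirely from the $\exp(-11.605\,E_r/T_9)$ factor.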
Conclusions =========== We have measured the absolute $\gamma$-ray intensities and deduced the $\beta$-decay branches for the decay of $^{26}$P to bound states and low-lying resonances of $^{26}$Si. We have observed six new $\beta$-decay branches and 15 $\gamma$-ray lines never observed before in $^{26}$P $\beta$ decay, likely corresponding to most of the allowed Gamow-Teller transitions to states between the ground state and 5.9 MeV. The energies measured for the excited states show good agreement with previous results obtained using various nuclear reactions to populate these states. We have calculated the $\log\! ft$ values of all these new transitions and compared them to USDB shell-model calculations. The reported values show good agreement with the theoretical calculations. In addition, the Gamow-Teller strength function was calculated and compared to theoretical values, showing that the summed Gamow-Teller strength is locally overestimated with the standard $sd$-shell quenching of 0.6. The mirror asymmetry was also investigated by calculating the $\beta$-decay asymmetry parameter $\delta$ for 10 transitions. The significant asymmetries observed, particularly for the transition to the first excited states of $^{26}$Si and its mirror $^{26}$Mg ($\delta=(51\pm10)\%$), might be further evidence for the existence of a proton halo in $^{26}$P. Finally, we have tabulated the total $^{25}$Al$(p,\gamma)^{26}$Si reaction rate at nova temperatures used to estimate the galactic production of $^{26}$Al in novae in Ref. [@Bennett2013]. The authors gratefully acknowledge the contributions of the NSCL staff. This work is supported by the U.S. National Science Foundation under grants PHY-1102511, PHY-0822648, PHY-1350234, PHY-1404442, the U.S. Department of Energy under contract No. DE-FG02-97ER41020, the U.S. National Nuclear Security Administration under contract No. DE-NA0000979, and the Natural Sciences and Engineering Research Council of Canada. 
[58]{}ifxundefined \[1\][ ifx[\#1]{} ]{}ifnum \[1\][ \#1firstoftwo secondoftwo ]{}ifx \[1\][ \#1firstoftwo secondoftwo ]{}““\#1””@noop \[0\][secondoftwo]{}sanitize@url \[0\][‘\ 12‘\$12 ‘&12‘\#12‘12‘\_12‘%12]{}@startlink\[1\]@endlink\[0\]@bib@innerbibempty [****,  ()](\doibase 10.1103/PhysRevC.91.025501) [****, ()](\doibase 10.1103/PhysRevC.7.930) [****, ()](\doibase http://dx.doi.org/10.1016/0375-9474(73)90840-3) [****,  ()](\doibase 10.1103/PhysRevC.47.163) [****, ()](\doibase 10.1103/PhysRevC.28.1343) [****,  ()](\doibase 10.1103/PhysRevC.53.R2602) [****,  ()](http://stacks.iop.org/1742-6596/20/i=1/a=025) [****,  ()](\doibase 10.1140/epja/i2003-10218-8) [****, ()](\doibase http://dx.doi.org/10.1016/0370-2693(70)90150-4) [****,  ()](\doibase 10.1103/PhysRevLett.38.321) [****, ()](\doibase 10.1007/s100500050397) [****,  ()](\doibase http://dx.doi.org/10.1016/S0375-9474(02)01392-1) @noop [****,  ()]{} [****, ()](\doibase http://dx.doi.org/10.1016/0370-2693(93)91564-4) [****, ()](http://stacks.iop.org/0954-3899/24/i=1/a=018) [****,  ()](\doibase 10.1103/PhysRevC.55.R1633) [****, ()](\doibase http://dx.doi.org/10.1016/0370-2693(94)90585-1) [****, ()](\doibase http://dx.doi.org/10.1016/0375-9474(96)00241-2) [****, ()](\doibase http://dx.doi.org/10.1016/j.physletb.2003.07.050) [****, ()](\doibase http://dx.doi.org/10.1016/S0375-9474(01)00650-9) [****, ()](\doibase http://dx.doi.org/10.1016/j.physletb.2003.09.073) [****, ()](\doibase http://dx.doi.org/10.1016/S0375-9474(97)81837-4) @noop ) [****, ()](\doibase http://dx.doi.org/10.1016/0375-9474(95)00115-H) [****, ()](http://stacks.iop.org/0256-307X/27/i=9/a=092101) [****,  ()](\doibase 10.1103/PhysRevC.52.3013) @noop [****,  ()]{} [****, ()](\doibase 10.1103/PhysRevC.79.035803) [****,  ()](\doibase 10.1103/PhysRevC.53.475) [****,  ()](\doibase 10.1088/1674-1137/36/12/003) [****, ()](\doibase http://dx.doi.org/10.1016/0370-2693(83)90950-4) [****,  ()](\doibase 10.1103/PhysRevC.30.1276) [****,  ()](\doibase 
10.1103/PhysRevC.92.031302) [****,  ()](\doibase 10.1103/PhysRevLett.111.232503) [****,  ()](\doibase http://dx.doi.org/10.1016/0370-2693(96)00634-X) [****,  ()](\doibase 10.1103/PhysRevC.53.R572) [****, ()](http://stacks.iop.org/0256-307X/26/i=3/a=032102) [****,  ()](\doibase 10.1103/PhysRevLett.81.5089) [****,  ()](\doibase http://dx.doi.org/10.1016/S0168-583X(02)01895-5) [****,  ()](\doibase http://dx.doi.org/10.1016/j.nima.2009.05.100) [****,  ()](\doibase http://dx.doi.org/10.1016/j.nima.2013.06.027) [****,  ()](\doibase http://dx.doi.org/10.1016/S0168-9002(01)00257-1) [****,  ()](\doibase 10.1051/epjconf/20146602072) @noop [****,  ()](\doibase http://dx.doi.org/10.1016/j.nima.2013.12.044) [****,  ()](\doibase http://dx.doi.org/10.1016/j.nds.2007.10.001) [****,  ()](\doibase http://dx.doi.org/10.1016/S0168-9002(03)01368-8) [****,  ()](\doibase http://dx.doi.org/10.1016/0168-9002(90)90561-J) [****,  ()](\doibase http://dx.doi.org/10.1016/S0375-9474(97)00613-1) [****, ()](\doibase http://dx.doi.org/10.1016/0370-2693(77)90223-4) [****,  ()](\doibase 10.1103/PhysRevC.75.062801) @noop [****,  ()]{} [****,  ()](\doibase 10.1103/PhysRevC.92.035808) [****, ()](\doibase http://dx.doi.org/10.1016/0375-9474(74)90645-9) [****,  ()](\doibase 10.1103/PhysRevC.18.401) [****,  ()](\doibase 10.1103/PhysRevC.71.044309) [****, ()](http://link.aps.org/doi/10.1103/PhysRevC.79.032801) [^1]: $^{26}\mathrm{P}(\beta\mathrm{p})$.
UT-HET 039\ \ [**Shinya Kanemura**]{}$^{(a)}$ [^1], [**Shigeki Matsumoto**]{}$^{(a)}$ [^2],\ [**Takehiro Nabeshima**]{}$^{(a)}$ [^3], and [**Nobuchika Okada**]{}$^{(b)}$ [^4]\ $^{(a)}$[*Department of Physics, University of Toyama, Toyama 930-8555, Japan*]{}\ $^{(b)}$[*Department of Physics and Astronomy, University of Alabama,\ Tuscaloosa, AL 35487, USA*]{}\ Introduction ============ In spite of the tremendous success of the Standard Model (SM) of particle physics, it is widely believed that new physics beyond the SM should appear at a certain high energy scale. The main theoretical argument for this belief is the hierarchy problem in the SM. In other words, the electroweak scale is unstable against quantum corrections and is, in turn, quite sensitive to the ultraviolet energy scale, which is naturally taken to be the scale of new physics beyond the SM. Therefore, in order for the SM to be naturally realized as a low energy effective theory, the scale of new physics should not be far beyond the TeV scale and is most likely at the TeV scale. After the recent success of the first collision of protons at the Large Hadron Collider (LHC) at a center-of-mass energy of 7 TeV, the LHC is now taking data to explore particle physics at the TeV scale. The discovery of new physics at the TeV scale, as well as of the Higgs boson, the last particle of the SM remaining to be directly observed, is the most important mission of the LHC. New physics beyond the SM, once discovered, will trigger a revolution in particle physics. However, it is generally possible that even if new physics beyond the SM indeed exists, the energy scale of new physics might be beyond the LHC reach, so that the LHC could find only the Higgs boson but nothing else. This is the so-called “nightmare scenario”. The electroweak precision measurements at LEP may support this scenario. 
The LEP experiment has established excellent agreement between the SM and its results and has provided very severe constraints on new physics dynamics. Consider non-renormalizable operators invariant under the SM gauge group as effective operators obtained by integrating out some new physics effects, where the scale of new physics is characterized by the cutoff scale of the operators. It has been shown [@LEPparadox] that the lower bound on the cutoff scale given by the results of the LEP experiment is close to 10 TeV rather than 1 TeV. This fact is the so-called “LEP paradox”. If such higher dimensional operators arise from tree-level effects of new physics, the scale of new physics lies around 10 TeV, beyond the reach of the LHC. As the scale of new physics becomes higher, the naturalness of the SM gets violated. However, for the 10 TeV scale, the fine-tuning required to realize the correct electroweak scale is not so severe, at about the few-percent level [@Kolda:2000wi]. Such a little hierarchy may be realized in nature. On the other hand, various recent cosmological observations, in particular by the Wilkinson Microwave Anisotropy Probe (WMAP) satellite [@Komatsu:2008hk], have established the $\Lambda$CDM cosmological model with great accuracy. The relic abundance of the cold dark matter is measured at the 2$\sigma$ level as $$\begin{aligned} \Omega_{\rm CDM} h^2 = 0.1131 \pm 0.0034. \end{aligned}$$ Clarifying the nature of the dark matter is still a prime open problem in particle physics and cosmology. Since the SM has no suitable candidate for the cold dark matter, the observation of the dark matter indicates new physics beyond the SM. Many candidates for dark matter have been proposed in various new physics models. 
Among several possibilities, the Weakly Interacting Massive Particle (WIMP) is one of the most promising candidates for dark matter. In this case, the dark matter in the present universe is a thermal relic, and its relic abundance is insensitive to the history of the early universe before the freeze-out time of the dark matter particle, such as the mechanism of reheating after inflation. This scenario allows us to evaluate the dark matter relic density by solving the Boltzmann equation, and we arrive at a very interesting conclusion: in order to obtain the right relic abundance, the WIMP dark matter mass must lie below the TeV scale. Therefore, even if the nightmare scenario is realized, it is plausible that the mass scale of the WIMP dark matter is accessible to the LHC. In this paper, we extend the SM by introducing WIMP dark matter in the context of the nightmare scenario, and investigate the possibility that the WIMP dark matter can overcome the nightmare scenario through phenomenology such as the dark matter relic abundance, the direct detection experiments for the dark matter particle, and LHC physics. Among many possibilities, we consider the “worst case”, in which the WIMP dark matter is a singlet under the SM gauge group, since otherwise the WIMP dark matter could easily be observed through its coupling with the weak gauge bosons. In this setup, the WIMP dark matter communicates with the SM particles through its coupling with the Higgs boson, so that the Higgs boson plays a crucial role in the phenomenology of dark matter. The paper is organized as follows. In the next section, we introduce the WIMP dark matter, which is a singlet under the SM gauge group. We consider three different cases for the dark matter particle: scalar, fermion, and vector dark matter. 
In section 3, we investigate cosmological aspects of the WIMP dark matter and identify the parameter region which is consistent with the WMAP observation and the direct detection measurements for the WIMP dark matter. The collider signal of the dark matter particle is explored in section 4. The dark matter particles are produced at the LHC in association with Higgs boson production. The last section is devoted to summary and discussions. The Model {#Sec2} ========= Since all new particles except for the WIMP dark matter are supposed to be at the scale of 10 TeV in the nightmare scenario, the effective Lagrangian at the scale of 1 TeV involves only the field of the WIMP dark matter and those of the SM particles. We consider the worst case for the WIMP dark matter, namely, the dark matter is assumed to be a singlet under the gauge symmetries of the SM. Otherwise, the WIMP dark matter would be accompanied by a charged partner with a mass below 1 TeV, which would be easily detected at collider experiments in the near future, and such a scenario is not a nightmare. We postulate a global $Z_2$ symmetry (parity) in order to guarantee the stability of the dark matter, under which the WIMP dark matter is odd while the SM particles are even. We consider three cases for the spin of the dark matter: the scalar dark matter $\phi$, the fermion dark matter $\chi$, and the vector dark matter $V_\mu$. In all cases, the dark matter is assumed to be self-conjugate for simplicity, so that the three cases are described by real Klein-Gordon, Majorana, and real Proca fields, respectively. 
The Lagrangian which is invariant under the symmetries of the SM is written as $$\begin{aligned} {\cal L}_S &=& {\cal L}_{\rm SM} + \frac{1}{2} \left(\partial \phi\right)^2 - \frac{M_S^2 }{2} \phi^2 - \frac{c_S}{2}|H|^2 \phi^2 - \frac{d_S}{4!} \phi^4, \label{Lagrangian S} \\ {\cal L}_F &=& {\cal L}_{\rm SM} + \frac{1}{2} \bar{\chi} \left(i\Slash{\partial} - M_F\right) \chi - \frac{c_F}{2\Lambda} |H|^2 \bar\chi \chi - \frac{d_F}{2\Lambda} \bar{\chi}\sigma^{\mu\nu}\chi B_{\mu\nu}, \label{Lagrangian F} \\ {\cal L}_V &=& {\cal L}_{\rm SM} - \frac{1}{4} V^{\mu\nu} V_{\mu \nu} + \frac{M_V^2 }{2} V_\mu V^\mu + \frac{c_V}{2} |H|^2 V_\mu V^\mu - \frac{d_V}{4!} (V_\mu V^\mu)^2, \label{Lagrangian V}\end{aligned}$$ where $V_{\mu\nu} = \partial_\mu V_\nu - \partial_\nu V_\mu$, $B_{\mu\nu}$ is the field strength tensor of the hypercharge gauge boson, and ${\cal L}_{\rm SM}$ is the Lagrangian of the SM with $H$ being the Higgs doublet. The last terms on the RHS of Eqs.(\[Lagrangian S\]) and (\[Lagrangian V\]), proportional to the coefficients $d_S$ and $d_V$, represent self-interactions of the WIMP dark matter, which are not relevant for the following discussion. On the other hand, the last term on the RHS of Eq.(\[Lagrangian F\]), proportional to the coefficient $d_F$, is an interaction between the WIMP dark matter and the hypercharge gauge boson; however, this term is most likely generated by one-loop diagrams of the new physics dynamics at the scale of 10 TeV, since the dark matter particle carries no hypercharge. This term can therefore be ignored in comparison with the term proportional to $c_F$, which can be generated by tree-level diagrams. As can be seen from the Lagrangian, the WIMP dark matter in our scenario interacts with SM particles only through the Higgs boson. Such a scenario is sometimes called the “[*Higgs portal*]{}” scenario. 
After the electroweak symmetry breaking, the masses of the dark matter candidates are given by $$\begin{aligned} m_S^2 &=& M_S^2 + c_Sv^2/2, \\ m_F &=& M_F + c_Fv^2/(2\Lambda), \\ m_V^2 &=& M_V^2 + c_Vv^2/2,\end{aligned}$$ where the vacuum expectation value of the Higgs field is set to be $\langle H \rangle = (0,v)^T/\sqrt{2}$ with $v \simeq 246$ GeV. Although the model parameter $M_{\rm DM}$ (DM $= S$, $F$, and $V$) may be related to the parameter $c_{\rm DM}$ and may depend on the details of the new physics at the scale of 10 TeV, we treat $m_{\rm DM}$ and $c_{\rm DM}$ as free parameters in the following discussion. There are some examples of new physics models with dark matter which realize the Higgs portal scenario at low energies. The scenario with scalar Higgs portal dark matter appears in the models discussed in Refs. [@higgsportal-scalar1; @higgsportal-scalar3; @higgsportal-scalar4]. R-parity invariant supersymmetric standard models with a Bino-like lightest superparticle can correspond to the fermion Higgs portal dark matter scenario when the other superpartners are heavy enough [@higgsportal-fermion1]. The vector dark matter can be realized in, for example, the littlest Higgs model with T-parity if the breaking scale is very high [@higgsportal-vector1]. Cosmological Aspects ==================== We first consider cosmological aspects of the scenario, paying particular attention to the WMAP experiment [@Komatsu:2008hk] and direct detection measurements for the dark matter particle using the data from CDMS II [@CDMSII] and the first data from the XENON100 [@Aprile:2010um] experiment. We also discuss whether the signal of the WIMP dark matter can be observed in the near future at the XMASS [@Abe:2008zzc], SuperCDMS [@Brink:2005ej], and XENON100 [@Aprile:2009yh] experiments. 
Relic abundance of dark matter ------------------------------ ![Feynman diagrams for dark matter annihilation.[]{data-label="fig:diagrams"}](Diagrams.eps) The WIMP dark matter in our scenario annihilates into SM particles only through the exchange of the Higgs boson. The annihilation processes are shown in Fig. \[fig:diagrams\], where $h$ is the physical mode of $H$, $W(Z)$ is the charged (neutral) weak gauge boson, and $f$ represents the quarks and leptons of the SM. The relic abundance of the WIMP dark matter, which is nothing but the averaged mass density of the dark matter in the present universe, is obtained by integrating the following Boltzmann equation [@Gondolo:1990dk], $$\begin{aligned} \frac{dY}{dx} = - \frac{m_{\rm DM}}{x^2}\sqrt{\frac{\pi}{45 g_*^{1/2} G_N}} \left(g_{*s} + \frac{m_{\rm DM}}{3x} \frac{dg_{*s}}{dT}\right) \langle\sigma v\rangle \left[ Y^2 - \left\{\frac{45x^2g_{\rm DM}}{4\pi^4g_{*s}} K_2(x)\right\}^2 \right], \label{Boltzmann}\end{aligned}$$ where $x \equiv m_{\rm DM}/T$ and $Y \equiv n/s$, with $m_{\rm DM}$, $T$, $n$, and $s$ being the mass of the dark matter, the temperature of the universe, the number density of the dark matter, and the entropy density of the universe, respectively. The gravitational constant is denoted by $G_N = 6.7 \times 10^{-39}$ GeV$^{-2}$. The effective number of massless degrees of freedom in the energy (entropy) density of the universe is given by $g_*$ ($g_{*s}$), while $g_{\rm DM}$ is the number of spin degrees of freedom of the dark matter. The function $K_{2}(x)$ is the second modified Bessel function, and $\langle\sigma v\rangle$ is the thermal average of the total annihilation cross section (times relative velocity) of the dark matter. 
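The full calculation integrates the Boltzmann equation numerically; a standard semi-analytic freeze-out approximation (textbook Kolb-Turner style, not the treatment used here, with all numerical inputs being the usual rule-of-thumb constants) reproduces the qualitative behaviour:

```python
import math

M_PL = 1.22e19           # Planck mass [GeV]
GEV2_TO_CM3S = 1.17e-17  # conversion: <sigma v> of 1 GeV^-2 in cm^3/s

def relic_abundance(m_dm, sigma_v_cm3s, g_dm=2.0, g_star=86.25):
    """Omega h^2 from the standard freeze-out approximation:
    solve x_f = ln(c) - ln(x_f)/2 by fixed-point iteration, then
    Omega h^2 ~ 1.07e9 x_f / (sqrt(g*) M_Pl <sigma v>)."""
    sv = sigma_v_cm3s / GEV2_TO_CM3S          # <sigma v> in GeV^-2
    c = 0.038 * g_dm * M_PL * m_dm * sv / math.sqrt(g_star)
    x_f = 20.0
    for _ in range(50):                       # converges quickly
        x_f = math.log(c) - 0.5 * math.log(x_f)
    return 1.07e9 * x_f / (math.sqrt(g_star) * M_PL * sv)

# A 100 GeV WIMP with the canonical <sigma v> ~ 3e-26 cm^3/s gives
# Omega h^2 of order 0.1, in the ballpark of the WMAP value.
print(relic_abundance(100.0, 3e-26))
```

The inverse scaling of $\Omega_{\rm DM}h^2$ with $\langle\sigma v\rangle$ is why the WMAP constraint forbids too small a coupling $c_{\rm DM}$ away from the resonance region.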
With the asymptotic value of the yield $Y(\infty)$, the cosmological density parameter of the dark matter $\Omega_{\rm DM}h^2$ is written as $$\begin{aligned} \Omega_{\rm DM} h^2 = \frac{m_{\rm DM} s_0 Y(\infty)}{\rho_c/h^2},\end{aligned}$$ where $s_0 = 2890$ cm$^{-3}$ is the entropy density of the present universe, while $\rho_c/h^2 = 1.05 \times 10^{-5}$ GeV cm$^{-3}$ is the critical density. We have numerically integrated the Boltzmann equation (\[Boltzmann\]), including the effect of the temperature-dependent $g_*(T)$ and $g_{*S}(T)$, to obtain the relic abundance accurately. The result is shown in Fig.\[fig:results\] as magenta regions, which are consistent with the WMAP experiment at the 2$\sigma$ level in the $(m_{\rm DM}, c_{\rm DM})$-plane. In the left three panels, the Higgs mass is fixed to be $m_h = $ 120 GeV, while $m_h =$ 150 GeV in the right ones. It can be seen that the coupling constant $c_{\rm DM}$ should not be too small in order to satisfy the constraint from the WMAP experiment, except in the region $m_{\rm DM} \simeq m_h/2$ where the resonant annihilation due to the $s$-channel Higgs boson is efficient. ![Constraints on the nightmare scenario from WMAP, Xenon100 first data, and CDMS II experiments. The Higgs mass is fixed to be 120 GeV in the left three figures, while 150 GeV in the right three figures. Expected sensitivities to detect the signal of the dark matter at the XMASS, SuperCDMS, Xenon100, and LHC experiments are also shown in these figures. See the text for the detail of the region painted in dark cyan (light gray).[]{data-label="fig:results"}](S120fD.eps "fig:") ![](S150fD.eps "fig:")\ ![](F120fD.eps "fig:") ![](F150fD.eps "fig:")\ ![](V120fD.eps "fig:") ![](V150fD.eps "fig:") Direct detection of dark matter ------------------------------- After integrating out the Higgs boson, Eqs.(\[Lagrangian S\])-(\[Lagrangian V\]) lead to effective interactions of the WIMP dark matter with the gluon and light quarks, $$\begin{aligned} {\cal L}_S^{(\rm eff)} &=& \frac{c_S}{2m_h^2} \phi^2 (\sum_q m_q \bar{q}q - \frac{\alpha_s}{4\pi}G_{\mu\nu}G^{\mu\nu}), \\ {\cal L}_F^{(\rm eff)} &=& \frac{c_F}{2\Lambda m_h^2} \bar{\chi}\chi (\sum_q m_q \bar{q}q - \frac{\alpha_s}{4\pi}G_{\mu\nu}G^{\mu\nu}), \\ {\cal L}_V^{(\rm eff)} &=& - \frac{c_V}{2m_h^2} V_\mu V^\mu (\sum_q m_q \bar{q}q - \frac{\alpha_s}{4\pi}G_{\mu\nu}G^{\mu\nu}),\end{aligned}$$ where $q$ represents the light quarks (u, d, and s) with $m_q$ being their current masses. The strong coupling constant is denoted by $\alpha_s$, and the field strength tensor of the gluon field is given by $G_{\mu\nu}$. Using these interactions, the scattering cross section between the dark matter and a nucleon in the limit of small momentum transfer is calculated as $$\begin{aligned} \sigma_S(\phi N \rightarrow \phi N) &=& \frac{c_S^2}{4 m_h^4} \frac{m_N^2}{\pi (m_S + m_N)^2}f_N^2, \\ \sigma_F(\chi N \rightarrow \chi N) &=& \frac{c_F^2}{4 \Lambda^2 m_h^4} \frac{4 m_N^2 m_F^2}{\pi (m_F + m_N)^2}f_N^2, \\ \sigma_V(V N \rightarrow V N) &=& \frac{c_V^2}{4 m_h^4} \frac{m_N^2}{\pi (m_V + m_N)^2}f_N^2, \end{aligned}$$ where $N$ represents a nucleon (proton or neutron) with mass $m_N \simeq 1$ GeV. The parameter $f_N$ depends on hadronic matrix elements, $$\begin{aligned} f_N = \sum_q m_q \langle N |\bar{q}q| N \rangle - \frac{\alpha_s}{4\pi} \langle N |G_{\mu\nu}G^{\mu\nu}| N \rangle = \sum_q m_N f_{Tq} + \frac{2}{9} m_N f_{TG}.\end{aligned}$$ The value of $f_{Tq}$ has recently been evaluated accurately by lattice QCD simulations using the overlap fermion formulation. 
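As a rough numerical illustration of the scalar-case cross-section formula, the sketch below converts it to cm$^2$; the coupling, mass point, and the representative value $f_N \approx 0.3\,m_N$ are hypothetical placeholders, not fitted inputs of this analysis:

```python
import math

GEV2_TO_CM2 = 0.389e-27   # conversion: 1 GeV^-2 expressed in cm^2

def sigma_scalar(c_s, m_s, m_h=120.0, m_n=0.939, f_n=0.29):
    """Spin-independent DM-nucleon cross section [cm^2], scalar case:
    sigma_S = c_S^2/(4 m_h^4) * m_N^2 / (pi (m_S + m_N)^2) * f_N^2.
    All masses in GeV; f_n [GeV] encodes the hadronic matrix elements
    (f_N ~ 0.3 m_N assumed here as a representative value)."""
    sig = (c_s ** 2 / (4.0 * m_h ** 4)) \
        * m_n ** 2 / (math.pi * (m_s + m_n) ** 2) * f_n ** 2
    return sig * GEV2_TO_CM2

# Hypothetical point c_S = 0.3, m_S = 70 GeV: O(1e-43) cm^2, i.e. in
# the ballpark probed by CDMS II and XENON100.
print(sigma_scalar(0.3, 70.0))
```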
The results of the simulation show that $f_{Tu} + f_{Td} \simeq 0.056$ and $|f_{Ts}| \leq 0.08$[^5] [@fTq]. On the other hand, the parameter $f_{TG}$ is obtained from $f_{Tq}$ through the trace anomaly, $1 = f_{Tu} + f_{Td} + f_{Ts} + f_{TG}$ [@Trace; @anomaly]. The result from CDMS II and the new data from the XENON100 experiment give the most severe constraints on the scattering cross section between the dark matter particle and a nucleon. The resulting constraint is shown in Fig.\[fig:results\], where the regions in brown are excluded by the experiments at 90% confidence level. It can be seen that most of the parameter space for a light dark matter particle has already been ruled out. In Fig.\[fig:results\], we also depict the experimental sensitivities to detect the signal of the dark matter in the near-future experiments XMASS, SuperCDMS, and Xenon100. The sensitivities are shown as light brown lines, where the signal can be discovered in the regions above these lines at 90% confidence level. Most of the parameter region will be covered by the future direct detection experiments. Note that the WIMP dark matter in the nightmare scenario predicts a large scattering rate in the region $m_h \lesssim 80$ GeV. It is interesting to show the region corresponding to the “positive signal” of a dark matter particle reported very recently by the CDMS II experiment [@CDMSII], which is depicted in dark cyan; this closed region appears only at the 1$\sigma$ confidence level [@CDMSanalysis]. The parameter region consistent with the WMAP results has some overlap with the signal region. When a lighter Higgs boson mass is taken, the two regions overlap better. Signals at the LHC ================== Finally, we investigate the signal of the WIMP dark matter at the LHC experiment [@LHC]. The main purpose here is to clarify the parameter region where the signal can be detected. We first consider the case in which the mass of the dark matter is less than half of the Higgs boson mass. 
In this case, the dark matter particles can be produced through the decay of the Higgs boson. Then, we consider the other case, in which the mass of the dark matter particle is heavier than half of the Higgs boson mass. The case $m_{\rm DM} < m_h/2$ ----------------------------- In this case, the coupling of the dark matter particle with the Higgs boson can cause a significant change in the branching ratio of the Higgs boson, while the production process of the Higgs boson at the LHC remains the same. The partial decay widths of the Higgs boson into dark matter particles are given by $$\begin{aligned} \Gamma_S &=& \frac{c_S^2 v^2}{32 \pi m_h} \sqrt{1 - \frac{4 m_S^2}{m_h^2}}, \\ \Gamma_F &=& \frac{c_F^2 v^2 m_h}{16 \pi \Lambda^2} \left(1 - \frac{4 m_F^2}{m_h^2}\right)^{3/2}, \\ \Gamma_V &=& \frac{c_V^2 v^2 m_h^3}{128 \pi m_V^4} \left(1 - 4\frac{m_V^2}{m_h^2} + 12\frac{m_V^4}{m_h^4}\right) \sqrt{1 - \frac{4 m_V^2}{m_h^2}}.\end{aligned}$$ When the mass of the Higgs boson is not heavy ($m_h \lesssim 150$ GeV), its partial decay width into quarks and leptons is suppressed due to the small Yukawa couplings. As a result, the branching ratio into dark matter particles can be almost 100% unless the interaction between the dark matter and the Higgs boson is too weak. In this case, most of the Higgs bosons produced at the LHC decay invisibly. There are several studies on the invisible decay of the Higgs boson at the LHC. The most significant process for investigating such a Higgs boson is found to be its production through weak gauge boson fusion. In this process, the forward and backward jets with a large pseudo-rapidity gap are accompanied by the missing transverse energy corresponding to the production of the invisibly decaying Higgs boson. According to the analysis in Ref. 
[@InvH], the 30 fb$^{-1}$ data allow us to identify the production of the invisibly decaying Higgs boson at the 95% confidence level when its invisible branching ratio is larger than 0.250 for $m_h = 120$ GeV and 0.238 for $m_h = 150$ GeV. In this analysis [@InvH], both statistical and systematic errors are included. Using this analysis, we plot the experimental sensitivity to detect the signal in Fig.\[fig:results\]. The sensitivity is shown as green lines with $m_{\rm DM} \leq m_h/2$, where the signal can be observed in the regions above these lines. Most of the parameter region with $m_{\rm DM} \leq m_h/2$ can be covered by investigating the signal of the invisible decay at the LHC. It is also interesting to notice that the signal of the WIMP dark matter can be obtained in both the direct detection measurements and the LHC experiment, which allows us to perform a non-trivial check of the scenario. The case $m_{\rm DM} \geq m_h/2$ -------------------------------- ![Cross section of the dark matter signal at the LHC with and without the kinematical cuts in Eq.(\[kinematical cuts\]). The parameters $m_h$ and $c_{\rm DM}$ are fixed as shown in these figures.[]{data-label="fig:LHC XS"}](XS_S.eps "fig:") ![](XS_F.eps "fig:") ![](XS_V.eps "fig:") In this case, the WIMP dark matter cannot be produced from the decay of the Higgs boson. We consider, however, the process of weak gauge boson fusion again. 
With $V$ and $h^*$ denoting a weak gauge boson and a virtual Higgs boson, the signal comes from the process $qq \rightarrow qqVV \rightarrow qqh^* \rightarrow qq$DMDM, which is characterized by two energetic quark jets with large missing energy and a large pseudo-rapidity gap between them. There are several backgrounds against the signal. One is the production of a weak boson associated with two jets through QCD or electroweak interactions, which mimics the signal when the weak boson decays into neutrinos. Another background is the production of three jets through QCD interactions, which mimics the signal when one of the jets escapes detection. Following Ref. [@Eboli:2000ze], we apply kinematical cuts on the two tagging jets in order to reduce these backgrounds, $$\begin{aligned} && p^j_T > 40~{\rm GeV}, \qquad \Slash{p}_T > 100~{\rm GeV}, \nonumber \\ && |\eta_j| < 5.0, \qquad |\eta_{j_1} - \eta_{j_2}| > 4.4, \qquad \eta_{j_1} \cdot \eta_{j_2} < 0, \nonumber \\ && M_{j_1j_2} > 1200~{\rm GeV}, \qquad \phi_{j_1j_2} < 1, \label{kinematical cuts}\end{aligned}$$ where $p^j_T$, $\Slash{p}_T$, and $\eta_j$ are the transverse momentum of jet $j$, the missing transverse momentum, and the pseudo-rapidity of jet $j$, respectively. The invariant mass of the two tagging jets is denoted by $M_{j_1j_2}$, while $\phi_{j_1j_2}$ is the azimuthal angle between them. We also impose a veto on central jet activity with $p_T > 20$ GeV in the same manner as this reference. From the analysis of these backgrounds, it turns out that, at the LHC with an energy of $\sqrt{s}=14$ TeV and an integrated luminosity of 100 fb$^{-1}$, the signal will be detected at the 95% confidence level when its cross section exceeds 4.8 fb after applying these kinematical cuts. Cross sections of the signal before and after applying the kinematical cuts are depicted in Fig.\[fig:LHC XS\] as a function of the dark matter mass with $m_h$ fixed to 120 and 150 GeV.
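For illustration, the tagging-jet selection above can be expressed as a simple event filter. This is only a sketch: the threshold values come directly from Eq.(\[kinematical cuts\]), while the flat dictionary layout of an event record is a hypothetical simplification (a real analysis would build these quantities from jet four-momenta).

```python
def passes_vbf_cuts(ev):
    """Tagging-jet cuts of Eq. (kinematical cuts).

    `ev` is a simplified event record (hypothetical layout): transverse
    momenta pt1, pt2 and pseudo-rapidities eta1, eta2 of the two tagging
    jets, missing transverse momentum met, dijet invariant mass m_jj
    (all momenta/masses in GeV) and azimuthal separation dphi_jj (rad).
    """
    return (
        min(ev["pt1"], ev["pt2"]) > 40.0            # p_T^j > 40 GeV
        and ev["met"] > 100.0                       # missing p_T > 100 GeV
        and max(abs(ev["eta1"]), abs(ev["eta2"])) < 5.0
        and abs(ev["eta1"] - ev["eta2"]) > 4.4      # large rapidity gap
        and ev["eta1"] * ev["eta2"] < 0.0           # opposite hemispheres
        and ev["m_jj"] > 1200.0                     # dijet invariant mass
        and ev["dphi_jj"] < 1.0                     # azimuthal separation
    )
```

The central-jet veto (no additional jets with $p_T > 20$ GeV) would be applied on top of this selection.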
We also fix the coupling constant between the dark matter and the Higgs boson as shown in these figures. It turns out that the cross section after applying the kinematical cuts exceeds 4.8 fb if the mass of the dark matter particle is small enough. With this analysis, we have estimated the experimental sensitivity to detect the signal at the LHC. The result is shown in Fig.\[fig:results\] as green lines for $m_{\rm DM} \geq m_h/2$, where, with an integrated luminosity of 100 fb$^{-1}$, the signal can be observed at the 95% confidence level in the regions above these lines. The sensitivity does not reach the region consistent with the WMAP observation, but it comes close for fermionic and vector dark matter with $m_h = 120$ GeV. With a more sophisticated analysis or more accumulated data, the signal may become detectable.

Summary and Discussions
=======================

The physics operation of the LHC has begun, and the exploration of particle physics at the TeV scale will continue over the next decades. The discovery of not only the Higgs boson but also new physics beyond the SM is highly expected at the LHC. However, the little hierarchy might exist in nature and, if this is the case, the new physics scale can be around 10 TeV, so that the LHC could find only the SM-like Higgs boson but nothing else. This is the nightmare scenario. On the other hand, cosmological observations strongly suggest the necessity of an extension of the SM so as to incorporate the dark matter particle. According to the WIMP dark matter hypothesis, the mass scale of the dark matter particle lies below the TeV scale and hence within the reach of the LHC. We have investigated the possibility that the WIMP dark matter can be a clue to overcome the nightmare scenario. As the worst case scenario, we have considered a WIMP dark matter particle that is singlet under the SM gauge symmetry and communicates with the SM particles only through the Higgs boson.
Analyzing the relic density of the dark matter particle and its elastic scattering cross section with nucleons, we have identified the parameter region which is consistent with the WMAP observation and the current direct detection measurements of the dark matter particle. The direct detection measurements provide severe constraints on the parameter space, and in the near future almost all of the parameter region can be explored, except for a region with a dark matter mass close to half of the Higgs boson mass. We have also considered the dark matter signal at the LHC. The dark matter particle can be produced at the LHC only through its interaction with the Higgs boson. If the Higgs boson is light, $m_h \lesssim 150$ GeV, and the dark matter particle is also light, $m_{\rm DM}^{} < m_h/2$, the Higgs boson decays into a pair of dark matter particles with a large branching ratio. Such an invisibly decaying Higgs boson can be explored at the LHC via the Higgs boson production process through weak gauge boson fusions. When the invisible branching ratio is sizable, $B(h \to {\rm DM}{\rm DM}) \gtrsim 0.25$, the signal of the invisibly decaying Higgs boson can be observed. Interestingly, the corresponding parameter region is also covered by the future experiments for the direct detection of the dark matter particle. In the case of $m_{\rm DM} \geq m_h/2$, we have also analyzed the dark matter particle production mediated by the virtual Higgs boson in the weak boson fusion channel. Although the detection of the dark matter particle production turns out to be challenging in our present analysis, a more sophisticated analysis may enhance the signal-to-background ratio. Even if the nightmare scenario is realized in nature, the WIMP dark matter may exist and communicate with the SM particles only through the Higgs boson. Therefore, the existence of new physics may be revealed in association with the discovery of the Higgs boson.
Finding the Higgs boson but nothing else would then be not a nightmare but a portal to a new discovery: the WIMP dark matter.

[**Acknowledgments**]{}

This work is supported, in part, by the Grant-in-Aid for Scientific Research, Ministry of Education, Culture, Sports, Science and Technology, Japan (Nos. 19540277 and 22244031 for SK, and Nos. 21740174 and 22244021 for SM).

[99]{} R. Barbieri and A. Strumia, arXiv:hep-ph/0007265. C. F. Kolda and H. Murayama, JHEP [**0007**]{}, 035 (2000). E. Komatsu [*et al.*]{} \[WMAP Collaboration\], Astrophys. J. Suppl. [**180**]{} (2009) 330. J. McDonald, Phys. Rev. D [**50**]{}, 3637 (1994); C. P. Burgess, M. Pospelov and T. ter Veldhuis, Nucl. Phys. B [**619**]{}, 709 (2001). M. C. Bento, O. Bertolami, R. Rosenfeld and L. Teodoro, Phys. Rev. D [**62**]{}, 041302 (2000); R. Barbieri, L. J. Hall and V. S. Rychkov, Phys. Rev. D [**74**]{}, 015007 (2006); V. Barger, P. Langacker, M. McCaskey, M. J. Ramsey-Musolf and G. Shaughnessy, Phys. Rev. D [**77**]{}, 035005 (2008); M. Aoki, S. Kanemura and O. Seto, Phys. Rev. Lett. [**102**]{}, 051805 (2009); Phys. Rev. D [**80**]{}, 033007 (2009). X. G. He, T. Li, X. Q. Li, J. Tandean and H. C. Tsai, Phys. Lett. B [**688**]{}, 332 (2010); M. Farina, D. Pappadopulo and A. Strumia, Phys. Lett. B [**688**]{}, 329 (2010); M. Kadastik, K. Kannike, A. Racioppi and M. Raidal, Phys. Lett. B [**685**]{}, 182 (2010) arXiv:0912.3797 \[hep-ph\]; K. Cheung and T. C. Yuan, Phys. Lett. B [**685**]{}, 182 (2010); M. Aoki, S. Kanemura and O. Seto, Phys. Lett. B [**685**]{}, 313 (2010); M. Asano and R. Kitano, Phys. Rev. D [**81**]{}, 054506 (2010) \[arXiv:1001.0486 \[hep-ph\]\]; A. Bandyopadhyay, S. Chakraborty, A. Ghosal and D. Majumdar, arXiv:1003.0809 \[hep-ph\]; S. Andreas, C. Arina, T. Hambye, F. S. Ling and M. H. G. Tytgat, arXiv:1003.2595 \[hep-ph\]. For a review, see the following and the references therein: G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept.
[**267**]{}, 195 (1996); G. Bertone, D. Hooper and J. Silk, Phys. Rept. [**405**]{}, 279 (2005). For a review, see: M. Perelstein, Prog. Part. Nucl. Phys. [**58**]{}, 247 (2007). Z. Ahmed [*et al.*]{} \[CDMS Collaboration\], \[arXiv:0912.3592 \[astro-ph\]\]; Z. Ahmed [*et al.*]{} \[CDMS Collaboration\], Phys. Rev. Lett. [**102**]{} (2009) 011301. E. Aprile [*et al.*]{} \[XENON100 Collaboration\], arXiv:1005.0380 \[astro-ph.CO\]. K. Abe \[XMASS Collaboration\], J. Phys. Conf. Ser. [**120**]{} (2008) 042022. P. L. Brink [*et al.*]{} \[CDMS-II Collaboration\], [*In the Proceedings of 22nd Texas Symposium on Relativistic Astrophysics at Stanford University, Stanford, California, 13-17 Dec 2004, pp 2529*]{} \[arXiv:astro-ph/0503583\]. E. Aprile and L. Baudis \[XENON Collaboration\], arXiv:0902.4253 \[astro-ph.IM\]. P. Gondolo and G. Gelmini, Nucl. Phys. B [**360**]{} (1991) 145. H. Ohki [*et al.*]{}, Phys. Rev. D [**78**]{} (2008) 054502; arXiv:0910.3271 \[hep-lat\]. R. Crewther, Phys. Rev. Lett. [**28**]{} (1972) 1421; M. Chanowitz and J. Ellis, Phys. Lett. [**40B**]{} (1972) 397; Phys. Rev. D [**7**]{} (1973) 2490; J. Collins, L. Duncan and S. Joglekar, Phys. Rev. D [**16**]{} (1977) 438; M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Phys. Lett. B [**78**]{} (1978) 443. J. Kopp, T. Schwetz and J. Zupan, JCAP [**1002**]{}, 014 (2010). G. Aad [*et al.*]{} \[The ATLAS Collaboration\], arXiv:0901.0512 \[hep-ex\]; G. L. Bayatian [*et al.*]{} \[CMS Collaboration\], J. Phys. G [**34**]{} (2007) 995. M. Warsinsky \[ATLAS Collaboration\], J. Phys. Conf. Ser. [**110**]{} (2008) 072046; B. Di Girolamo and L. Neukermans, ATLAS Note ATL-PHYS-2003-006 (2003). O. J. P. Eboli and D. Zeppenfeld, Phys. Lett. B [**495**]{} (2000) 147.

[^1]: kanemu@sci.u-toyama.ac.jp

[^2]: smatsu@sci.u-toyama.ac.jp

[^3]: nabe@jodo.sci.u-toyama.ac.jp

[^4]: okadan@ua.edu

[^5]: For a conservative analysis, we use $f_{Ts} = 0$ in our numerical calculations.
--- abstract: 'We present a fast and versatile method to calculate the characteristic spectrum $h_c$ of the gravitational wave background (GWB) emitted by a population of eccentric massive black hole binaries (MBHBs). We fit the spectrum of a reference MBHB with a simple analytic function and show that the spectrum of any other MBHB can be derived from this reference spectrum via simple scalings of mass, redshift and frequency. We then apply our calculation to a realistic population of MBHBs evolving via 3-body scattering of stars in galactic nuclei. We demonstrate that our analytic prescription satisfactorily describes the signal in the frequency band relevant to pulsar timing array (PTA) observations. Finally we model the high frequency steepening of the GWB to provide a complete description of the features characterizing the spectrum. For typical stellar distributions observed in massive galaxies, our calculation shows that 3-body scattering alone is unlikely to affect the GWB in the PTA band and a low frequency turnover in the spectrum is caused primarily by high eccentricities.' author: - | Siyuan Chen,$^1$[^1] Alberto Sesana$^1$[^2] and Walter Del Pozzo$^{1,2}$\ $^1$School of Physics & Astronomy, University of Birmingham, Birmingham, B15 2TT, UK\ $^2$Dipartimento di Fisica “Enrico Fermi”, Università di Pisa, Pisa I-56127, Italy bibliography: - 'bibliography.bib' date: 'Accepted XXX. Received YYY; in original form ZZZ' title: Efficient computation of the gravitational wave spectrum emitted by eccentric massive black hole binaries in stellar environments --- \[firstpage\] black hole physics – gravitational waves – galaxies: kinematics and dynamics – methods: analytical Introduction {#sec:Introduction} ============ It is now well established that most (potentially all) massive galaxies harbour massive black holes (MBHs) in their centre [see @KormendyHo:2013 and references therein]. 
In the standard hierarchical cosmological model [@WhiteRees:1978], present-day galaxies grow in mass and size by accreting cold gas from the cosmic web [@2009Natur.457..451D] and by merging with other galaxies [@1993MNRAS.264..201K]. In a favoured scenario in which MBHs are ubiquitous up to high redshift, following the merger of two galaxies, the central MBHs hosted in their nuclei sink to the centre of the merger remnant, eventually forming a bound binary system [@BegelmanBlandfordRees:1980]. The binary orbit shrinks because of energy and angular momentum exchange with the surrounding environment, stars and cold gas [see @DottiSesanaDecarli:2012 for a recent review], to the point at which gravitational wave (GW) emission takes over, efficiently bringing the pair to coalescence. Since galaxies are observed to merge quite frequently and the observable Universe encompasses several billions of them, a sizeable cosmological population of MBHBs is expected to be emitting GWs at any time [@SesanaVecchioColacino:2008 hereinafter SVC08]. At nHz frequencies, their signal is going to be captured by pulsar timing arrays [PTAs @FosterBacker:1990]. Passing GWs leave an imprint on the times of arrival of ultra-stable millisecond pulsars. By cross correlating data from an ensemble of millisecond pulsars (i.e. from a PTA), this signature can be confidently identified [@HellingsDowns:1983]. Because pulsars are timed on a weekly basis ($\Delta{t}=1$week) over a period ($T$) of many years (almost 30yr for some of them), PTAs are sensitive to GWs in the frequency window $[1/T,1/(2\Delta{t})]\approx [1{\rm nHz},1\mu{\rm Hz}]$. The European Pulsar Timing Array [EPTA @2016MNRAS.458.3341D], the Parkes Pulsar Timing Array [PPTA @2016MNRAS.455.1751R] and the North American Nanohertz Observatory for Gravitational Waves [NANOGrav @2015ApJ...813...65T] have made considerable advances in increasing the sensitivity of their datasets.
The first data release of the International Pulsar Timing Array [IPTA @VerbiestEtAl_IPTA1stData:2016] is paving the way towards an effective combination of all PTA observations into a dataset that has the potential to detect GWs within the next ten years [@2015MNRAS.451.2417R; @2016ApJ...819L...6T]. Moreover, new powerful telescopes such as the SKA pathfinder MeerKAT in South Africa [@2009arXiv0910.2935B] and the 500-m FAST in China [@2011IJMPD..20..989N] will be online in the next couple of years, boosting the odds of GW detection with their exquisite timing capabilities. The frequency spectrum of the overall GW signal is given by the superposition of all sources emitting at a given frequency. Because of the abundance of MBHBs in the Universe, this has generally been described as a stochastic GW background (GWB), characterized, in the case of circular, GW-driven binaries, by a power-law spectrum $h_c\propto f^{-2/3}$ [@2001astro.ph..8028P]. However, two facts have become clear in the past decade. Firstly, to get to the PTA band, MBHBs need to efficiently interact with their stellar environment, which potentially has a double effect on the shape of the GW spectrum. If, at the lower end of the PTA sensitivity window, MBHBs shrink more efficiently through interaction with their environment than through GW emission, then the spectrum is attenuated, even showing a potential turnover [@KocsisSesana:2011; @Sesana:2013CQG; @2014MNRAS.442...56R; @2016arXiv160601900K]. Moreover, both scattering of ambient stars and interaction with a circumbinary disk tend to excite the binary eccentricity [@1996NewA....1...35Q; @2005ApJ...634..921A; @2011MNRAS.415.3033R]. This also results in a loss of power at low frequency (eccentric binaries evolve faster and generally emit at higher frequencies), potentially leading to a turnover in the spectrum [@EnokiNagashima:2007; @HuertaEtAl:2015].
Secondly, for $f>10$nHz, the bulk of the signal is provided by sparse massive and/or nearby sources, and cannot be described simply as a stochastic signal. This was first noticed by SVC08, who came to the conclusion that at high frequency the GW signal will be dominated by sparse, individually resolvable sources, leaving behind a stochastic GWB at a level falling below the nominal $f^{-2/3}$ power law. With the constant improvement of their timing capabilities, PTAs are placing increasingly stringent limits on the amplitude of the expected GWB [@ShannonEtAl_PPTAgwbg:2015; @LentatiEtAl_EPTAgwbg:2015; @ArzoumanianEtAl_NANOGRAV9yrData:2016], and detection is possible within the next decade. One crucial question is then: what astrophysics do we learn from a GWB detection with PTAs? This question has been sparsely tackled by a number of authors [see, e.g., @Sesana:2013CQG] but the answers have been mostly qualitative. A full assessment of what we can learn from PTA detection will stem from a combination of all the measurements PTAs will be able to make, including: the amplitude and shape of the unresolved GW signal, possible non-Gaussianity and non-stationarity, and the statistics and properties of individually resolvable sources. With this long-term goal in mind, a first step is to investigate what information can be retrieved from the [*amplitude and shape*]{} of the GWB. As part of the common effort of the EPTA collaboration [@2016MNRAS.458.3341D] to detect GWs with pulsar timing, in this paper, we derive the expected spectrum of a GWB for a generic population of eccentric MBHBs evolving in typical stellar environments. Expanding on the work of [@MiddletonEtAl:2016], the goal is to define a model that links the MBHB mass function and eccentricity distribution to the shape of the GWB spectrum. In particular, we find that the astrophysical properties of the MBHBs are reflected in two features of the spectrum.
The efficiency of environmental coupling and the MBHB eccentricity might cause a low-frequency flattening (or even a turnover) of the spectrum. The shape of the MBHB mass function affects the statistics of bright sources at high frequency, causing a steepening of the unresolved level of the GWB. We develop an efficient (mostly analytical) formalism to generate GW spectra given a minimal number of parameters defining the MBHB mass function, the efficiency of environmental coupling and the eccentricity distribution. In a companion paper we will show that the formalism developed here is suitable for an efficient exploration of the model parameter space, allowing, for the first time, a quantitative estimate of the MBHB population parameters from a putative GWB measurement. The paper is organized as follows. In Section \[sec:Model\], we derive a versatile and quick analytic approximation to the shape of a GW spectrum produced by eccentric GW-driven binaries. In Section \[sec:Coupling\] we study the evolution of eccentric MBHBs in stellar environments with properties constrained by observations of massive spheroids. We derive typical transition frequencies at which GWs take over and coalescence timescales, and we construct a simplified but robust framework to include the scattering-driven phase in the computation of the GW spectrum. Section \[sec:Population\] reports the main results of our investigation. By employing a range of MBHB populations, we demonstrate that our quick approximation is applicable in the PTA frequency window, with little dependence on the detailed properties of the stellar environment. Moreover, we derive a fast way to compute the high-frequency steepening of the spectrum to account for the small-number statistics of massive, high-frequency MBHBs. We discuss our results and describe future applications of our findings in Section \[sec:Conclusions\].
Analytical modelling of the GW spectrum {#sec:Model}
=======================================

The GWB generated by a population of eccentric binaries was first investigated by [@EnokiNagashima:2007] and more recently by [@HuertaEtAl:2015]. In this section we follow the same approach and review their main results. Following [@2001astro.ph..8028P], the characteristic strain $h_c(f)$ of the GW spectrum produced by a population of cosmological MBHBs can be written as $$h_c^2(f) = \frac{4G}{\pi c^2 f} \int_{0}^{\infty} dz \int_{0}^{\infty} d{\cal M} \frac{d^2n}{dzd{\cal M}} \frac{dE}{df_r}. \label{eq:hc}$$ Here, $d^2n/dzd{\cal M}$ defines the comoving differential number density (i.e. number of systems per Mpc$^3$) of merging MBHBs per unit redshift and unit chirp mass ${\cal M}=(M_1M_2)^{3/5}/(M_1+M_2)^{1/5}$ – where $M_1>M_2$ are the masses of the binary components – and the [*observed*]{} GW frequency at Earth $f$ is related to the [*emitted*]{} frequency in the source frame $f_r$ via $f_r=(1+z)f$. The evaluation of equation (\[eq:hc\]) involves a double integral in mass and redshift, generally to be performed numerically, and the computation of the energy spectrum $dE/df_r$. For an eccentric MBHB, this is given by a summation of harmonics as: $$\frac{dE}{df_r} = \sum_{n=1}^{\infty} \frac{1}{n} \frac{dE_n}{dt} \frac{dt}{de_n} \frac{de_n}{df_n}, \label{eq:dEdf}$$ where now $f_n=f_r/n$ is the restframe [*orbital*]{} frequency of the binary for which the $n$-th harmonic has an observed frequency equal to $f$ and $e_n$ is the eccentricity of the binary at that orbital frequency. We used the chain rule of differentiation to highlight the role of the eccentricity.
The first differential term in the rhs of equation (\[eq:dEdf\]) is the luminosity of the $n$-th GW harmonic given by $$\frac{dE_n}{dt} = \frac{32}{5} \frac{G^{7/3}}{c^5} \mathcal{M}^{10/3} (2\pi f_n)^{10/3} g_n(e_n) \label{eq:dEdt}$$ where $$\begin{split} & g_n(e) = \\ & \frac{n^4}{32} \Big[\Big(J_{n-2}(ne)-2eJ_{n-1}(ne)+\frac{2}{n}J_n(ne)+2eJ_{n+1}(ne)-J_{n+2}(ne)\Big)^2 \\ & +(1-e^2)\Big(J_{n-2}(ne)-2J_n(ne)+J_{n+2}(ne)\Big)^2 + \frac{4}{3n^2} J_n^2(ne)\Big], \end{split}$$ and $J_n$ is the $n$-th Bessel function of the first kind. The other two differential terms describe the evolution of the binary frequency and eccentricity with time and, for an eccentric MBHB driven by GW emission only, are given by $$\begin{aligned} \frac{df_n}{dt} & = \frac{96}{5} (2\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_n^{11/3} F(e_n) \label{eq:dfdt} \\ \frac{de_n}{dt} & = -\frac{1}{15} (2\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_n^{8/3} G(e_n) \label{eq:dedt}\end{aligned}$$ where $$\begin{aligned} F(e) & = \frac{1+(73/24)e^2+(37/96)e^4}{(1-e^2)^{7/2}} \\ G(e) & = \frac{304e+121e^3}{(1-e^2)^{5/2}}.\end{aligned}$$ By plugging (\[eq:dEdt\]), (\[eq:dfdt\]) and (\[eq:dedt\]) into expression (\[eq:dEdf\]), and the latter into equation (\[eq:hc\]), one obtains [@HuertaEtAl:2015] $$\begin{split} h_c^2(f) = & \frac{4G}{\pi c^2 f} \int_{0}^{\infty} dz \int_{0}^{\infty} d{\cal M}\frac{d^2n}{dzd{\cal M}} \\ & \frac{{\cal M}^{5/3}(\pi G)^{2/3}}{3(1+z)^{1/3} f^{1/3}}\sum_{n=1}^{\infty}\frac{g_n(e_n)}{F(e_n)(n/2)^{2/3}}. \label{eq:hcgw} \end{split}$$ We note that equation (\[eq:hcgw\]) is strictly valid only if the merger happens at a fixed redshift. However, we will see later that the typical merger timescale, $t_c$, of MBHBs can be Gyrs (cf Equation (\[eq:tcoal\]) and figure \[sec:fttcsingle\]), which is comparable to the cosmic expansion time $t_{\rm Hubble}$. Despite this fact, what actually matters is only the last phase of the MBHB inspiral, when the GW power is emitted in the PTA band.
Let us consider an optimistic PTA able to probe frequencies down to $\approx 1$nHz (i.e. observing for 30 years). If binaries are circular, then they start to emit in the PTA band only when their orbital frequency is $f_{\rm orb}=0.5$nHz. For typical MBHBs of ${\cal M}>3\times 10^{8}{{\rm M}_\odot}$ [which are those dominating the GWB, see e.g. @SesanaVecchioColacino:2008], the coalescence time at that point is $\tilde{t}_c<0.15$Gyr. The bulk of the PTA signal comes from $z<1.5$ [@2015MNRAS.447.2772R; @2016ApJ...826...11S], where the typical cosmic expansion time is already $t_{\rm Hubble}(z)>1$Gyr. This is almost an order of magnitude larger than $\tilde{t}_c$, which, we also stress, becomes much shorter with increasing MBHB masses. On the other hand, if binaries are very eccentric, they start to emit significant GW power in the PTA band when their orbital frequency is much lower than the minimum frequency probed by the array. Figure \[fig:speccompare\] shows that, if $e=0.9$, considering only the power emitted since $f_{\rm orb}=0.1$nHz provides a good approximation to the overall spectrum from $f \approx 1$nHz onwards. Although $f_{\rm orb}$ is much lower in this case, eccentric binaries coalesce much faster (see again Equation (\[eq:tcoal\])). For typical MBHBs of ${\cal M}>3\times 10^{8}{{\rm M}_\odot}$ with $e=0.9$, the coalescence time at that point is $\tilde{t}_c<10$Myr. Therefore, $\tilde{t}_c\ll t_{\rm Hubble}$ becomes a better approximation with increasing eccentricity, and Equation (\[eq:hcgw\]) generally provides a good approximation to the GWB. In practice, equation (\[eq:hcgw\]) is evaluated numerically. For each integration element, the sum in the expression has to be computed by solving numerically equations (\[eq:dfdt\]) and (\[eq:dedt\]) to evaluate $e_n$ at each of the orbital frequencies $f_n$ contributing to the spectrum observed at frequency $f$, and by then computing the appropriate $g_n(e_n)$ function. This procedure is extremely cumbersome and time-consuming.
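The Peters & Mathews ingredients $g_n(e)$ and $F(e)$ entering this sum can be coded in a few lines. The sketch below uses only the standard library, with a trapezoidal quadrature of the integral representation $J_k(x)=\pi^{-1}\int_0^\pi\cos(kt-x\sin t)\,dt$ standing in for `scipy.special.jv`; a convenient sanity check is that a circular binary has $g_2(0)=1$ and that $\sum_n g_n(e)=F(e)$.

```python
import math

def bessel_j(k, x, steps=4000):
    """Integer-order J_k(x) = (1/pi) * integral_0^pi cos(k t - x sin t) dt,
    evaluated with the trapezoidal rule (valid for any integer k)."""
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(k * math.pi))  # endpoints t = 0 and t = pi
    for i in range(1, steps):
        t = i * h
        total += math.cos(k * t - x * math.sin(t))
    return total * h / math.pi

def g(n, e):
    """Relative GW power radiated into the n-th harmonic (Peters & Mathews)."""
    b = {k: bessel_j(k, n * e) for k in range(n - 2, n + 3)}
    t1 = (b[n - 2] - 2 * e * b[n - 1] + (2.0 / n) * b[n]
          + 2 * e * b[n + 1] - b[n + 2]) ** 2
    t2 = (1.0 - e * e) * (b[n - 2] - 2 * b[n] + b[n + 2]) ** 2
    t3 = 4.0 / (3.0 * n * n) * b[n] ** 2
    return n ** 4 / 32.0 * (t1 + t2 + t3)

def F(e):
    """GW-driven frequency-evolution enhancement factor."""
    return (1.0 + 73.0 / 24.0 * e ** 2 + 37.0 / 96.0 * e ** 4) \
        / (1.0 - e ** 2) ** 3.5
```

For $e=0.9$ the sum over $g(n,e)$ only converges to $F(e)$ after several hundred harmonics, which is precisely why the brute-force evaluation of equation (\[eq:hcgw\]) is expensive.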
[@2009PhRvD..80h4001Y] proposed an analytical approximation for $e(f)$ that helps in speeding up the computation. However, it is accurate only for $e<0.9$, and even then one is left with the computation of the $n$ harmonics and the evaluation of the Bessel functions. Note that the GW energy spectrum of a binary with eccentricity $e$ peaks at the $n_p\approx(1-e)^{-3/2}$ harmonic, with still significant contributions at $n\sim 10n_p$ [@2010PhRvD..82j7501B]. For an MBHB with $e=0.9$ this implies the computation of several hundred harmonics.

Fitting formula and scaling properties {#sec:fit}
--------------------------------------

![Characteristic amplitude spectrum for different eccentricities, calculated with $n = 12500$ harmonics and no lower limit on $f_n$.[]{data-label="fig:specint"}](images/specint){width="45.00000%"}

Our first goal is to find an efficient and accurate way to numerically calculate $h_c^2(f)$. Although the double integral might be solvable analytically for a suitable form of $d^2n/dzd{\cal M}$, a numerical evaluation is generally required. We therefore concentrate on the computation of the single integral element. We thus consider a reference system with unit number density per Mpc$^3$, characterized by a selected chirp mass and redshift. This corresponds to setting $$\frac{d^2n}{dzd{\cal M}}=\delta({\cal M}-{\cal M}_0)\delta(z-z_0)/ \text{Mpc}^3.$$ Equation (\[eq:hcgw\]) then becomes $$\begin{split} h_{c,0}^2(f) & = \frac{4G^{5/3} \text{Mpc}^{-3}}{3\pi^{1/3} c^2 f^{4/3}} \frac{{\cal M}_0^{5/3}}{(1+z_0)^{1/3}}\sum_{n=1}^{\infty}\frac{g(n,e_n)}{F(e_n)(n/2)^{2/3}} \end{split} \label{eq:hc0}$$ To fully define the system we need to specify an initial MBHB eccentricity $e_0$ at a reference orbital frequency $f_0$, so that the eccentricity $e_n=e_n(n,f_0,e_0)$ can be evaluated for the appropriate $n$-th harmonic at the orbital frequency $f_n=f(1+z_0)/n$ via equations (\[eq:dfdt\]) and (\[eq:dedt\]).
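In practice, $e_n$ at each orbital frequency follows by inverting the closed-form frequency-eccentricity relation obtained from the ratio of equations (\[eq:dedt\]) and (\[eq:dfdt\]) (Peters 1964; the explicit form is quoted later in this section). Since the relation is monotonic, a simple bisection suffices; a minimal standard-library sketch:

```python
import math

def peters_sigma(e):
    """Monotonically increasing function of e; the orbital frequency of a
    GW-driven binary scales as f proportional to peters_sigma(e)**(-3/2)."""
    return (e ** (12.0 / 19.0) / (1.0 - e * e)
            * (1.0 + 121.0 / 304.0 * e * e) ** (870.0 / 2299.0))

def freq_ratio(e, e0):
    """f/f0 for a binary that had eccentricity e0 at orbital frequency f0."""
    return (peters_sigma(e) / peters_sigma(e0)) ** (-1.5)

def eccentricity_at(f_over_f0, e0, tol=1e-12):
    """Eccentricity once the orbital frequency has grown by f_over_f0 >= 1.
    freq_ratio decreases monotonically with e, so bisection converges."""
    lo, hi = 1e-15, e0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if freq_ratio(mid, e0) > f_over_f0:
            lo = mid  # mid is too circular: the true eccentricity is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because GW emission circularizes the orbit, `eccentricity_at` always returns a value below $e_0$ for any frequency growth factor larger than one.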
We study the behaviour of equation (\[eq:hc0\]) by taking a fiducial binary with ${\cal M}_0=4.16\times10^8{{\rm M}_\odot}$, $z_0=0.02$, $f_0=0.1$nHz and different eccentricities $e_0=0.3, 0.6, 0.9, 0.99$. Results are shown in figure \[fig:specint\]. Obviously, since the binary circularizes because of GW emission, at high frequency all the spectra eventually sit on the same power law. Moreover, the spectra look self-similar, as also noted by [@HuertaEtAl:2015]. This property allows the spectra to be shifted along the $f^{-2/3}$ diagonal, given an analytic fitting expression for one reference spectrum. Self-similarity has to be expected because equations (\[eq:dfdt\]) and (\[eq:dedt\]) combine to give [@EnokiNagashima:2007] $$\frac{f}{f_0}=\left(\frac{1-e_0^2}{1-e^2}\left(\frac{e}{e_0}\right)^{12/19}\left(\frac{1+\frac{121}{304}e^2}{1+\frac{121}{304}e_0^2}\right)^{870/2299}\right)^{-3/2}.$$ This means that the eccentricity evolution is just a function of the frequency ratio $f/f_0$ and there is no intrinsic scale in the problem. Any inspiral will thus pass through any given eccentricity at some frequency during the process. A reference binary with $e_0=0.9$ at $f_0=10^{-10}$Hz is simply an earlier stage in the evolution of a binary with a smaller $e$ at a higher $f$, see figure \[fig:specshift\]. Therefore, the spectrum of a binary with a different initial eccentricity $e_t$ specified at a different initial frequency $f_t$ can be simply obtained by shifting the spectrum of the reference binary. What one needs to know is by how much the spectrum has to be shifted along the $f^{-2/3}$ diagonal. To answer this question we identify a reference point of the spectrum. The obvious choice is the peak frequency defined by [@HuertaEtAl:2015].
They showed that the deviation of the spectrum of an eccentric binary, defined by fixing the eccentricity $e$ at a given orbital frequency $f$, with respect to its circular counterpart peaks at a frequency $f_p$ given by[^3] $$\frac{f_p}{f} = \frac{1293}{181} \Big[\frac{e^{12/19}}{1-e^2}\big(1+\frac{121e^2}{304}\big)^{870/2299}\Big]^{3/2} \label{eq:fpeak}$$

![Analytical spectral shift. The upper panel shows the eccentricity evolution over frequency for a fiducial spectrum characterized by the initial conditions $(e_0 = 0.9, \ f_0 = 10^{-10}$Hz$)$ (blue) and a generic spectrum characterized by $(e_t, \ f_t)$ (green). The lower panel shows the respective GW spectra (again, blue for fiducial and green for generic) and the steps involved in the shifting. The two vertical dashed lines mark the 'peak frequencies' defined in [@HuertaEtAl:2015], the horizontal arrow shifts the fiducial spectrum by $f_{p,t}/f_{p,0}$ (black dashed spectrum), and the vertical arrow moves it up by a factor $(f_{p,t}/f_{p,0})^{-2/3}$, as described in the main text.[]{data-label="fig:specshift"}](images/specshift){width="45.00000%"}

Let us consider two spectra as shown in figure \[fig:specshift\]. The first one is a reference spectrum $h_{c,0}(f)$ defined by $e_0=0.9$ at $f_0=10^{-10}$Hz, the second one is a generic spectrum $h_c(f)$ characterized by a generic value of $e_t$ at a transition frequency $f_t$ typically different from $f_0$. By feeding these inputs into equation (\[eq:fpeak\]) we directly get the two peak frequencies $f_{p,0}$ and $f_{p,t}$ respectively, marked in the lower panel of the figure. We want to compute $h_c(f)$ from $h_{c,0}(f)$. It is clear that the peak frequency at $f_{p,0}$ has to shift to $f_{p,t}$, therefore $h_c(f)$ has to correspond to $h_{c,0}(f')$ where $f'=f(f_{p,0}/f_{p,t})$. However, this transformation just shifts the spectrum horizontally. To get to $h_c(f)$ we still need to multiply $h_{c,0}(f')$ by a factor $(f_{p,t}/f_{p,0})^{-2/3}$.
The total shift therefore has the form $$h_c(f) = h_{c,0}\Big(f\frac{f_{p,0}}{f_{p,t}}\Big)\left(\frac{f_{p,t}}{f_{p,0}}\right)^{-2/3}. \label{eq:hshift}$$ In fact, it is easy to verify that by applying equation (\[eq:hshift\]) to any of the spectra in figure \[fig:specint\] all the other spectra are recovered. All we need then is a suitable fit for a reference MBHB. For this, we take the reference case $f_0=10^{-10}$Hz and $e_0=0.9$ and, based on the visual appearance of the spectrum, we fit a trial analytic function of the form $$h_{c,{\rm fit}}(f) = a_0 \bar{f}^{a_1} e^{-a_2 \bar{f}}+b_0 \bar{f}^{b_1} e^{-b_2 \bar{f}}+c_0 \bar{f}^{-c_1} e^{-c_2/\bar{f}} \label{eq:hcfit}$$ where $a_i, b_i, c_i$ are constants to be determined by the fit and $\bar{f}=f/(10^{-8}{\rm Hz})$. We find that setting $$\begin{aligned} a_0&= 7.27\times 10^{-14}\,\,\,\,\,\,\,\,\,\, & a_1&=0.254 & a_2&=0.807\\ b_0&= 1.853\times 10^{-12}\,\,\,\,\,\,\,\,\,\, & b_1&=1.77 & b_2&=3.7\\ c_0&= 1.12\times 10^{-13}\,\,\,\,\,\,\,\,\,\, & c_1&=0.676 & c_2&=0.6\end{aligned}$$ reproduces the spectrum within a maximum error of 1.5% in log-amplitude (i.e. 3.5% in amplitude), as shown in figure \[fig:specfit\]. The figure also shows the difference between the analytical fit presented in this paper and that of [@HuertaEtAl:2015]. The low-frequency shape (left of the peak) is recovered more accurately by equation (\[eq:hcfit\]). ![Gravitational wave spectrum $h_c(f)$ for the reference binary described in the text, computed by summing $n = 12500$ harmonics (dashed line), compared to the best fit $h_{c,{\rm fit}}(f)$ with an analytic function of the form given by equation (\[eq:hcfit\]) (solid line) and by [@HuertaEtAl:2015] (dotted line).
The lower panel shows the difference ${\rm log}_{10}h_{c,{\rm fit}}-{\rm log}_{10}h_{c}$ as a function of frequency.[]{data-label="fig:specfit"}](images/specfit){width="45.00000%"} With this fitting formula in hand, equation (\[eq:hshift\]) readily enables the analytical evaluation of the spectrum for any desired pair of reference values $f_t$, $e_t=e(f_t)$ (note that those can be functions of the MBHB parameters, e.g. its chirp mass, or of the environment in which the binary evolves, as we will see in Section \[sec:Coupling\]). Moreover, equation (\[eq:hc0\]) shows that the spectrum of a binary with different chirp mass and redshift can be simply obtained by multiplying $h_{c,{\rm fit}}(f)$ by $({\mathcal{M}}/{\mathcal{M}_0})^{5/3}$ and $(({1+z})/({1+z_0}))^{-1/3}$, respectively. Therefore, the overall spectrum of the MBHB population can be generated from $h_{c,{\rm fit}}(f)$ as $$\begin{split} h_c^2(f) = & \int_{0}^{\infty} dz \int_{0}^{\infty} d{\cal M} \frac{d^2n}{dzd{\cal M}} h_{c,{\rm fit}}^2\Big(f\frac{f_{p,0}}{f_{p,t}}\Big) \\ & \Big(\frac{f_{p,t}}{f_{p,0}}\Big)^{-4/3} \Big(\frac{\mathcal{M}}{\mathcal{M}_0}\Big)^{5/3} \Big(\frac{1+z}{1+z_0}\Big)^{-1/3}, \label{eq:hcanalytic} \end{split}$$ where the ratio $f_{p,0}/f_{p,t}$ is calculated by means of equation (\[eq:fpeak\]). Range of applicability {#sec:applicability} ---------------------- The assumption behind the above derivation is that the dynamics of the MBHB is purely driven by GW emission, i.e., its evolution is defined by equations (\[eq:dfdt\]) and (\[eq:dedt\]) formally back to $f \rightarrow 0$. This of course cannot be true in practice; the question is whether the derivation provides a good approximation in the frequency range relevant to PTA detection. ![Characteristic amplitude spectrum for different eccentricities calculated with $n = 12500$ harmonics where only frequencies $f_n \geq 10^{-10}\,$Hz contribute (dashed lines) compared to the spectrum computed with no limitations on $f_n$ (solid lines). 
The lower panel shows the difference ${\rm log}_{10}h_{c,{\rm fit}}-{\rm log}_{10}h_{c}$ as a function of frequency for the different cases, and it is always $<0.1$ for $f>1\,$nHz.[]{data-label="fig:speccompare"}](images/speccompare){width="45.00000%"} MBHBs are driven by coupling with their environment up to a certain [*transition orbital frequency*]{}, $f_t$. At lower frequencies the evolution is faster than what is predicted by GW emission only and the eccentricity does not indefinitely grow to approach $e=1$. If the lowest frequency probed by PTA is $f_{\rm min}$ (which is $1/T$, where $T$ is the observation time, as defined in the introduction), then a necessary requirement for the applicability of equation (\[eq:hcanalytic\]) is $f_t<f_{\rm min}$. This is, however, not a sufficient condition because for an eccentric MBHB population, the spectrum at $f_{\rm min}$ is defined by the contribution of binaries emitting at $f_n<f_{\rm min}$ satisfying the requirement $f_n=f_{\rm min}(1+z)/n$ for some $n$. If $f_t=f_{\rm min}$, and therefore the binary evolves faster and is less eccentric at $f_n<f_{\rm min}$, then the contribution of the $n$-th harmonics of systems emitting at $f_n$ is smaller, affecting the overall spectrum at $f_{\rm min}$ and above. To investigate the impact of this fact on the spectrum we consider the same reference binaries with transition frequency $f_t=f_0=0.1$nHz and $e_t=e_0=0.3, 0.6, 0.9, 0.99$, but now assuming they [*form*]{} at $f_t$, i.e., discarding the contribution of lower frequencies to the computation of the spectrum. The result is compared to the full spectrum in figure \[fig:speccompare\]. As expected, the absence of binaries at $f<f_t$ partially suppresses the signal observed at $f>f_t$. 
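The analytic recipe of the previous section can be condensed into a short sketch: the peak-frequency ratio of equation (\[eq:fpeak\]), the fitted reference spectrum of equation (\[eq:hcfit\]) with the constants quoted above, and the shift of equation (\[eq:hshift\]). The function and variable names below are ours, introduced only for illustration.

```python
import math

def fpeak_ratio(e):
    # Peak-to-reference frequency ratio of equation (eq:fpeak).
    return (1293.0 / 181.0) * (e**(12.0/19.0) / (1.0 - e**2)
            * (1.0 + 121.0 * e**2 / 304.0)**(870.0/2299.0))**1.5

# Fit constants of equation (eq:hcfit) for the reference case
# (e_0 = 0.9 at f_0 = 1e-10 Hz), as quoted in the text.
A = (7.27e-14, 0.254, 0.807)
B = (1.853e-12, 1.77, 3.7)
C = (1.12e-13, 0.676, 0.6)

def hc_fit(f):
    # Fitted reference spectrum, equation (eq:hcfit); f in Hz.
    fb = f / 1e-8
    return (A[0] * fb**A[1] * math.exp(-A[2] * fb)
            + B[0] * fb**B[1] * math.exp(-B[2] * fb)
            + C[0] * fb**(-C[1]) * math.exp(-C[2] / fb))

def hc_shifted(f, e_t, f_t, e0=0.9, f0=1e-10):
    # Shifted spectrum, equation (eq:hshift): move the peak from
    # f_{p,0} to f_{p,t} and rescale the amplitude accordingly.
    fp0 = f0 * fpeak_ratio(e0)
    fpt = f_t * fpeak_ratio(e_t)
    return hc_fit(f * fp0 / fpt) * (fpt / fp0)**(-2.0/3.0)
```

For the reference parameters themselves the shift reduces to the identity, i.e. `hc_shifted(f, 0.9, 1e-10)` returns `hc_fit(f)`.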
However, three things should be noticed about this suppression: i) the suppression is relevant only up to $f\sim 10f_t$, ii) the effect is small for highly eccentric binaries (this is because, for large $e$, the gravitational wave strain $h_c$ is dominated by the first rather than the second harmonic; see figure 4 in [@2016ApJ...817...70T]), and iii) this is the most pessimistic case, since for a realistic orbital evolution, binaries do emit also at $f<f_t$, but their contribution to the spectrum at $f>f_t$ is smaller due to the faster evolution and lower eccentricity. Therefore, our approximation should hold in the PTA band as long as the typical transition frequency $f_t$ is a few times smaller than $f_{\rm min}$. In the next section we will show that for a typical MBHB population driven by scattering of stars this is indeed generally the case. Binaries in stellar environments {#sec:Coupling} ================================ Following galaxy mergers, MBHBs sink to the centre because of dynamical friction [@1943ApJ....97..255C] eventually forming a bound pair when the mass in stars and gas enclosed in their orbit is of the order of the binary mass. For MBHBs with $M=M_1+M_2>10^8{{\rm M}_\odot}$ relevant to PTA, this occurs at an orbital separation of a few parsecs, and the corresponding GW emission is well outside the PTA band. At this point, dynamical friction becomes highly inefficient, and further hardening of the binary proceeds via exchange of energy and angular momentum with the dense gaseous and stellar environment [see @DottiSesanaDecarli:2012 and references therein]. The bulk of the PTA GW signal is produced by MBHBs hosted in massive galaxies (generally spheroids) at redshift $<1$. [@Sesana:2013] and [@2015MNRAS.447.2772R] further showed that the vast majority of the signal comes from ’red’ systems, featuring old stellar populations and only a modest amount of cold gas. 
This fact does not immediately imply that MBHBs cannot be driven by interaction with cold gas in a form of a massive circumbinary disk. After all, because of the observed MBH-host galaxy relations [see, e.g. @KormendyHo:2013], even a mere 1% of the galaxy baryonic mass in cold gas is still much larger than the MBHB mass, and therefore sufficient to form a circumbinary disk with mass comparable to the binary, if concentrated in the very centre of the galaxy. On the other hand, the relative fraction of observed bright quasars declines dramatically at $z<1$ [e.g. @2007ApJ...654..731H], implying that accretion of large amounts of cold gas, and hence a scenario in which MBHBs evolve in massive circumbinary disks, is probably not the norm. We therefore concentrate here on MBHBs evolving via interaction with stars. [@SesanaKhan:2015] have shown that, following the merger of two stellar bulges, the evolution of the bound MBHBs can be approximately described by the scattering experiment formalism developed by [@1996NewA....1...35Q]. In Quinlan’s work, the binary semimajor axis evolution follows the simple equation $$\frac{da}{dt}=-\frac{HG\rho a^2}{\sigma}, \label{eq:dadt}$$ where $\rho$ is a fiducial stellar background density and $\sigma$ the characteristic value of the Maxwellian distribution describing the velocity of the stars. $H$ is a dimensionless constant (empirically determined by the scattering experiments) of order $15-20$, largely independent of the MBHB mass ratio $q=M_2/M_1$ and eccentricity $e$. [@SesanaKhan:2015] found that equation (\[eq:dadt\]) is applicable to post merger stellar distributions provided that $\sigma$ is the typical velocity dispersion of the stellar bulge and $\rho$ is the average stellar density at the MBHB influence radius, $\rho_i=\rho(r_i)$, defined approximately as the radius enclosing a stellar mass twice the total MBHB mass $M=M_1+M_2$. 
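To get a sense of the numbers implied by equation (\[eq:dadt\]), the hardening timescale $a/|\dot a| = \sigma/(H G \rho a)$ can be evaluated in convenient units. This is our own illustrative sketch, not part of the formalism above; the input values in the example are arbitrary.

```python
G_PC = 4.301e-3            # G in pc (km/s)^2 / Msun
PC_PER_KMS_IN_MYR = 0.978  # 1 pc/(km/s) expressed in Myr

def hardening_timescale_myr(a_pc, rho_msun_pc3, sigma_kms, H=16.0):
    # a / |da/dt| with da/dt = -H G rho a^2 / sigma (equation eq:dadt).
    t = sigma_kms / (H * G_PC * rho_msun_pc3 * a_pc)  # in pc/(km/s)
    return t * PC_PER_KMS_IN_MYR
```

For example, a binary at $a=1\,$pc in a background with $\rho=100\,{{\rm M}_\odot}\,{\rm pc}^{-3}$ and $\sigma=200\,$km s$^{-1}$ hardens on a timescale of a few tens of Myr.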
In stellar dynamics jargon, this corresponds to a situation where the MBHB ’loss-cone’ is full at the binary influence radius. By using different methods, [@2015ApJ...810...49V] came to similar conclusions stressing, however, that in the long term the MBHB hardening rate tends to drop compared to equation (\[eq:dadt\]), a hint that the loss-cone might not be kept full in the long term. The evolution of the Keplerian orbital frequency $f_K$ of the MBHB can therefore be written as: $$\frac{df_K}{dt}=\frac{df_K}{dt}\Big{|}_* + \frac{df_K}{dt}\Big{|}_{gw}, \label{eq:fcombined}$$ where $$\begin{aligned} \frac{df_K}{dt}\Big{|}_* & = \frac{3}{2 (2\pi)^{2/3}} \frac{H\rho_i}{\sigma} G^{4/3} M^{1/3} f_K^{1/3}, \label{eq:fstar} \\ \frac{df_K}{dt}\Big{|}_{gw} & = \frac{96}{5} (2\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_K^{11/3} F(e). \label{eq:fgw}\end{aligned}$$ Equation (\[eq:fstar\]) is readily obtained from equation (\[eq:dadt\]) by using Kepler’s law, and equation (\[eq:fgw\]) is the standard GW frequency evolution already seen in the previous section. It is easy to show that at low frequency stellar hardening dominates and GW emission takes over at a transition frequency that can be calculated by equating the two evolution terms to obtain: $$\begin{split} f_{t} & = (2\pi)^{-1} \Big(\frac{5H\rho_i}{64\sigma F(e)}\Big)^{3/10} \frac{c^{3/2}}{G^{1/10}} \frac{(1+q)^{0.12}}{q^{0.06}} \mathcal{M}^{-2/5}\\ & \approx 0.56\pi^{-1} \Big(\frac{5H\rho_i}{64\sigma F(e)}\Big)^{3/10} \frac{c^{3/2}}{G^{1/10}} \mathcal{M}^{-2/5} \\ & = 0.356\, {\rm nHz}\, \left(\frac{1}{F(e)}\frac{\rho_{i,100}}{\sigma_{200}}\right)^{3/10}\mathcal{M}_9^{-2/5} \label{eq:ft} \end{split}$$ where $\rho_{i,100}=\rho_i/(100\,{{\rm M}_\odot}{\rm pc}^{-3})$, $\sigma_{200}=\sigma/(200\,{\rm km\,s}^{-1})$, $\mathcal{M}_9=\mathcal{M}/(10^9\,{{\rm M}_\odot})$ and we assume $H=16$ in the last line. We notice that in the mass ratio range $0.1<q<1$, which by far dominates the PTA GW signal [see, e.g. 
figure 1 in @2012MNRAS.420..860S], the function $(1+q)^{0.12}/q^{0.06}$ falls in the range $[1.08,1.15]$. Therefore, in the last two lines of equation (\[eq:ft\]) we neglected the mass ratio dependence by substituting $(1+q)^{0.12}/q^{0.06}=1.12$. A fair estimate of the MBHB coalescence timescale is provided by the evolution timescale at the transition frequency, $t_c=f_t\,(dt/df)|_{f_t}$. By using equations (\[eq:ft\]) and (\[eq:fstar\]) one obtains $$\begin{split} t_c & = \frac{5}{96} (2\pi)^{-8/3} \frac{c^5}{G^{5/3}} \mathcal{M}^{-5/3} f_t^{-8/3} F(e)^{-1}\\ & = \frac{2}{3} \frac{c}{G^{7/5}} \Big(\frac{\sigma}{H\rho_i}\Big)^{4/5} \Big(\frac{5}{64F(e)}\Big)^{1/5} \frac{q^{0.16}}{(1+q)^{0.32}} \mathcal{M}^{-3/5}\\ & \approx 0.5 \frac{c}{G^{7/5}} \Big(\frac{\sigma}{H\rho_i}\Big)^{4/5} \Big(\frac{5}{64F(e)}\Big)^{1/5} \mathcal{M}^{-3/5}\\ & = 0.136 \ {\rm Gyr} \ F(e)^{-1/5}\left(\frac{\sigma_{200}}{\rho_{i,100}}\right)^{4/5}\mathcal{M}_9^{-3/5} \label{eq:tcoal} \end{split}$$ where, once again, we omitted mild $q$ dependences in the last approximation by substituting $q^{0.16}/(1+q)^{0.32}=0.75$ ($0.67 < q^{0.16}/(1+q)^{0.32} < 0.8$ for $0.1<q<1$). For an operational definition of $f_t$ and $t_c$, we need to define $\rho_i$ and $\sigma$. The density profile of massive spheroidals is well captured by the Dehnen density profile family [@Dehnen:1993] which takes the form $$\rho(r) = \frac{(3-\gamma)M_* a}{4\pi} r^{-\gamma} (r+a)^{\gamma-4}$$ where $0.5 < \gamma < 2$ determines the inner slope of the stellar density distribution, $M_*$ is the total stellar mass of the bulge, and $a$ is its characteristic radius. 
The influence radius $r_i$ of the MBHB is then set by the condition $$2M = \int_0^{r_i} 4\pi r^2 \rho(r) dr \label{eq:ricondition}$$ which gives $$r_i = \frac{a}{(2M/M_*)^{1/(\gamma-3)}-1}.$$ Inserting $r_i$ back into the Dehnen profile gives $$\rho_i \approx \frac{(3-\gamma)M_*}{4\pi a^3} \Big(\frac{2M}{M_*}\Big)^{\gamma/(\gamma-3)}$$ where we used the fact that $2M \ll M_*$. It is possible to reduce the number of effective parameters $M, M_*, a, \gamma, \sigma$ by employing empirical relations connecting pairs of them, valid for stellar spheroids. In particular we use the $a-M_*$ relation of [@DabringhausenHilkerKroupa:2008], and the $M-\sigma$ and $M-M_*$ relations of [@KormendyHo:2013]: $$\begin{aligned} a & = 239 \,\text{pc}\, (2^{1/(3-\gamma)}-1) \Big(\frac{M_*}{10^9{{\rm M}_\odot}}\Big)^{0.596} \label{eq:ascale} \\ \sigma & = 261\, \text{km s}^{-1}\, \Big(\frac{M}{10^9{{\rm M}_\odot}}\Big)^{0.228} \label{eq:msigma} \\ M_* & = 1.84\times 10^{11}\,{{\rm M}_\odot}\, \Big(\frac{M}{10^9{{\rm M}_\odot}}\Big)^{0.862}. \label{eq:mbulge}\end{aligned}$$ This allows us to express $\rho_i$ as a function of $M$ and $\gamma$ only in the form $$\rho_i = 0.092 \,{{\rm M}_\odot}\text{pc}^{-3} \,{\cal F}(\gamma) \Big(\frac{M}{10^9 M_\odot}\Big)^{{\cal G}(\gamma)}, \label{eq:rhoi}$$ where $$\begin{aligned} {\cal F}(\gamma) & = \frac{(3-\gamma) 92^{\gamma/(3-\gamma)}}{(2^{1/(3-\gamma)}-1)^3}\nonumber\\ {\cal G}(\gamma) & = -0.68-0.138\frac{\gamma}{3-\gamma}\nonumber\end{aligned}$$ Equations (\[eq:msigma\]) and (\[eq:rhoi\]) are expressed as a function of $M$. However, we notice from equation (\[eq:ft\]) that $f_t\propto M^{-0.3-0.041\gamma/(3-\gamma)}$. Since ${\cal M}=Mq^{3/5}/(1+q)^{6/5}$, if $0.1<q<1$, then $2.32 {\cal M}<M< 3.57{\cal M}$. It is easy to show that for $0.5<\gamma<2$, by substituting $M=2.9{\cal M}$, equation (\[eq:ft\]) returns $f_t$ within 10% of the correct value when $0.1<q<1$. 
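The chain of relations above (equations (\[eq:msigma\]), (\[eq:rhoi\]), (\[eq:ft\]) and (\[eq:tcoal\])) can be sketched as follows. The helper names are ours; `peters_F` assumes the standard Peters (1964) enhancement factor $F(e)$ defined earlier in the paper, and the coalescence time is coded with the $(\sigma/\rho_i)^{4/5}$ dependence implied by the second line of equation (\[eq:tcoal\]).

```python
def peters_F(e):
    # GW enhancement factor F(e) entering equations (eq:ft) and
    # (eq:tcoal); assumed to be the standard Peters (1964) factor.
    return (1 + 73.0/24.0 * e**2 + 37.0/96.0 * e**4) / (1 - e**2)**3.5

def rho_i(M, gamma):
    # Stellar density at the influence radius, equation (eq:rhoi);
    # M in Msun, result in Msun / pc^3.
    Fg = ((3 - gamma) * 92.0**(gamma / (3 - gamma))
          / (2.0**(1.0 / (3 - gamma)) - 1.0)**3)
    Gg = -0.68 - 0.138 * gamma / (3 - gamma)
    return 0.092 * Fg * (M / 1e9)**Gg

def sigma(M):
    # M-sigma relation, equation (eq:msigma); km/s.
    return 261.0 * (M / 1e9)**0.228

def f_t_nhz(Mc, e, gamma=1.0):
    # Transition frequency, last line of equation (eq:ft), in nHz;
    # Mc = chirp mass in Msun; M = 2.9 Mc absorbs the mild q dependence.
    M = 2.9 * Mc
    r100 = rho_i(M, gamma) / 100.0
    s200 = sigma(M) / 200.0
    return 0.356 * (r100 / (s200 * peters_F(e)))**0.3 * (Mc / 1e9)**(-0.4)

def t_c_gyr(Mc, e, gamma=1.0):
    # Coalescence timescale, last line of equation (eq:tcoal), in Gyr.
    M = 2.9 * Mc
    r100 = rho_i(M, gamma) / 100.0
    s200 = sigma(M) / 200.0
    return 0.136 * peters_F(e)**(-0.2) * (s200 / r100)**0.8 * (Mc / 1e9)**(-0.6)
```

With these defaults, a ${\cal M}=10^9\,{{\rm M}_\odot}$, $\gamma=1$ circular binary gives $f_t$ of a fraction of a nHz and $t_c$ of order a Gyr, in line with the numbers discussed below.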
Finally, equation (\[eq:fcombined\]) defines only the frequency evolution of the MBHB. For a complete description of the system, tracking of the eccentricity evolution is also required. Both scattering experiments and N-body simulations have shown that MBHB-star interactions tend to increase $e$. The increase is generally mild for equal mass binaries and the eccentricity at the transition frequency largely depends on the initial eccentricity at the moment of binary pairing. Because of this mild evolution at $f_K<f_t$ and in order to keep the problem simple, we approximate the eccentricity evolution of the MBHB as: $$\frac{de}{dt}= \begin{cases} 0\,\,\,\,\,\,\,\, {\rm if}\,\,\,\,\,\,\,\, f_K<f_t\\ -\frac{1}{15} (2\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_K^{8/3} G(e)\,\,\,\,\,\,\,\, {\rm if}\,\,\,\,\,\,\,\, f_K>f_t \label{eq:ecombined} \end{cases}$$ Results: Gravitational wave spectra calculation {#sec:Population} =============================================== Dynamics of MBHBs: transition frequency and coalescence time {#sec:fttcsingle} ------------------------------------------------------------ ![image](images/ft10){width="0.85\columnwidth"} ![image](images/tc10){width="0.85\columnwidth"}\ ![image](images/ft15){width="0.85\columnwidth"} ![image](images/tc15){width="0.85\columnwidth"}\ ![image](images/fta2){width="0.85\columnwidth"} ![image](images/tca2){width="0.85\columnwidth"}\ ![image](images/ftm3){width="0.85\columnwidth"} ![image](images/tcm3){width="0.85\columnwidth"}\ Before going into the computation of the GW spectrum, we can have a look at how transition frequency $f_t$ and coalescence timescale $t_c$ change as a function of ${\cal M}$ and $e_t$. In the following we consider four selected models representative of a range of physical possibilities having a major impact on the MBHB dynamics. Results are shown in figure \[fig:ftandtc\]. The top panel shows a model with $\gamma=1$ and $\rho_i$ given by equation (\[eq:rhoi\]). 
We consider this as our default model, because most of the PTA signal is expected to come from MBHBs hosted in massive elliptical galaxies with relatively shallow density profiles. The GW signal is generally dominated by MBHBs with ${\cal M}>3\times 10^8{{\rm M}_\odot}$, which are therefore our main focus. At low $e_t$ those systems have $f_t<0.3$nHz and coalescence timescales in the range $1.5-4$Gyr. For $e_t=0.9$, $f_t$ is ten times lower, nonetheless $t_c$ is roughly an order of magnitude shorter, by virtue of the $F(e)$ factor appearing in equation (\[eq:tcoal\]). The effect of a steeper density profile is shown in the second row of plots in figure \[fig:ftandtc\], where we now assume $\gamma=1.5$. The effect of a steeper inner power law is to make the stellar distribution more centrally concentrated, thus enhancing $\rho_i$. This makes stellar hardening more efficient and shifts $f_t$ by a factor $\approx 1.3$ upwards, making $t_c$ a factor of $\approx 2$ shorter (using a shallower profile $\gamma=0.5$ would have an opposite effect of the same magnitude). We recognize that $\rho_i$ given by equation (\[eq:rhoi\]) relies on a number of scaling relations that are constructed on a limited sample of local, non-merging galaxies. We therefore also explore the effect of a bias in some of those relations. For example, merging galaxies might be more centrally concentrated and we explore this possibility by arbitrarily reducing the typical scale radius $a$ by a factor of two compared to equation (\[eq:ascale\]). The effect is shown in the third row of panels of figure \[fig:ftandtc\] assuming $\gamma=1$, and it is very similar to (slightly larger than) the effect of the steeper ($\gamma=1.5$) density profile shown in the second row. Finally, it has been proposed that the MBH-galaxy relations might be biased high because of selection effects in the targeted galaxy samples. 
[@2016MNRAS.460.3119S] propose that the typical MBH mass might be in fact a factor $\approx 3$ lower than what is implied by equations (\[eq:msigma\]) and (\[eq:mbulge\]). We therefore explore a model featuring $\gamma=1$ but with MBH mass decreased by a factor of three for given galaxy properties. Results are shown in the bottom panels of figure \[fig:ftandtc\]. For a given MBHB mass, this model implies just a minor change in $\rho_i$ and $\sigma$, with negligible effects on $f_t$ and $t_c$, compared to the fiducial model. GW spectra of fiducial MBHBs ---------------------------- --------------------------------------------------- -- --------------------------------------------------- ![image](images/spec10){width="0.95\columnwidth"} ![image](images/spec90){width="0.95\columnwidth"} --------------------------------------------------- -- --------------------------------------------------- The GW spectrum generated by a MBHB evolving in a fiducial stellar background can now be computed by evaluating $dE/df_r$ in equation (\[eq:dEdf\]), where the frequency and eccentricity evolution of the pair are now given by equations (\[eq:fcombined\]) and (\[eq:ecombined\]), instead of equations (\[eq:dfdt\]) and (\[eq:dedt\]), and the system is defined by the transition frequency $f_t$ as given in equation (\[eq:ft\]) at which $e_t$ must be specified. We consider a fiducial MBHB with ${\cal{M}}=10^9{{\rm M}_\odot}$ at $z=0.02$, and compare the real spectrum including stellar scattering to our approximated formula given by equation (\[eq:hcfit\]) and appropriately re-scaled as described in section \[sec:fit\]. Results are shown in figure \[fig:specsingle\] for all the environment models of figure \[fig:ftandtc\]. In this and the following plots, solid lines are spectra computed via equation (\[eq:hcfit\]), whereas dashed lines are spectra that include stellar scattering driving the binary evolution at low frequency. 
We start by discussing the outcome of our fiducial model with $\gamma=1$, as a function of $e_t$, which is shown in the left plot. For circular binaries $f_t\approx0.2$nHz, well below the minimum PTA frequency $f_{\rm min}=1$nHz, appropriate for a PTA baseline of 30 yr, achievable within 2030. By increasing $e_t$, $f_t$ is pushed to lower values, eventually becoming irrelevant. Obviously, the real spectrum diverges from our analytic fit at $f<f_t$. Moreover, for moderately eccentric binaries ($e_t=0.5$ panel) the two spectra differ significantly up to almost $f=1$ nHz. This is mostly because the presence of the stellar environment ’freezes’ the eccentricity to 0.5 at $f<f_t$; the real spectrum at $f\gtrsim f_t$ is missing the contribution from the very eccentric phase at $f<f_t$ that occurs when the environment is not taken into account and the binary is evolved back in time assuming GW emission only. The problem becomes less severe for larger values of $e_t$. Even though the presence of the environment freezes the binary eccentricity, $e_t$ is large enough that most of the relevant contribution from the higher harmonics emitted at low frequencies is kept. Most importantly, in all cases, at all $f>f_{\rm min}=1$nHz, our analytical fit perfectly describes the emitted spectrum. The right plot in figure \[fig:specsingle\] shows the spectrum assuming $e_t=0.9$ for the four different environment models outlined in the previous subsection. Again, we notice that in all cases the GW spectrum is well described by our fitting formula in the relevant PTA frequency range, and the peak of the spectrum is only mildly affected (within a factor of two) by the different host models. Stochastic background from a cosmic MBHB population --------------------------------------------------- Having studied the signal generated by a fiducial system, we turn now to the computation of the overall GW spectrum expected from a cosmological population of MBHBs. 
To do this, we simply need to specify the distribution $d^2n/dzd{\cal M}$. We consider two population models: - [*model-NUM*]{}: the $d^2n/dzd{\cal M}$ population is numerically constructed on the basis of a semi-analytic galaxy formation model implemented on the Millennium simulation [@SpringelEtAl_MilleniumSim:2005], as described in [@SesanaVecchioVolonteri:2009]. In particular, we use a model implementing the $M_{\rm BH}-M_{\rm bulge}$ relation of [@2003ApJ...589L..21M], with accretion occurring [*before*]{} the final MBHB coalescence on both MBHs in the pair. - [*model-AN*]{}: employs a parametrised population function of the form [@MiddletonEtAl:2016] $$\frac{d^2 n}{dz d \log_{10} \mathcal{M}} = \dot{n_0} \Big(\frac{\mathcal{M}}{10^7 M_\odot}\Big)^{-\alpha} e^{-\mathcal{M}/\mathcal{M}_*} (1+z)^\beta e^{-z/z_*} \frac{dt_r}{dz} \label{eq:pop},$$ where $t_r$ is time measured in the source reference frame and $$\frac{dt_r}{dz} = \frac{1}{H_0 (1+z) (\Omega_M(1+z)^3 + \Omega_k(1+z)^2 + \Omega_\Lambda)^{1/2}} \label{eq:dtrdz}$$ Based on loose cosmological constraints [see @MiddletonEtAl:2016 for details], parameters lie in the range $\dot{n}_0 \in [10^{-20},10^3] \ \text{Mpc}^{-3} \text{Gyr}^{-1}, \ \alpha \in [-3,3], \ \mathcal{M}_* \in [10^6,10^{11}] \ M_\odot, \ \beta \in [-2,7], \ z_* \in [0.2,5]$. $H_0 = 70 {\,\text{km}}\,{\text{Mpc}^{-1}\text{s}^{-1}}$ is the Hubble constant and $\Omega_M = 0.3, \ \Omega_k = 0, \ \Omega_\Lambda = 0.7$ are the cosmological energy density ratios. We specialize our calculation to a fiducial mass function with $\log_{10} \dot{n}_0 = -4, \ \alpha = 0, \ \mathcal{M}_* = 10^8 M_\odot, \ \beta = 2, \ z_* = 2$. 
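As an illustration, the [*model-AN*]{} rate density of equation (\[eq:pop\]) with the fiducial parameters above can be evaluated as follows. Function names and unit choices are ours; the rate is returned per Mpc$^3$ per unit redshift per dex of chirp mass.

```python
import math

# Cosmological parameters quoted in the text.
H0_GYR = 1.0 / 13.97   # H0 = 70 km/s/Mpc expressed in 1/Gyr
OM, OK, OL = 0.3, 0.0, 0.7

def dtr_dz(z):
    # Equation (eq:dtrdz), in Gyr per unit redshift.
    E = math.sqrt(OM * (1 + z)**3 + OK * (1 + z)**2 + OL)
    return 1.0 / (H0_GYR * (1 + z) * E)

def d2n_dz_dlogM(Mc, z, n0=1e-4, alpha=0.0, Mstar=1e8, beta=2.0, zstar=2.0):
    # Fiducial model-AN merger rate density, equation (eq:pop);
    # n0 in Mpc^-3 Gyr^-1, Mc and Mstar in Msun.
    return (n0 * (Mc / 1e7)**(-alpha) * math.exp(-Mc / Mstar)
            * (1 + z)**beta * math.exp(-z / zstar) * dtr_dz(z))
```

At $z=0$ the Jacobian `dtr_dz` reduces to the Hubble time $1/H_0\approx 14\,$Gyr, and the exponential cut-off at ${\cal M}_*$ strongly suppresses the density of the most massive binaries.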
![Same as the left half of figure \[fig:specsingle\], but the signal has now been integrated over the [*model-NUM*]{} MBHB population described in the text.[]{data-label="fig:specpopnum"}](images/specpopnum10){width="0.95\columnwidth"} [0.45]{} $model-NUM$ ![image](images/specpopnum90){width="0.95\columnwidth"} [0.45]{} $model-AN$ ![image](images/specpopan90){width="0.95\columnwidth"} \ To construct the spectrum we still need to specify a reference eccentricity $e_t$ at a reference binary orbital frequency $f_{t}$. Assuming MBHBs evolving in stellar bulges, we take $f_{t}$ from equation (\[eq:ft\]) assuming the four environment models presented in Section \[sec:fttcsingle\]. As for $e_t$ we make the simplifying assumption that, regardless of redshift, mass and environment, all MBHBs share the same eccentricity at the transition frequency. We take $e_t=0.01, 0.5, 0.9, 0.99$. For each model $h_c(f)$ is computed either via equations (\[eq:hc\],\[eq:dEdf\],\[eq:dEdt\],\[eq:fcombined\],\[eq:ecombined\]), i.e., by solving the binary evolution numerically (including the stellar driven phase) and summing up all the harmonics, or via equation (\[eq:hcanalytic\]), i.e., by employing our fitting spectrum for GW driven binaries defined by $f_{t},e_t$. Results are presented in figures \[fig:specpopnum\] and \[fig:specpopcomp\]. Figure \[fig:specpopnum\] shows the impact of $e_t$ on the spectrum. We notice that in the true spectrum (the dashed lines), changing the population from almost circular to highly eccentric shifts the peak of the spectrum by more than one order of magnitude. As already described, our model does not represent well the low frequency turnover for small $e_t$, however in all cases, the GW signal is well described by equation (\[eq:hcanalytic\]) in the relevant PTA frequency band ($f>1\,$nHz), and the factor of $\approx 10$ peak shift in the eccentricity range $0.5<e_t<0.99$ is fairly well captured. 
As anticipated, typical turnover frequencies due to three body scattering are at sub-nHz scales, and flattening (and eventually turnover) in the GW spectrum is observable only if MBHBs have relatively high eccentricities at transition frequency. Figure \[fig:specpopcomp\] shows the impact of changing the physical parameters describing the efficiency of stellar driven MBHB hardening. Those parameters are fixed to a fiducial value in our model, but can in principle have an impact on the spectrum of the signal. When directly compared to figure \[fig:specpopnum\], the left panel, showing [*model-NUM*]{}, clarifies that none of those parameters affect the signal to a level comparable to $e_t$. The reason is that essentially all of them cause a change in $\rho_i$, and the $f_t$ dependence on $\rho_i$ is extremely mild ($f_t\propto\rho_i^{3/10}$). For example, shrinking the characteristic galaxy radius $a$ by a factor of two is equivalent to increasing $\rho_i$ by a factor of eight, which still results in a shift of $f_t$ by a factor $<2$. In the right set of panels we see that the same applies to [*model-AN*]{}. However, there is a striking difference of almost an order of magnitude in the location of the peak. This is because [*model-NUM*]{} and [*model-AN*]{} have very different underlying MBHB mass functions. This means that the GWB is dominated by MBHBs with different typical masses, which decouple at different $f_t$. So even if the underlying MBHB dynamics and eccentricity at transition $e_t$ are the same, the resulting peak frequency can be significantly shifted. It is therefore clear that the location of the GWB spectrum turnover is sensitive to both $e_t$ and to the parameters defining the MBHB cosmological mass function, and much less sensitive to the details of the stellar hardening process. This also means, however, that in absence of additional features in the spectrum, the determination of $e_t$ is highly degenerate with the shape of the MBHB mass function. 
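The weakness of the $f_t\propto\rho_i^{3/10}$ dependence invoked above amounts to a one-line check:

```python
# Halving the scale radius a raises rho_i by 2^3 = 8 (rho_i scales as
# a^-3 at fixed M and gamma), but shifts the transition frequency
# only by 8^(3/10), i.e. less than a factor of two.
shift = 8.0 ** 0.3
```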
Removal of individual sources ----------------------------- ![Comparison of the spectrum of a population of binaries with different parameters for the [*model-AN*]{}. Parameters in the plot are specified in the sequence $\{{\rm log}_{10}(\dot{n}_0),\beta,z_*,\alpha,{\rm log}_{10}({\cal M}_*),e_t\}$. The solid lines represent the spectrum with the drop in upper mass limit in the high frequency regime, the dashed lines represent the spectrum with no mass limit change.[]{data-label="fig:specmdrop"}](images/specmdrop){width="45.00000%"} Interestingly, as mentioned in the introduction, another feature appearing in the GW spectrum at high frequencies has been pointed out by SVC08, and depends on the shape of the cosmic MBHB mass function. Let us consider circular binaries. In an actual observation, the GW signal generated by a cosmic population of MBHBs at a given observed frequency bin is given by the sum of all MBHBs emitting at that frequency. This is related to the cosmic density of merging binaries via standard cosmology transformations: $$\frac{d^2n}{dzd \log_{10} \mathcal{M}}= \frac{d^3 N}{dz d \log_{10} \mathcal{M} df} \frac{df}{df_r} \frac{df_r}{dt_r} \frac{dt_r}{dz} \frac{dz}{dV_c}$$ where [@Hogg:1999] $$\begin{aligned} \frac{dV_c}{dz} & = \frac{4\pi c}{H_0} \frac{D_M^2}{(\Omega_M(1+z)^3 + \Omega_k(1+z)^2 + \Omega_\Lambda)^{1/2}}, \\ D_M & = \frac{c}{H_0} \int_0^z \frac{dz'}{(\Omega_M(1+z')^3 + \Omega_k(1+z')^2 + \Omega_\Lambda)^{1/2}}, \\ \frac{df_r}{dt_r} & = \frac{96}{5} (\pi)^{8/3} \frac{G^{5/3}}{c^5} \mathcal{M}^{5/3} f_r^{11/3}, \\ \frac{df}{df_r} & = \frac{1}{1+z},\end{aligned}$$ and $dt_r/dz$ is given by equation (\[eq:dtrdz\]). The number of sources emitting in a given observed frequency bin of width $\Delta{f}=1/T$ is therefore given by: $$N_{\Delta{f}}=\int_{f-\Delta f/2}^{f+\Delta f/2} \int_0^{\infty} \int_{0}^{\infty} \frac{d^3 N}{df dz d \log_{10} \mathcal{M}}\, d\log_{10}\mathcal{M}\, dz\, df. 
\label{eq:Nf}$$ Each chirp mass and redshift bin contributes to $h_c$ by an amount proportional to ${\cal M}^{5/6}/(1+z)^{1/6}$ (see, e.g., equation (\[eq:hc0\])). Therefore, it is possible to rank systems in order of decreasing contribution to the GWB. Because of the very small dependence on redshift (since $1<(1+z)^{1/6}<1.3$ for the range $0<z<5$ considered in our models), we simplify the problem by integrating over $z$ and rank systems based on mass only. It is easy to show that $d^2N/{dfd \log_{10} \mathcal{M}}$ is a strongly decreasing function of mass, and is in general $\ll 1$ for the most massive systems when $f>10$nHz. This means that the contribution to the GWB coming from those massive sources at that frequency is in fact given by ’less than one source’. Since the actual GW signal is given by a discrete population of sources, having less than one source in a given frequency bin means that in a typical realization of the Universe that source might or might not be there with a given probability. For the practical purpose of the GWB computation, the contribution from those systems at those frequencies is actually not there, at least not in the form of a stochastic GWB (we refer the reader to SVC08 for a rigorous mathematical treatment of this issue). One can therefore assume that in each bin $\Delta{f}$ the most massive sources integrating to $1$ in number do not contribute to the GWB. 
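Before imposing this ’one source per bin’ condition, the cosmology factors entering the transformations above ($D_M$ and $dV_c/dz$) can be sketched with a simple trapezoidal quadrature; the helper names are ours and a fixed-step rule is accurate enough for this purpose.

```python
import math

C_KMS = 299792.458   # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc
OM, OK, OL = 0.3, 0.0, 0.7

def E(z):
    # Dimensionless Hubble rate entering all the expressions above.
    return math.sqrt(OM * (1 + z)**3 + OK * (1 + z)**2 + OL)

def D_M(z, n=1000):
    # Comoving distance in Mpc: trapezoidal integration of dz'/E(z').
    dz = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, n):
        s += 1.0 / E(i * dz)
    return (C_KMS / H0) * s * dz

def dVc_dz(z):
    # Comoving volume shell, Mpc^3 per unit redshift.
    return 4.0 * math.pi * (C_KMS / H0) * D_M(z)**2 / E(z)
```

At small redshift $D_M$ reduces to the Hubble-law distance $cz/H_0$, a quick sanity check on the quadrature.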
The value $\bar{M}$ corresponding to this condition is implicitly given by imposing $$\begin{split} 1 = & \int_{\bar{M}}^{\infty} \int_{f-\Delta f/2}^{f+\Delta f/2} \int_0^{z_{\rm max}} \frac{d^3 N}{df dz d\log_{10} \mathcal{M}} \\ = & \dot{n}_0 \int_{\bar{M}}^{\infty} \Big(\frac{\mathcal{M}}{10^7 M_\odot}\Big)^{-\alpha} e^{-\mathcal{M}/\mathcal{M}_*} \mathcal{M}^{-5/3} d\log_{10} \mathcal{M} \\ & \int_0^{z_{\rm max}} (1+z)^{\beta+1} e^{-z/z_*} \frac{dV_c}{dz} dz \int_{f-\Delta f/2}^{f+\Delta f/2} \frac{dt_r}{df_r} \mathcal{M}^{5/3} df \end{split} \label{eq:Mmax}$$ where in the last equation we substituted the analytical merger rate density given by equation (\[eq:pop\]). Given an observation time $T$, the frequency spectrum is divided into bins $\Delta{f}=1/T$. $h_c(f)$ is therefore calculated at the centroid of each frequency bin by substituting the upper limit $\bar{M}$ defined by equation (\[eq:Mmax\]) in equation (\[eq:hc\]). Note that in equation (\[eq:Mmax\]) mass and frequency integrals are analytic, and only the redshift integral has to be evaluated numerically. Examples of the GW spectrum obtained including the $\bar{M}$ cut-off are shown in figure \[fig:specmdrop\] for two different mass functions assuming [*model-AN*]{}. Note that the spectrum is significantly reduced only at $f>10$nHz. This justifies a posteriori our assumption of circular GW driven binaries; at such high frequencies even MBHBs that were very eccentric at $f_t$ have become almost circular because of GW backreaction. The figure illustrates that a detection of both spectral features (low frequency turnover and high frequency steepening) might help break degeneracies between $e_t$ and the MBHB mass function. The two displayed models have very different $e_t$ (0.2 vs 0.9), but also quite different mass functions, so that the GWB turnover occurs around the same frequency. 
If the signal can be detected up to $f\approx 10^{-7}$Hz, however, differences in the high frequency slope might help pin down the MBHB mass function and disentangle it from $e_t$. In a companion paper [@2017MNRAS.468..404C], we explore the feasibility of this approach and the implications for astrophysical inference. Discussion and conclusions {#sec:Conclusions} ========================== In this paper we developed a semi-analytical model that allows the fast computation of the stochastic GWB from a population of eccentric GW driven MBHBs. The spectrum computation does not directly take into account any coupling of the MBHB with its stellar and gaseous environment and therefore cannot provide a trustworthy description of the GW signal at all frequencies. The coupling enters in the calculation only by setting the characteristic binary population eccentricity $e_t$ at the transition (or decoupling) frequency $f_t$. We showed, however, that in the plausible astrophysical scenario of MBHBs driven by three body scattering of ambient stars, $f_t<1\,$nHz for MBHBs with ${\cal M}>10^8{{\rm M}_\odot}$ (which dominate the PTA signal), i.e., below a plausible lower limit of the frequency band accessible to future PTA efforts. Therefore environmental coupling only affects the direct computation of the GW signal in a frequency range that is likely inaccessible to current and near future PTAs, justifying our strategy. Our simple semi-analytic model therefore provides a quick and accurate way to construct the GWB from a population of eccentric MBHBs evolving in stellar environment [*in the frequency range relevant for PTA*]{} (see figure \[fig:specpopnum\]). 
Compared to the standard $f^{-2/3}$ power-law, the GWB shows two prominent spectral features: i) a low frequency turnover defined by the coupling with the environment and the typical eccentricity of the MBHBs at the transition frequency (figure \[fig:specpopnum\]), and ii) a high frequency steepening due to small number statistics affecting the most massive MBHBs contributing to the GW signal at high frequency (figure \[fig:specmdrop\], see SVC08). We consider stellar driven MBHBs and we employ, for the first time in a PTA related investigation, realistic density profiles appropriate for massive elliptical galaxies (which are the typical hosts of PTA sources). For example, both [@Sesana:2013CQG] and [@2014MNRAS.442...56R] used a simplistic double power-law model matching an isothermal sphere at $r>r_i$ defined by equation (\[eq:ricondition\]). This model is more centrally concentrated and results in much higher $\rho_i$ and $f_t$ than what is found in the present study. We find that in density profiles that are typical for massive ellipticals, MBHBs can coalesce on timescales of a few Gyr or less (depending on mass and eccentricity) and the typical transition frequency (from stellar driven to GW driven binaries) is located well below $1\,$nHz. Therefore, an observed turnover in the GWB spectrum in the PTA relevant frequency range is likely to be due to high eccentricities rather than to coupling with the environment. In particular, we find that a low frequency bending is likely to be seen for $e_t>0.5$, whereas a proper turnover is characteristic of MBHB populations with $e_t>0.9$. These findings are robust against a variety of plausible host galaxy models; i.e. the properties of the stellar environment affect the location of the bending/turnover of the spectrum only within a factor of two among the cases examined here. This latter point deserves some further consideration.
All the physical parameters describing the environment of the MBHB affect the location of $f_t$ through $\rho_i$. Essentially it is the density at the influence radius of the binary (together with the MBHB mass and eccentricity) that determines $f_t$. Although for a range of astrophysically plausible scenarios the typical $\rho_i$ for a given MBHB is found to vary within a factor of ten, it might be worth considering the possibility of more extreme scenarios. This can be easily incorporated in our treatment as a free multiplicative parameter on $\rho_i$, and we plan to expand our model in this direction in future investigations. This has a number of interesting consequences in terms of astrophysical inference from PTA observations. Firstly, for $e_t<0.5$ no low frequency signature in the GWB spectrum is likely to be seen, making it impossible to distinguish circular from mildly eccentric MBHBs [*on the basis of the GWB spectral shape only*]{}. Secondly, because of the weak $(\rho_i/\sigma)^{3/10}$ dependence of equation (\[eq:ft\]), it will be difficult to place strong constraints on the stellar environment of MBHBs via PTA observations. Lastly, a turnover in the PTA band would be indicative of a highly eccentric ($e_t>0.9$) MBHB population. The turnover frequency depends on both $e_t$ and the MBHB mass function (through the ${\cal M}$ dependence of $f_t$); therefore the detection of a low frequency turnover alone might not place strong constraints on the typical MBHB eccentricity. The high frequency steepening, on the other hand, generally occurs at $f>10\,$nHz, where MBHBs have mostly circularized. Therefore it depends exclusively on the MBHB mass function. A measurement of such steepening can therefore constrain the MBHB mass function and break the degeneracy between mass function and eccentricity affecting the location of the low frequency turnover. Looking at the prospects of performing astrophysical inference from PTA data, our model has several advantages.
First, it directly connects the relevant astrophysical parameters of the MBHB population to the shape of the GWB. As mentioned above, in this first paper, we keep the MBHB mass function and eccentricity at decoupling as free parameters, arguing that other factors affecting the dynamics likely have a minor impact on the signal. Those can, however, be incorporated in our scheme as additional free parameters, if needed. This will eventually allow us to perform astrophysical inference from PTA measurements exploiting a model that self consistently includes all the relevant physics defining the MBHB population. This improves upon the 'proof of principle' type of analysis performed in [@ArzoumanianEtAl_NANOGRAV9yrData:2016], where limits on different model ingredients were placed by adding them individually to the model. For example, by assuming a standard $f^{-2/3}$ power-law, limits were placed on the MBH-host relation. Then a prior on the amplitude was assumed and an ad-hoc broken power law was constructed to put constraints on environmental coupling. Finally, the latter was put aside and eccentricity was added to the model to be constrained separately. Although this is a useful exercise, eventually all ingredients have to be considered at the same time to be meaningfully constrained, and our modelling takes a step in this direction. Second, the model is mostly analytical, involving only a few numerical integrals. The most computationally expensive operations, namely the integration of the MBHB orbital evolution and the summation over all the harmonics of the GW signal, are captured by the simple fitting formula given in equation (\[eq:hcfit\]), together with its simple scaling properties. Therefore, for a given set of parameters $\{{\rm log}_{10}(\dot{n}_0),\beta,z_*,\alpha,{\rm log}_{10}({\cal M}_*),e_t\}$ the GWB can be numerically computed within a few ms.
This makes the model suitable for large parameter space exploration via parallel Markov Chain Monte Carlo or Nested Sampling searches. In a companion paper [@2017MNRAS.468..404C] we explore this possibility and demonstrate which MBHB parameters can be constrained from PTA observations, and with what accuracy. Before closing we stress again that we consider the [*shape of the GWB only*]{}. Additional information about the MBHB population will be encoded in the statistical nature of this background (whether, for example, it is non-Gaussian, anisotropic, non-stationary) and in the properties of individually resolvable sources. A comprehensive account of astrophysical inference from PTA observations will necessarily have to take into account all this combined information, and our current investigation is only the first step in this direction. acknowledgements {#acknowledgements .unnumbered} ================ We acknowledge the support of our colleagues in the European Pulsar Timing Array. A.S. is supported by a Royal Society University Research Fellowship. \[lastpage\] [^1]: E-mail: [^2]: E-mail: [^3]: Note that $f_p$ does not coincide with the peak of the characteristic amplitude, as is also clear from figure \[fig:specshift\].
--- abstract: 'We analyze heavy quark free energies in $2$-flavor QCD at finite temperature and the corresponding heavy quark potential at zero temperature. Static quark anti-quark sources in color singlet, octet and color averaged channels are used to probe thermal modifications of the medium. The temperature dependence of the running coupling, $\alpha_{qq}(r,T)$, is analyzed at short and large distances and is compared to zero temperature as well as quenched calculations. In parts we also compare our results to recent findings in $3$-flavor QCD. We find that the characteristic length scale below which the running coupling shows almost no temperature dependence is almost twice as large as the Debye screening radius. Our analysis supports recent findings which suggest that $\chi_c$ and $\psi^\prime$ are suppressed already at the (pseudo-) critical temperature and thus provide a probe for quark gluon plasma production in heavy ion collision experiments, while $J/\psi$ may survive the transition and will dissolve at higher temperatures.' author: - Olaf Kaczmarek - Felix Zantow bibliography: - 'paper.bib' title: | Static quark anti-quark interactions in zero and finite temperature QCD.\ I. Heavy quark free energies, running coupling and quarkonium binding --- Introduction ============ The study of the fundamental forces between quarks and gluons is an essential key to the understanding of QCD and the occurrence of different phases which are expected to show up when going from low to high temperatures ($T$) and/or baryon number densities. For instance, at small or vanishing temperatures quarks and gluons get confined by the strong force, while at high temperatures asymptotic freedom suggests a quite different QCD medium consisting of rather weakly coupled quarks and gluons, the so-called quark gluon plasma (QGP) [@PLP]. On quite general grounds it is therefore expected that the interactions get modified by temperature.
For the analysis of these modifications of the strong forces, the change in free energy due to the presence of a static quark anti-quark pair separated by a distance $r$ in a QCD-like thermal heat bath has often been used since the early work [@McLerran:1980pk; @McLerran:1981pb]. In fact, the static quark anti-quark free energy, which is obtained from Polyakov loop correlation functions calculated at finite temperature, plays a similarly important role in the discussion of properties of the strong force as the static quark potential does at zero temperature. The properties of this observable (at $T=0$: potential, at $T\neq0$: free energy) at short and intermediate distances ($rT\;\lsim\;1$) are important for the understanding of in-medium modifications of heavy quark bound states. A quantitative analysis of heavy quark free energies becomes of considerable importance for the discussion of possible signals for quark gluon plasma formation in heavy ion collision experiments [@Matsui:1986dk; @RR]. For instance, recent studies of heavy quarkonium systems within potential models use the quark anti-quark free energy to define an appropriate finite temperature potential which is considered in the non-relativistic Schrödinger equation [@Digal:2001iu; @Digal:2001ue; @Wong:2001kn; @Wong:2001uu]. Such calculations, however, do not quite match the results of direct lattice calculations of the quarkonium dissociation temperatures, which have been obtained so far only for the pure gauge theory [@Datta:2003ww; @Asakawa:2003re]. It was pointed out [@Kaczmarek:2002mc] that the free energy ($F$) of a static quark anti-quark pair can be separated into two contributions, the internal energy ($U$) and the entropy ($S$).
The separation of the entropy contribution from the free energy, [*i.e.*]{} the quantity $U=F+TS$, could define an appropriate effective potential at finite temperature[^1] [@Kaczmarek:2002mc; @Zantow:2003ui], $V_{\text{eff}}(r,T)\equiv U$, to be used as input in model calculations and might explain in part the quantitative differences found when comparing solutions of the Schrödinger equation with direct calculations of spectral functions [@Datta:2003ww; @Asakawa:2003re]. First calculations which use the internal energy obtained in our calculations [@Shuryak:2004tx; @Wong:2004kn; @Brown:2004qi; @Park:2005nv] support this expectation. So far, most of these studies consider quenched QCD. Using potentials from the quenched theory, however, will describe the interaction of a heavy quark anti-quark pair in a thermal medium made up of gluons only. It is then important to understand how these results might change for the case of a thermal heat bath which also contains dynamical quarks. On the other hand, it is the large distance property of the heavy quark interaction which is important for our understanding of the bulk properties of the QCD plasma phase, [*e.g.*]{} the screening property of the quark gluon plasma [@Kaczmarek:1999mm; @Kaczmarek:2004gv], the equation of state [@Beinlich:1997ia; @Karsch:2000ps] and the order parameter (Polyakov loop) [@Kaczmarek:2003ph; @Kaczmarek:2002mc; @Dumitru:2004gd; @Dumitru:2003hp]. In all of these studies deviations from perturbative calculations and the ideal gas behavior are expected and were indeed found at temperatures which are only moderately larger than the deconfinement temperature. This calls for quantitative non-perturbative calculations.
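The decomposition itself is elementary thermodynamics: with the entropy $S=-\partial F/\partial T$ at fixed separation, the internal energy follows as $U=F+TS$. A minimal numerical sketch (toy free energy, not lattice data):

```python
# Free-energy decomposition F = U - TS at fixed quark anti-quark separation:
# entropy from S = -dF/dT (finite differences), then U = F + T*S.
# The quadratic F(T) below is a toy curve, not lattice data.
import numpy as np

T = np.linspace(0.8, 2.0, 25)          # temperature in units of T_c
F = -0.5 - 0.3 * (T - 0.8) ** 2        # toy free energy at fixed r

S = -np.gradient(F, T)                 # S = -dF/dT
U = F + T * S                          # internal energy
```

On real data the same finite differences are taken between free energies measured at neighboring temperatures.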
Also in this case most of today's discussions of the bulk thermodynamic properties of the QGP and its apparent deviations from the ideal gas behavior rely on results obtained in lattice studies of the pure gauge theory, although several qualitative differences are to be expected when taking into account the influence of dynamical fermions; for instance, the phase transition in full QCD will appear as a crossover rather than a 'true' phase transition with related singularities in thermodynamic observables. Moreover, in contrast to a steadily increasing confinement interaction in the quenched QCD theory, in full QCD the strong interaction below deconfinement will show a qualitatively different behavior at large quark anti-quark separations. Due to the possibility of pair creation the stringlike interaction between the two test quarks can break, leading to a constant potential and/or free energy already at temperatures below deconfinement [@DeTar:1998qa]. Thus it is quite important to extend our recently developed concepts for the analysis of the quark anti-quark free energies and internal energies in pure gauge theory [@Kaczmarek:2002mc; @Kaczmarek:2003dp; @Kaczmarek:2004gv; @Phd] to the more complex case of QCD with dynamical quarks, and to quantify the qualitative differences which will show up between pure gauge theories and QCD.

  $\beta$   $T/T_c$   \# conf.   $\beta$   $T/T_c$   \# conf.
  --------- --------- ---------- --------- --------- ----------
  3.52      0.76      2000       3.72      1.16      2000
  3.55      0.81      3000       3.75      1.23      1000
  3.58      0.87      3500       3.80      1.36      1000
  3.60      0.90      2000       3.85      1.50      1000
  3.63      0.96      3000       3.90      1.65      1000
  3.65      1.00      4000       3.95      1.81      1000
  3.66      1.02      4000       4.00      1.98      4000
  3.68      1.07      3600       4.43      4.01      1600
  3.70      1.11      2000

  : Sample sizes at each $\beta$ value and the temperature in units of the (pseudo-) critical temperature $T_c$.
\[tab:configs\] For our study of the strong interaction in terms of the quark anti-quark free energies in full QCD, lattice configurations were generated for $2$-flavor QCD ($N_f$=2) on $16^3\times 4$ lattices with bare quark mass $ma$=0.1, [*i.e.*]{} $m/T$=0.4, corresponding to a ratio of pion to rho masses ($m_{\pi}/m_{\rho}$) at the (pseudo-) critical temperature of about $0.7$ ($a$ denotes the lattice spacing) [@Karsch:2000kv]. We have used Symanzik improved gauge and p4-improved staggered fermion actions. This combination of lattice actions is known to reduce the lattice cut-off effects in Polyakov loop correlation functions at small quark anti-quark separations, seen as an improved restoration of the broken rotational symmetry. For any further details of the simulations with these actions see [@Allton:2002zi; @Allton:2003vx]. In Table \[tab:configs\] we summarize our simulation parameters, [*i.e.*]{} the lattice coupling $\beta$, the temperature $T/T_c$ in units of the (pseudo-) critical temperature and the number of configurations used at each $\beta$-value. The pseudo-critical coupling for this action is $\beta_c=3.649(2)$ [@Allton:2002zi]. To set the physical scale we use the string tension, $\sigma a^2$, measured in units of the lattice spacing, obtained from the large distance behavior of the heavy quark potential calculated from smeared Wilson loops at zero temperature [@Karsch:2000kv]. This is also used to define the temperature scale, and $a\sqrt{\sigma}$ is used to set the scale for the free energies and the physical distances. For the conversion to physical units, $\sqrt{\sigma}=420$ MeV is used. For instance, we get $T_c=202(4)$ MeV calculated from $T_c/\sqrt{\sigma}=0.48(1)$ [@Karsch:2000kv]. In parts of our analysis of the quark anti-quark free energies we are also interested in the flavor and finite quark mass dependence.
For this reason we also compare our $2$-flavor QCD results to the recent findings available today in quenched ($N_f$=0) [@Kaczmarek:2002mc; @Kaczmarek:2004gv] and $3$-flavor QCD ($m_\pi/m_\rho\simeq0.4$ [@Peterpriv]) [@Petreczky:2004pz]. Here we use $T_c=270$ MeV for quenched and $T_c=193$ MeV [@Petreczky:2004pz] for the $3$-flavor case. Our results for the color singlet quark anti-quark free energies, $F_1$, and color averaged free energies, $F_{av}$, are summarized in Fig. \[fes\] as a function of distance at several temperatures close to the transition. At distances much smaller than the inverse temperature ($rT\ll1$) the dominant scale is set by the distance and the QCD running coupling will be controlled by the distance. In this limit the thermal modification of the strong interaction will become negligible and the finite temperature free energy will be given by the zero temperature heavy quark potential (solid line). With increasing quark anti-quark separation, however, thermal effects will dominate the behavior of the finite temperature free energies ($rT\gg1$). Qualitative and quantitative differences between the quark anti-quark free energy and internal energy will appear and clarify the important role of the entropy contribution still present in the free energies. The quark anti-quark internal energy will provide a different look at the inter-quark interaction and thermal modifications of the finite temperature quark anti-quark potential. Further details of these modifications of the quark anti-quark free and internal energies will be discussed. This paper is organized as follows: We start in section \[sect0\] with a discussion of the zero temperature heavy quark potential and the coupling. Both will be calculated from $2$-flavor lattice QCD simulations. We analyze in section \[secfreee\] the thermal modifications of the quark anti-quark free energies and discuss quarkonium binding. Section \[seccon\] contains our summary and conclusions.
A detailed discussion of the quark anti-quark internal energy and entropy will be given separately [@pap2]. The zero temperature heavy quark potential and coupling {#sect0} ======================================================= Heavy quark potential at $T=0$ ------------------------------ For the determination of the heavy quark potential at zero temperature, $V(r)$, we have used the measurements of large smeared Wilson loops given in [@Karsch:2000kv] for the same simulation parameters ($N_f$=2 and $ma=0.1$) and action. To eliminate the divergent self-energy contributions we matched these data for all $\beta$-values (different $\beta$-values correspond to different values of the lattice spacing $a$) at large distances to the bosonic string potential, $$\begin{aligned} V(r) &=& - \frac{\pi}{12}\frac{1}{r} + \sigma r \nonumber\\ &\equiv&-\frac{4}{3}\frac{\alpha_{\text{str}}}{r}+\sigma r\;, \label{string-cornell}\end{aligned}$$ where we already have separated the Casimir factor so that $\alpha_{\text{str}}\equiv\pi/16$. In this normalization any divergent contributions to the lattice potential are eliminated uniquely. In Fig. \[peik\] we show our results together with the heavy quark potential from the string picture (dashed line). One can see that the data are well described by Eq. (\[string-cornell\]) at large distances, [*i.e.*]{} $r\sqrt{\sigma}\;\gsim\;0.8$, corresponding to $r\;\gsim\;0.4$ fm. At these distances we see no major difference between the 2-flavor QCD potential obtained from Wilson loops and the quenched QCD potential which can be well parameterized within the string model already for $r\;\gsim\;0.4$ fm [@Necco:2001xg; @Luscher:2002qv]. In fact, we also do not see any signal for string breaking in the zero temperature QCD heavy quark potential. 
This is expected due to the fact that the Wilson loop operator used here for the calculation of the $T=0$ potential has only a small overlap with states where string breaking occurs [@Bernard:2001tz; @Pennanen:2000yk]. Moreover, the distances for which we analyze the data for the QCD potential are below $r\;\simeq\;1.2$ fm, the distance at which string breaking is expected to set in at zero temperature and similar quark masses [@Pennanen:2000yk]. The coupling at $T=0$ {#couplt=0} --------------------- Deviations from the string model and from the pure gauge potential, however, are clearly expected to become apparent in the 2-flavor QCD potential at small distances and may already be seen from the short distance part in Fig. \[peik\]. These deviations are expected to arise from an asymptotic weakening of the QCD coupling, [*i.e.*]{} $\alpha=\alpha(r)$, and to some extent also from the effect of including dynamical quarks, [*i.e.*]{} from leading order perturbation theory one expects $$\begin{aligned} \alpha(r) \simeq \frac{1}{8\pi} \frac{1}{\beta_0 \log \left(1/(r \Lambda_{\text{QCD}})\right)}\;, \label{runningcoupling}\end{aligned}$$ with $$\begin{aligned} \beta_0 = \frac{33-2N_f}{48 \pi^2}\;,\end{aligned}$$ where $N_f$ is the number of flavors and $\Lambda_{\text{QCD}}$ denotes the corresponding QCD-$\Lambda$-scale. The data in Fig. \[peik\](b) show a slightly steeper slope at distances below $r\sqrt{\sigma}\simeq0.5$ compared to the pure gauge potential given in Ref. [@Necco:2001xg], indicating that the QCD coupling gets stronger in the entire distance range analyzed here when including dynamical quarks. This is in qualitative agreement with (\[runningcoupling\]). To include the effect of a stronger Coulombic part in the QCD potential we test the Cornell parameterization, $$\begin{aligned} \frac{V(r)}{\sqrt{\sigma}} = -\frac{4}{3}\frac{\alpha}{r\sqrt{\sigma}} + r \sqrt{\sigma} \label{t=0ansatz}\;,\end{aligned}$$ with a free parameter $\alpha$. From a best-fit analysis of Eq.
(\[t=0ansatz\]) to the data ranging from $0.2\;\lsim\;r\sqrt{\sigma}\;\lsim\;2.6$ we find $$\begin{aligned} \alpha&=&0.212 (3)\;.\label{res}\end{aligned}$$ This may already indicate that the logarithmic weakening of the coupling with decreasing distance will not too strongly influence the properties of the QCD potential at these distances, [*i.e.*]{} at $r\;\gsim\;0.1$ fm. However, the value of $\alpha$ is moderately larger than $\alpha_{\text{str}}\;\simeq\;0.196$ introduced above. To compare the relative size of $\alpha$ in full QCD to $\alpha$ in the quenched theory we have again performed a best-fit analysis of the quenched zero temperature potential given in [@Necco:2001xg] using the Ansatz given in Eq. (\[t=0ansatz\]) and a similar distance range. Here we find $\alpha_{\text{quenched}} = 0.195(1)$, which is again smaller than the value of the QCD coupling but quite comparable to $\alpha_{\text{str}}$. In earlier studies of the heavy quark potentials in pure gauge theories and full QCD even larger values for the couplings were reported [@Glassner:1996xi; @Allton:1998gi; @Aoki:1998sb; @Bali:2000vr; @AliKhan:2001tx; @Aoki:2002uc]. To avoid any confusion concerning the value of $\alpha$ we should stress that $\alpha$ should not be confused with the QCD coupling constant $\alpha_{QCD}$; it is simply a fit parameter indicating the 'average strength' of the Coulomb part in the Cornell potential. The QCD coupling could be properly identified only in the perturbative distance regime and will be a running coupling, [*i.e.*]{} $\alpha_{\text{QCD}}=\alpha_{\text{QCD}}(r)$. When approaching the short distance perturbative regime the Cornell form will overestimate the value of the coupling due to the perturbative logarithmic weakening of the latter.
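The best-fit analysis can be sketched with a standard least-squares fit of the Cornell Ansatz; the data below are synthetic (a Cornell curve with small added scatter), not our Wilson-loop measurements:

```python
# Sketch of the Cornell best fit of Eq. (t=0ansatz) in string-tension units,
# V/sqrt(sigma) = -(4/3) alpha/(r sqrt(sigma)) + r sqrt(sigma), on synthetic
# data (not the paper's Wilson-loop measurements).
import numpy as np
from scipy.optimize import curve_fit

def cornell(x, alpha):                 # x = r*sqrt(sigma), dimensionless
    return -(4.0 / 3.0) * alpha / x + x

x = np.linspace(0.2, 2.6, 40)          # the fit window quoted in the text
v = cornell(x, 0.212) + 0.002 * np.sin(9.0 * x)   # mock data, small scatter

(alpha_fit,), _ = curve_fit(cornell, x, v, p0=[0.2])

# the string-picture normalization of Eq. (string-cornell) for comparison:
# -(4/3) alpha_str / r = -pi/(12 r)  fixes  alpha_str = pi/16 ~ 0.196
alpha_str = np.pi / 16.0
```

The fit recovers the input $\alpha$ despite the scatter, and the comparison value $\pi/16\simeq0.196$ matches the $\alpha_{\text{str}}$ quoted in the text.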
To analyze the short distance properties of the QCD potential and the coupling in more detail, [*i.e.*]{} for $r\;\lsim\;0.4$ fm, and to firmly establish the onset of its perturbative weakening with decreasing distance, it is customary to use non-perturbative definitions of the running coupling. Following the discussions on the running of the QCD coupling [@Bali:1992ru; @Peter:1997me; @Schroder:1998vy; @Necco:2001xg; @Necco:2001gh], it appears most convenient to study the QCD force, [*i.e.*]{} $dV(r)/dr$, rather than the QCD potential. In this case one defines the QCD coupling in the so-called $qq$-scheme, $$\begin{aligned} \alpha_{qq}(r)&\equiv&\frac{3}{4}r^2\frac{dV(r)}{dr}\;. \label{alp_qq}\end{aligned}$$ In this scheme any undetermined constant contribution to the heavy quark potential cancels out. Moreover, the large distance, non-perturbative confinement contribution to $\alpha_{qq}(r)$ is positive and allows for a smooth matching of the perturbative short distance coupling to the non-perturbative large distance confinement signal. In any case, however, in the non-perturbative regime the value of the coupling will depend on the observable used for its definition. We have calculated the derivatives of the potential with respect to the distance, $dV(r)/dr$, by using finite difference approximations for neighboring distances on the lattice for each $\beta$-value separately. Our results for $\alpha_{qq}(r)$ as a function of distance in physical units for 2-flavor QCD are summarized in Fig. \[peiks\]. The symbols for the $\beta$-values are chosen as in Fig. \[peik\](a). We again show in that figure the corresponding line for the Cornell fit (solid line). At large distances, $r\;\gsim\;0.4$ fm, the data clearly mimic the non-perturbative confinement part of the QCD force, $\alpha_{qq}(r)\simeq3r^2\sigma/4$. We also compare our data to the recent high statistics calculation in pure gauge theory (thick solid line) [@Necco:2001xg].
These data are available for $r\;\gsim\;0.1$ fm and within the statistics of the QCD data no significant differences could be identified between the QCD and pure gauge data for $r\;\gsim\;0.4$ fm. At smaller distances ($r\;\lsim\;0.4$ fm), however, the data show some enhancement compared to the coupling in quenched QCD. The data below $0.1$ fm, moreover, fall below the large distance Cornell fit. This may indicate the logarithmic weakening of the coupling. At distances smaller than $0.1$ fm we therefore expect the QCD potential to be influenced by the weakening of the coupling, and $\alpha_{qq}(r)$ will approach values clearly smaller than the $\alpha$ deduced from the Cornell Ansatz. Unfortunately we cannot, at present, go to smaller distances to clearly demonstrate this behavior with our data in 2-flavor QCD. Moreover, at small distances cut-off effects may also influence our analysis of the coupling and more detailed studies are required here. Despite these uncertainties, however, earlier studies of the coupling in pure gauge theory [@Necco:2001xg; @Necco:2001gh; @Kaczmarek:2004gv] have shown that the perturbative logarithmic weakening becomes important already at distances smaller than $0.2$ fm, and contact with perturbation theory could be established. As most of our lattice data for the finite temperature quark anti-quark free energies do not reach distances smaller than $0.1$ fm, we use in the following the Cornell form deduced in (\[t=0ansatz\]) as reference for the zero temperature heavy quark potential. Quark anti-quark free energy {#secfreee} ============================ We will analyze here the temperature dependence of the change in free energy due to the presence of a heavy (static) quark anti-quark pair in a 2-flavor QCD heat bath.
The static quark sources are described by the Polyakov loop, $$\begin{aligned} L(\vec{x})&=&\frac{1}{3}{{\rm Tr}}W(\vec{x})\;,\label{pol}\end{aligned}$$ with $$\begin{aligned} W(\vec{x}) = \prod_{\tau=1}^{N_\tau} U_0(\vec{x},\tau)\;,\label{loop}\end{aligned}$$ where we already have used the lattice formulation with $U_0(\vec{x},\tau) \in SU(3)$ being defined on the lattice link in time direction. The change in free energy due to the presence of the static color sources in color singlet ($F_1$) and color octet ($F_8$) states can be calculated in terms of Polyakov loop correlation functions [@McLerran:1981pb; @Philipsen:2002az; @Nadkarni:1986as; @Nadkarni:1986cz], $$\begin{aligned} e^{-F_1(r)/T+C}&=&\frac{1}{3} {{\rm Tr}}\langle W(\vec{x}) W^{\dagger}(\vec{y}) \rangle \label{f1}\;,\\ e^{-F_8(r)/T+C}&=&\frac{1}{8}\langle {{\rm Tr}}W(\vec{x}) {{\rm Tr}}W^{\dagger}(\vec{y})\rangle- \nonumber \\ && \frac{1}{24} {{\rm Tr}}\langle W(\vec{x}) W^{\dagger}(\vec{y}) \rangle\; , \label{f8}\end{aligned}$$ where $r=|\vec{x}-\vec{y}|$. As it stands, the correlation functions for the color singlet and octet free energies are gauge dependent quantities and thus gauge fixing is needed to define them properly. Here, we follow [@Philipsen:2002az] and fix to Coulomb gauge. In parts we also consider the so-called color averaged free energy defined through the manifestly gauge independent correlation function of two Polyakov loops, $$\begin{aligned} e^{-F_{\bar q q}(r)/T+C}&=&\frac{1}{9}\langle {{\rm Tr}}W(\vec{x}) {{\rm Tr}}W^{\dagger}(0) \rangle \nonumber\\ &=&\langle L(\vec{x})L^\dagger(\vec{y})\rangle\; . \label{fav}\end{aligned}$$ The constant $C$ appearing in (\[f1\]), (\[f8\]) and (\[fav\]) also includes divergent self-energy contributions which require renormalization. 
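For illustration, the operators in Eqs. (\[pol\]) and (\[loop\]) can be evaluated directly: the Polyakov loop at a site is one third of the trace of the ordered product of the $N_\tau$ temporal links. The sketch below uses random SU(3) matrices in place of links from a real gauge configuration, so no physics is implied; it only demonstrates the construction:

```python
# Toy evaluation of the Polyakov loop, L = (1/3) Tr prod_tau U_0(x,tau),
# with random SU(3) matrices standing in for the temporal gauge links.
import numpy as np

rng = np.random.default_rng(0)

def random_su3():
    """Random SU(3) matrix: QR of a complex Gaussian matrix, R-diagonal
    phases absorbed, determinant rescaled to 1."""
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))  # unitary, fixed phases
    return q / np.linalg.det(q) ** (1.0 / 3.0)        # project to det = 1

N_tau = 4
W = np.eye(3, dtype=complex)
for _ in range(N_tau):                 # ordered product along the time direction
    W = W @ random_su3()

L = np.trace(W) / 3.0                  # Polyakov loop at this site
```

On an actual configuration, the gauge independent correlator of Eq. (\[fav\]) follows by averaging $L(\vec{x})L^\dagger(\vec{y})$ over all site pairs at fixed separation $r=|\vec{x}-\vec{y}|$.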
Following [@Kaczmarek:2002mc] the free energies have been normalized such that the color singlet free energy approaches the heavy quark potential (solid line) at the smallest distance available on the lattice, $F_1(r/a=1, T)=V(r)$. In Sec. \[renormalization\] we will explain the connection of this procedure to the renormalized Polyakov loop and show the resulting renormalization constants in Table \[tab:ren\]. Some results for the color singlet, octet and averaged quark anti-quark free energies are shown in Fig. \[saos\] for one temperature below and one temperature above deconfinement. The free energies calculated in different color channels coincide at large distances and clearly show the effects of string breaking below and color screening above deconfinement. The octet free energies above $T_c$ are repulsive at all distances, while below $T_c$ the distances analyzed here are not small enough to show the (perturbatively) expected repulsive short distance part. Similar results are obtained at all temperatures analyzed here. In the remainder of this section we study in detail the thermal modifications of these free energies from short to large distances. We begin our analysis of the free energies at small distances in Sec. \[couplatt\] with a discussion of the running coupling, which leads to the renormalization of the free energies in Sec. \[renormalization\]. The separation of small and large distances which characterizes sudden qualitative changes in the free energy will be discussed in Sec. \[secshort\]. Large distance modifications of the quark anti-quark free energy will be studied in Sec. \[colorscreening\] at temperatures above and in Sec. \[stringbreaking\] at temperatures below deconfinement. Our analysis of thermal modifications of the strong interaction will mainly be performed for the color singlet free energy.
In this case a rather simple Coulombic $r$-dependence is suggested by perturbation theory at $T=0$ and short distances as well as for large distances at high temperatures. In particular, a proper $r$-dependence of $F_{\bar q q}$ is difficult to establish [@Kaczmarek:2002mc]. This may be attributed to contributions from higher excited states [@Jahn:2004qr] or to the repulsive contributions from states with static charges fixed in an octet configuration. The running coupling at $T\neq0$ {#couplatt} -------------------------------- We extend here our studies of the coupling at zero temperature to finite temperatures below and above deconfinement, following the conceptual approach given in [@Kaczmarek:2004gv]. In this case the appropriate observable is the color singlet quark anti-quark free energy and its derivative. We use the perturbative short and large distance relations from one gluon exchange [@Nadkarni:1986as; @Nadkarni:1986cz; @McLerran:1981pb], [*i.e.*]{} in the limit $r\Lambda_{\text{QCD}}\ll1$ zero temperature perturbation theory suggests $$\begin{aligned} F_1(r,T)\;\equiv\;V(r)&\simeq&-\frac{4}{3}\frac{\alpha(r)}{r}\;,\label{alp_rT1}\end{aligned}$$ while high temperature perturbation theory, [*i.e.*]{} $rT\gg1$ and $T$ well above $T_c$, yields $$\begin{aligned} F_1(r,T)&\simeq&-\frac{4}{3}\frac{\alpha(T)}{r}e^{-m_D(T)r}\;.\label{alp_rT2}\end{aligned}$$ In both relations we have neglected any constant contributions to the free energies which, in particular, at high temperatures will dominate the large distance behavior of the free energies. Moreover, we have already anticipated here the running of the couplings with the expected dominant scales $r$ and $T$ in both limits.
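The interplay of these two limits can be illustrated with a toy singlet free energy that combines a Coulomb term with a Debye-screened confinement term. Applying the $qq$-scheme definition of Eq. (\[alp\_qq\]) to it (the functional form and all parameters are purely illustrative, not fits to our data) shows how a maximum of the coupling arises at intermediate distances:

```python
# Toy singlet free energy with Debye-screened confinement (string-tension
# units): F_1(r) = -(4/3) alpha0/r + (sigma/m_D) (1 - exp(-m_D r)).
# The qq-scheme coupling (3/4) r^2 dF_1/dr then equals
# alpha0 + (3/4) sigma r^2 exp(-m_D r): it rises with the confinement term,
# peaks at r = 2/m_D, and is screened away at large r. Illustrative only.
import numpy as np
from scipy.interpolate import CubicSpline

alpha0, sigma, mD = 0.2, 1.0, 2.0
r = np.linspace(0.05, 4.0, 400)
F1 = -(4.0 / 3.0) * alpha0 / r + (sigma / mD) * (1.0 - np.exp(-mD * r))

spline = CubicSpline(r, F1)                 # spline in r, as for the data,
alpha_qq = 0.75 * r ** 2 * spline(r, 1)     # then differentiate (nu=1)

r_peak = r[np.argmax(alpha_qq)]             # close to the analytic 2/m_D = 1
```

The same spline-then-differentiate procedure is what is applied to the measured free energies below.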
At finite temperature we define the running coupling in analogy to $T=0$ as (see [@Kaczmarek:2002mc; @Kaczmarek:2004gv]) $$\begin{aligned} \alpha_{qq}(r,T)&\equiv&\frac{3}{4}r^2 \frac{dF_1(r,T)}{dr}\;.\label{alp_rT}\end{aligned}$$ With this definition any undetermined constant contributions to the free energies are eliminated, and the coupling defined here at finite temperature recovers the coupling at zero temperature defined in (\[alp\_qq\]) in the limit of small distances. Therefore $\alpha_{qq}(r,T)$ will show the (zero temperature) weakening in the short distance perturbative regime. In the large distance limit, however, the coupling will be dominated by Eq. (\[alp\_rT2\]) and will be suppressed by color screening, $\alpha_{qq}(r,T)\simeq\alpha(T)\exp(-m_D(T)r)$, $rT\gg1$. It will thus exhibit a maximum at some intermediate distance. Although at large distances $\alpha_{qq}(r,T)$ is suppressed by color screening, so that non-perturbative effects strongly control its value, the temperature dependence of the coupling, $\alpha(T)$, can in this limit be extracted by directly comparing the singlet free energy with the high temperature perturbative relation above deconfinement. Results from such an analysis will be given in Sec. \[colorscreening\]. We calculated the derivative, $dF_1/dr$, of the color singlet free energies with respect to distance by using cubic spline approximations of the $r$-dependence of the free energies for each temperature. We then performed the derivatives on the basis of these splines. Our results for $\alpha_{qq}(r,T)$ calculated in this way are shown in Fig. \[couplt\] and are compared to the coupling at zero temperature discussed already in Sec. \[couplt=0\]. Here the thin solid line corresponds to the coupling in the Cornell Ansatz deduced in Eq. (\[t=0ansatz\]). 
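The numerical procedure just described, interpolating $F_1(r,T)$ with a cubic spline and forming $\alpha_{qq}(r,T)$ from its derivative via Eq. (\[alp\_rT\]), can be sketched in Python. This is only an illustration: the tabulated values stand in for lattice data and are generated from a purely Coulombic form with constant $\alpha=0.3$, an assumption chosen so that the recovered coupling is known in advance.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical stand-in for lattice data: a purely Coulombic singlet
# free energy F_1(r) = -(4/3) * alpha / r with constant alpha = 0.3,
# so alpha_qq(r) = (3/4) r^2 dF_1/dr should recover alpha everywhere.
alpha_true = 0.3
r = np.linspace(0.1, 1.0, 50)          # distances (e.g. in fm)
F1 = -(4.0 / 3.0) * alpha_true / r

spline = CubicSpline(r, F1)            # interpolate F_1(r) per temperature
dF1_dr = spline(r, 1)                  # first derivative of the spline

alpha_qq = 0.75 * r**2 * dF1_dr        # Eq. (alp_rT)
```

With real lattice data one spline per temperature would be built and the maximum of `alpha_qq` located to extract $r_{max}$.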
We again show in this figure the results from $SU(3)$-lattice (thick line) and perturbative (dashed line) calculations at zero temperature from [@Necco:2001gh; @Necco:2001xg]. The strong $r$-dependence of the running coupling near $T_c$ observed already in pure gauge theory [@Kaczmarek:2004gv] is also visible in 2-flavor QCD. Although our data for 2-flavor QCD do not allow for a detailed quantitative analysis of the running coupling at smaller distances, the qualitative behavior is in quite good agreement with the recent quenched results. At large distances the running coupling shows a strong temperature dependence which sets in at shorter separations with increasing temperature. At temperatures close to but above $T_c$, $\alpha_{qq}(r,T)$ coincides with $\alpha_{qq}(r)$ already at separations $r\;\simeq\;0.4$ fm and clearly mimics here the confinement part of $\alpha_{qq}(r)$. This is also apparent in quenched QCD [@Kaczmarek:2004gv]. Remnants of the confinement part of the QCD force may survive the deconfinement transition and could play an important role for the discussion of non-perturbative aspects of quark anti-quark interactions at temperatures moderately above $T_c$ [@Shuryak:2004tx; @Brown:2004qi]. A clear separation of the effects usually described by the concepts of color screening ($T\;\gsim\;T_c$) and of string breaking ($T\;\lsim\;T_c$) is difficult to establish at temperatures in the close vicinity of the confinement deconfinement crossover. We also analyzed the size of the maximum that the running coupling $\alpha_{qq}(r,T)$ at fixed temperature exhibits at a certain distance, $r_{max}$, [*i.e.*]{} we identify a temperature dependent coupling, $\tilde{\alpha}_{qq}(T)$, defined as $$\begin{aligned} \tilde{\alpha}_{qq}(T)&\equiv&\alpha_{qq}(r_{max},T)\;.\label{alp_Tdef} \end{aligned}$$ The values for $r_{max}$ will be discussed in Sec. \[secshort\] (see Fig. \[onset\]). 
Values for $\tilde{\alpha}_{qq}(T)$ are also available in pure gauge theory [@Kaczmarek:2004gv] at temperatures above deconfinement[^2]. Our results for $\tilde{\alpha}_{qq}(T)$ in $2$-flavor QCD and pure gauge theory are shown in Fig. \[alp\_qqT\] as a function of temperature, $T/T_c$. At temperatures above deconfinement we cannot identify significant differences between the data from pure gauge and 2-flavor QCD[^3]. Only at temperatures quite close to but above the phase transition do small differences between full and quenched QCD become visible in $\tilde{\alpha}_{qq}(T)$. Nonetheless, the value of $\tilde{\alpha}_{qq}(T)$ drops from about $0.5$ at temperatures only moderately larger than the transition temperature, $T\;\gsim\;1.2T_c$, to a value of about $0.3$ at $2T_c$. This change in $\tilde{\alpha}_{qq}(T)$ with temperature calculated in $2$-flavor QCD does not appear to be too dramatic and might indeed be described by the $2$-loop perturbative coupling, $$\begin{aligned} g_{\text{2-loop}}^{-2}(T)=2\beta_0\ln\left(\frac{\mu T} {\Lambda_{\overline{MS}}}\right)+\frac{\beta_1}{\beta_0} \ln\left(2\ln\left(\frac{\mu T}{\Lambda_{\overline{MS}}}\right)\right),\nonumber\\ \label{2loop}\end{aligned}$$ with $$\begin{aligned} \beta_0&=&\frac{1}{16\pi^2}\left(11-\frac{2N_f}{3}\right)\;,\nonumber\\ \beta_1&=&\frac{1}{(16\pi^2)^2}\left(102-\frac{38N_f}{3}\right)\;,\nonumber\end{aligned}$$ assuming vanishing quark masses. In view of the ambiguity in setting the scale in perturbation theory, $\mu T$, we performed a best-fit analysis to fix the scale for the entire temperature range, $1.2\;\lsim\;T/T_c\;\lsim\;2$. We find here $\mu=1.14(2)\pi$ with $T_c/\Lambda_{\overline{MS}}=0.77(21)$ using $T_c\simeq202(4)$ MeV [@Karsch:2000ps] and $\Lambda_{\overline{MS}}\simeq 261(17)$ MeV [@Gockeler:2005rv], which is still in agreement with the lower limit of the range of scales one commonly uses to fix perturbative couplings, $\mu=\pi,...,4\pi$. This is shown by the solid line (fit) in Fig. 
\[alp\_qqT\] including the error band estimated through $\mu=\pi$ to $\mu=4\pi$ and the error on $T_c/\Lambda_{\overline{MS}}$ (dotted lines). We will return to a discussion of the temperature dependence of the coupling above deconfinement in Sec. \[colorscreening\]. At temperatures in the vicinity of and below the phase transition temperature, $T\;\lsim\;1.2T_c$, the behavior of $\tilde{\alpha}_{qq}(T)$ is, however, quite different from the perturbative logarithmic change with temperature. The values for $\tilde{\alpha}_{qq}(T)$ rapidly grow here with decreasing temperature and approach non-perturbatively large values. This again shows that $\alpha_{qq}(r,T)$ mimics the confinement part of the zero temperature force still at relatively large distances and that this behavior persists up to temperatures close to but above deconfinement. This demonstrates the persistence of confinement forces at $T\;\gsim\;T_c$ and intermediate distances, and the difficulty of clearly separating the effects usually described by color screening and string breaking in the vicinity of the phase transition. We note here, however, that similar to the coupling in quenched QCD [@Kaczmarek:2004gv] the coupling which describes the short distance Coulombic part of the free energies is almost temperature independent in this temperature regime, [*i.e.*]{} even at relatively large distances the free energies shown in Fig. \[fes\] show no or only little temperature dependence below deconfinement. Renormalization of the quark anti-quark free energies and Polyakov loop {#renormalization} ----------------------------------------------------------------------- On the lattice the expectation value of the Polyakov loop and its correlation functions suffer from linear divergences. This leads to vanishing expectation values in the continuum limit, $a\to0$, at all temperatures. 
To obtain a meaningful physical observable, a proper renormalization is required [@Kaczmarek:2002mc; @Dumitru:2004gd; @deForcrand:2001nd]. We follow here the conceptual approach suggested in [@Kaczmarek:2002mc; @Phd] and extend our earlier studies in pure gauge theory to the present case of 2-flavor QCD. First experiences with this renormalization method in full QCD were already reported in [@Kaczmarek:2003ph; @Petreczky:2004pz]. In the limit of short distances, $r\ll1/T$, thermal modifications of the quark anti-quark free energy become negligible and the running coupling is controlled by distance only. Thus we can fix the free energies at small distances to the heavy quark potential, $F_1(r\ll1/T,T)\simeq V(r)$, and the renormalization group equation (RGE) will lead to $$\begin{aligned} \lim_{r\to0}T\frac{dF_1(r,T)}{dT}&=&0\;,\label{RGEf}\end{aligned}$$ where we have already assumed that the continuum limit, $a\to0$, has been taken. On the basis of the analysis of the coupling in Sec. \[couplatt\] and experiences with the quark anti-quark free energy in pure gauge theory [@Kaczmarek:2002mc; @Kaczmarek:2004gv] we assume here that the color singlet free energies in 2-flavor QCD calculated on finite lattices with temporal extent $N_\tau=4$ have already approached appropriate small distances, $r\ll1/T$, allowing for renormalization. The (renormalized) color singlet quark anti-quark free energies, $F_1(r,T)$, and the heavy quark potential, $V(r)$ (line), were already shown in Fig. \[fes\](a) as a function of distance at several temperatures close to the phase transition. From that figure it can be seen that the quark anti-quark free energy fixed at small distances approaches finite, temperature dependent plateau values at large distances, signaling color screening ($T\;\gsim\;T_c$) and string breaking ($T<T_c$). These plateau values, $F_\infty(T)\equiv F_1(r\to\infty,T)$, decrease with increasing temperature in the temperature range analyzed here. 
In general it is expected that $F_\infty(T)$ will continue to increase at smaller temperatures and will smoothly match $V(r\to\infty)$ [@Digal:2001iu] at zero temperature, while it will become negative at high temperature and asymptotically is expected to become proportional to $g^3T$ [@Gava:1981qd; @Kaczmarek:2002mc]. The plateau value of the quark anti-quark free energy at large distances can be used to define non-perturbatively the renormalized Polyakov loop [@Kaczmarek:2002mc], [ *i.e.*]{} $$\begin{aligned} L^{\text{ren}}(T)&=&\exp\left(-\frac{F_\infty(T)}{2T}\right)\;.\label{renPloop}\end{aligned}$$ As the unrenormalized free energies approach $|\langle L\rangle|^2$ at large distances, this may be reinterpreted in terms of a renormalization constant that has been determined by demanding (\[RGEf\]) to hold at short distances [@Dumitru:2003hp; @Zantow:2003uh], $$\begin{aligned} L^{\text{ren}}&\equiv&|\langle \left(Z(g,m)\right)^{N_\tau} L \rangle|\;.\end{aligned}$$ The values of $Z(g,m)$ for our simulation parameters are summarized in Table \[tab:ren\]. The normalization constants for the free energies appearing in (\[f1\]-\[fav\]) are then given by $$\begin{aligned} C=-2 N_\tau Z(g,m).\end{aligned}$$ An analysis of the renormalized Polyakov loop expectation value in high temperature perturbation theory [@Gava:1981qd] suggests at (resummed) leading order[^4] the behavior $$\begin{aligned} L^{\text{ren}}(T)&\simeq&1+\frac{2}{3}\frac{m_D(T)}{T}\alpha(T)\label{lrenpert}\;\end{aligned}$$ in the fundamental representation. Thus high temperature perturbation theory suggests that the limiting value at infinite temperature, $L^{\text{ren}}(T\to\infty)=1$, is approached from above. An expansion of (\[renPloop\]) then suggests $F_\infty(T)\simeq-\frac{4}{3}m_D(T)\alpha(T)\simeq-{\cal O}(g^3T)$. We thus expect $F_\infty(T)\to-\infty$ in the high temperature limit. 
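Eq. (\[renPloop\]) relates the large-distance plateau value directly to the renormalized Polyakov loop. As a quick numerical check one can insert the values quoted elsewhere in the text, $F_\infty(T_c)\simeq575$ MeV and $T_c\simeq202$ MeV; treating these central values as exact is an assumption made only for this sketch.

```python
import math

# Sketch of Eq. (renPloop): L_ren(T) = exp(-F_inf(T) / (2 T)).
# F_inf(T_c) ~ 575 MeV and T_c ~ 202 MeV are central values quoted in
# the text; using them without their errors is an assumption.
def l_ren(F_inf_MeV, T_MeV):
    """Renormalized Polyakov loop from the plateau value F_inf(T)."""
    return math.exp(-F_inf_MeV / (2.0 * T_MeV))

L_at_Tc = l_ren(575.0, 202.0)   # roughly 0.24
```

The result, $L^{\text{ren}}(T_c)\approx0.24$, is consistent with the entry at $T/T_c=1.00$ in Table \[tab:ren\].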
  $\beta$   $Z(g,m)$    $T/T_c$   $L^{\text{ren}}(T)$
  --------- ----------- --------- ---------------------
  3.52      1.333(19)   0.76      0.033(2)
  3.55      1.351(10)   0.81      0.049(2)
  3.60      1.370(08)   0.90      0.093(2)
  3.63      1.376(07)   0.96      0.160(3)
  3.65      1.376(07)   1.00      0.241(5)
  3.66      1.375(06)   1.02      0.290(5)
  3.68      1.370(06)   1.07      0.398(7)
  3.72      1.374(02)   1.16      0.514(3)
  3.75      1.379(02)   1.23      0.575(2)
  3.80      1.386(01)   1.36      0.656(2)
  3.85      1.390(01)   1.50      0.722(2)
  3.90      1.394(01)   1.65      0.779(1)
  3.95      1.396(13)   1.81      0.828(3)
  4.00      1.397(01)   1.98      0.874(1)
  4.43      1.378(01)   4.01      1.108(2)

  : Renormalization constants, $Z(g,m)$, versus $\beta$ and the renormalized Polyakov loop, $L^{\text{ren}}$, versus $T/T_c$ for 2-flavor QCD with quark mass $m/T=0.4$. \[tab:ren\]

To avoid here any fit to the complicated $r$- and $T$-dependence of the quark anti-quark free energy we estimate the value of $F_\infty(T)$ from the quark anti-quark free energies at the largest separation available on a finite lattice, $r=N_\sigma/2$. As the free energies in this renormalization scheme coincide at large distances in the different color channels, we determine $F_\infty(T)$ from the color averaged free energies, [*i.e.*]{} $F_\infty(T)\equiv F_{\bar q q}(r=N_\sigma/2,T)$. This is a manifestly gauge invariant quantity. In Fig. \[renpol\] we show the results for $L^{\text{ren}}$ in 2-flavor QCD (filled symbols) compared to the quenched results (open symbols) obtained in [@Kaczmarek:2002mc]. In quenched QCD $L^{\text{ren}}$ is zero below $T_c$ as the quark anti-quark free energy signals permanent confinement, [*i.e.*]{} $F_\infty(T\;\lsim\; T_c)=\infty$ in the infinite volume limit, while it jumps to a finite value just above $T_c$. The singularity in the temperature dependence of $L^{\text{ren}}(T)$ located at $T_c$ clearly signals the first order phase transition in $SU(3)$ gauge theory. The renormalized Polyakov loop in $2$-flavor QCD, however, is no longer zero below $T_c$. 
Due to string breaking the quark anti-quark free energies approach constant values leading to non-zero values of $L^{\text{ren}}$. Although the renormalized Polyakov loop calculated in full QCD is no longer an order parameter for the confinement deconfinement phase transition, it still shows a quite different behavior in the two phases and a clear signal for a qualitative change in the vicinity of the transition. Above deconfinement $L^{\text{ren}}(T)$ yields finite values also in quenched QCD. In the temperature range $1\;\lsim\;T/T_c\;\lsim\;2$ we find that in 2-flavor QCD $L^{\text{ren}}$ lies below the results in quenched QCD. This, however, may change at higher temperatures. The value for $L^{\text{ren}}$ at $4T_c$ is larger than unity and we find indication for $L^{\text{ren}}_{\mathrm{2-flavor}}(4T_c)\;\gsim\;L^{\text{ren}}_{\mathrm{quenched}}(4T_c)$. The properties of $L^{\text{ren}}$, however, clearly depend on the relative normalization of the quark anti-quark free energies in quenched and full QCD. Short vs. large distances {#secshort} ------------------------- Having discussed the quark anti-quark free energies at quite small distances where no or only little temperature effects influence the behavior of the free energies and at quite large distances where aside from $T$ no other scale controls the free energy, we now turn to a discussion of medium effects at intermediate distances. The aim is to gain insight into distance scales that can be used to quantify at which distances temperature effects in the quark anti-quark free energies set in and may influence the in-medium properties of heavy quark bound states in the quark gluon plasma. It can be seen from Fig. \[fes\](a) that the color singlet free energy changes rapidly from the Coulomb-like short distance behavior to an almost constant value at large distances. 
This change reflects the in-medium properties of the heavy quark anti-quark pair, [*i.e.*]{} the string breaking property and color screening. To characterize this rapid onset of in-medium modifications in the free energies we introduced in Ref. [@Kaczmarek:2002mc] a scale, $r_{med}$, defined as the distance at which the value of the $T=0$ potential reaches the value $F_\infty(T)$, [*i.e.*]{} $$\begin{aligned} V(r_{med})&\equiv&F_\infty(T)\;.\label{rmed}\end{aligned}$$ As $F_\infty(T)$ is a gauge invariant observable, this relation provides a non-perturbative, gauge invariant definition of the scale $r_{med}$. While in pure gauge theory the color singlet free energies signal permanent confinement at temperatures below $T_c$, leading to a proper definition of this scale only above deconfinement, in full QCD it can be deduced in the whole temperature range. On the other hand, the change in the coupling $\alpha_{qq}(r,T)$ as a function of distance at fixed temperature mimics the qualitative change in the interaction when going from small to large distances, and the coupling exhibits a maximum at some intermediate distance. The location of this maximum, $r_{max}$, can also be used to identify a scale that characterizes the separation between the short distance vacuum-like and the large distance medium modified interaction between the static quarks [@Kaczmarek:2004gv]. Due to the rapid crossover from short to large distance behavior (see Fig. \[fes\](a)) it should be obvious that $r_{med}$ and $r_{max}$ define similar scales; however, by construction $r_{max}\;\lsim\;r_{med}$. To gain information about the flavor and quark mass dependence of our analysis of the scales in QCD, we also took data for $F_\infty(T)$ from Ref. [@Petreczky:2004pz] at smaller quark mass, $m_\pi/m_\rho\simeq0.4$ [@Peterpriv], and calculated $r_{med}$ in $3$-flavor QCD with respect to the parameterization of $V(r)$ given in [@Petreczky:2004pz]. 
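Given a parameterization of the zero temperature potential, Eq. (\[rmed\]) reduces to a one-dimensional root finding problem. The sketch below assumes a Cornell form $V(r)=-\frac{4\alpha}{3r}+\sigma r$ without a constant term; the value $\alpha=0.212$ is the fit result quoted in the Summary, while the string tension $\sigma\simeq1.07$ GeV/fm and the value used for $F_\infty$ are illustrative assumptions, not fitted numbers from this work.

```python
from scipy.optimize import brentq

# Hypothetical Cornell parameterization of the T=0 potential (no constant
# term): V(r) = -4*alpha/(3*r) + sigma*r, with r in fm and V in GeV.
alpha = 0.212     # Coulomb coefficient quoted in the text
sigma = 1.07      # string tension in GeV/fm; an illustrative assumption

def V(r):
    return -4.0 * alpha / (3.0 * r) + sigma * r

F_inf = 0.575     # GeV; illustrative plateau value near T_c

# Solve V(r_med) = F_inf(T) on a bracket where V - F_inf changes sign.
r_med = brentq(lambda r: V(r) - F_inf, 0.05, 3.0)
```

With a table of $F_\infty(T)$ values this gives $r_{med}$ at each temperature.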
It is interesting to note here that a study of the flavor and quark mass dependence of $r_{med}$ and $r_{max}$ is independent of any undetermined, and maybe flavor and/or quark mass dependent, overall normalization of the corresponding $V(r)$ at zero temperature. Our results for $r_{max}$ ($N_f$=0,2) and $r_{med}$ ($N_f$=0,2,3) are summarized in Fig. \[onset\] as a function of $T/T_c$. It can be seen that the value $r_{max}\simeq 0.6$ fm is approached at the phase transition in both quenched and $2$-flavor QCD, and in both cases it drops to about $0.25$ fm at temperatures about $2T_c$. No or only little differences between $r_{max}$ calculated from pure gauge and $2$-flavor QCD could be identified at temperatures above deconfinement. The temperature dependence of $r_{med}$ is similar to that of $r_{max}$ and again we see no major differences between pure gauge ($N_f$=0) and QCD ($N_f$=2,3) results. In the vicinity of the transition temperature and above, both scales almost coincide. In fact, above deconfinement the flavor and finite quark mass dependence of $r_{med}$ appears quite negligible. At high temperature we expect $r_{med}\simeq1/gT$ [@Kaczmarek:2002mc] while in terms of $r_{max}$ we found agreement with $r_{max}=0.48(1)$ fm $T_c/T$ (solid lines) at temperatures ranging up to $12T_c$ [@Kaczmarek:2004gv]. Note that both scales clearly lie well above the smallest distance attainable by us on the lattice, $rT\equiv1/N_\tau=1/4$. This distance is shown by the lower dashed line in Fig. \[onset\]. At temperatures below deconfinement $r_{max}$ and $r_{med}$ rapidly increase and fall apart when going to smaller temperatures. In fact, at temperatures below deconfinement we clearly see differences between $r_{med}$ calculated in $2$- and $3$-flavor QCD. To some extent this is expected due to the smaller quark mass used in the $3$-flavor QCD study, as the string breaking energy gets reduced. 
It is, however, difficult to clearly separate here a finite quark mass effect from flavor dependence. In both cases $r_{med}$ approaches, already at $T\simeq0.8T_c$, values quite similar to those reported for the distance at which string breaking at $T=0$ is expected at similar quark masses. In $2$-flavor QCD at $T=0$ and quark mass $m_\pi/m_\rho\simeq0.7$ the string is expected to break at about $1.2-1.4$ fm [@Pennanen:2000yk] while at smaller quark mass, $m_\pi/m_\rho\simeq0.4$, it might break earlier [@Bernard:2001tz]. In contrast to the complicated $r$- and $T$-dependence of the free energy at intermediate distances, high temperature perturbation theory suggests a color screened Coulomb behavior for the singlet free energy at large distances. To analyze this in more detail we show in Fig. \[screeningf\] the subtracted free energies, $r(F_1(\infty,T)-F_1(r,T))$. It can be seen that this quantity indeed decays exponentially at large distances, $rT\;\gsim\;1$. This allows us to study the temperature dependence of the parameters $\alpha(T)$ and $m_D(T)$ given in Eq. (\[alp\_rT2\]). At intermediate and small distances, however, deviations from this behavior are expected and can clearly be seen; they are to some extent due to the onset of the $r$-dependence of the coupling at small distances. These deviations from the simple exponential decay become important already below some characteristic scale, $r_d$, which we can roughly identify here as $r_dT\simeq0.8\;-\;1$. This scale, which defines a lower limit for the applicability of high temperature perturbation theory, is shown by the upper dashed line in Fig. \[onset\] ($r_dT=1$). It lies well above the scales $r_{med}$ and $r_{max}$ which characterize the onset of medium modifications on the quark anti-quark free energy. 
Screening properties above deconfinement and the coupling {#colorscreening} --------------------------------------------------------- ### Screening properties and quarkonium binding We follow here the approach commonly used [@Attig:1988ey; @Nakamura:2004wr; @Kaczmarek:2004gv] and define the non-perturbative screening mass, $m_D(T)$, and the temperature dependent coupling, $\alpha(T)$, from the exponential fall-off of the color singlet free energies at large distances, $rT\;\gsim\;0.8\;$-$\;1$. A consistent definition of screening masses, however, is accompanied by a proper definition of the temperature dependent coupling, and only at sufficiently high temperatures is contact with perturbation theory expected [@Kaczmarek:2004gv; @Laine:2005ai]. A similar discussion of the color averaged quark anti-quark free energy is given in Refs. [@Kaczmarek:1999mm; @Petreczky:2001pd; @Zantow:2001yf]. We used the Ansatz (\[alp\_rT2\]) to perform a best-fit analysis of the large distance part of the color singlet free energies, [*i.e.*]{} we used fit functions of the form $$\begin{aligned} F_1(r,T)-F_1(r=\infty,T) = - \frac{4a(T)}{3r}e^{-m(T)r}, \label{screenfit}\end{aligned}$$ where the two parameters $a(T)$ and $m(T)$ are used to estimate the coupling $\alpha(T)$ and the Debye mass $m_D(T)$, respectively. The fit range was chosen with respect to our discussion in Sec. \[secshort\], [*i.e.*]{} $rT\;\gsim\;0.8\;$-$\;1$, where we varied the lower fit limit within this range and averaged over the resulting values. The temperature dependent coupling $\alpha(T)$ defined here will be discussed later. Our results for the screening mass, $m_D(T)/T$, are summarized in Fig. \[screenmass\] as a function of $T/T_c$ and are compared to the results obtained in pure gauge theory [@Kaczmarek:2004gv]. The data obtained from our $2$-flavor QCD calculations are somewhat larger than in quenched QCD. 
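The fit described above can be sketched with a standard least-squares routine. Since the actual lattice data are not reproduced here, the example generates synthetic subtracted free energies from the Ansatz (\[screenfit\]) with known parameters and recovers them; the numerical values of $a$, $m$ and the fit window are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Ansatz (screenfit): F_1(r,T) - F_1(infinity,T) = -(4 a / (3 r)) exp(-m r).
def screened(r, a, m):
    return -(4.0 * a) / (3.0 * r) * np.exp(-m * r)

# Synthetic stand-in for lattice data; a_true, m_true are assumptions.
a_true, m_true = 0.35, 2.5            # coupling and screening mass (1/fm)
r = np.linspace(0.4, 2.0, 40)         # large-distance fit window (fm)
F1_sub = screened(r, a_true, m_true)

# Two-parameter least-squares fit for a(T) and m(T).
(a_fit, m_fit), cov = curve_fit(screened, r, F1_sub, p0=(0.5, 1.0))
```

In practice one would repeat the fit while varying the lower limit of the window, as described above, and average the resulting parameters.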
Although we are not expecting perturbation theory to hold at these small temperatures, this enhancement is in qualitative agreement with leading order perturbation theory, [*i.e.*]{} $$\begin{aligned} \frac{m_D(T)}{T}&=&\left(1 + \frac{N_f}{6}\right)^{1/2}\;g(T)\;.\label{LOscreen}\end{aligned}$$ However, using the $2$-loop formula (\[2loop\]) to estimate the temperature dependence of the coupling leads to significantly smaller values for $m_D/T$, even when setting the scale by $\mu=\pi$ which commonly is used as an upper bound for the perturbative coupling. We therefore follow [@Kaczmarek:2004gv; @Kaczmarek:1999mm] and introduce a multiplicative constant, $A$, [*i.e.*]{} we allow for a non-perturbative correction defined as $$\begin{aligned} \frac{m_D(T)}{T}&\equiv& A\;\left(1 + \frac{N_f}{6}\right)^{1/2}\;g_{2-loop}(T)\;,\end{aligned}$$ and fix this constant by best agreement with the non-perturbative data for $m_D(T)/T$ at temperatures $T/T_c\;\gsim\;1.2$. Here the scale in the perturbative coupling is fixed by $\mu=2\pi$. This analysis leads to $A=1.417(19)$ and is shown as a solid line with error band (dotted lines). Similar results were already reported in [@Kaczmarek:1999mm; @Kaczmarek:2004gv] for screening masses in pure gauge theory. Using the same fit range, i.e. $T=1.2T_c\;-\;4.1T_c$, for the quenched results, we obtain $A=1.515(17)$. To avoid here any confusion concerning $A$ we note that its value will crucially depend on the temperature range used to determine it. When approaching the perturbative high temperature limit, $A\to 1$ is expected. It is interesting to note here that the difference in $m_D/T$ apparent in Fig. \[screenmass\] between $2$-flavor QCD and pure gauge theory disappears when converting $m_D(T)$ to physical units. This is obvious from Fig. \[screenradius\], which shows the Debye screening radius, $r_D\equiv1/m_D$. 
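Putting Eq. (\[2loop\]) together with the corrected leading-order relation, the screening mass estimate reads $m_D/T = A\,(1+N_f/6)^{1/2}\,g_{\text{2-loop}}(T)$. A sketch with the fitted central values quoted in the text ($\mu=2\pi$, $T_c/\Lambda_{\overline{MS}}=0.77$, $A=1.417$ for $N_f=2$) is given below; neglecting their errors and extrapolating outside the fitted temperature range are assumptions of this illustration.

```python
import math

N_f = 2
beta0 = (11.0 - 2.0 * N_f / 3.0) / (16.0 * math.pi**2)
beta1 = (102.0 - 38.0 * N_f / 3.0) / (16.0 * math.pi**2)**2

def g2_inverse(T_over_Lambda, mu=2.0 * math.pi):
    """Inverse squared 2-loop coupling, Eq. (2loop), at scale mu*T."""
    L = math.log(mu * T_over_Lambda)
    return 2.0 * beta0 * L + (beta1 / beta0) * math.log(2.0 * L)

def mD_over_T(T_over_Tc, Tc_over_Lambda=0.77, A=1.417):
    """Screening mass estimate m_D/T = A * sqrt(1 + N_f/6) * g_2loop(T)."""
    g = 1.0 / math.sqrt(g2_inverse(T_over_Tc * Tc_over_Lambda))
    return A * math.sqrt(1.0 + N_f / 6.0) * g
```

At $T=2T_c$ this evaluates to $m_D/T\approx2.8$; by construction of the fit, such values track the lattice data within the fitted range.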
In general $r_D$ is used to characterize the distance at which medium modifications of the quark anti-quark interaction become dominant. It often is used to describe the screening effects in phenomenological inter-quark potentials at high temperatures. From perturbation theory one expects that the screening radius will drop like $1/gT$. A definition of a screening radius, however, will again depend on the ambiguities present in the non-perturbative definition of a screening mass, $m_D(T)$. A different quantity that characterizes the onset of medium effects, $r_{med}$, has already been introduced in Sec. \[secshort\]; this quantity is also expected to drop like $1/(gT)$ at high temperatures and could be considered to give an upper limit for the screening radius [@Kaczmarek:2004gv]. In Fig. \[screenradius\] we compare both length scales as a function of temperature, $T/T_c$, with the findings in quenched QCD [@Kaczmarek:2002mc; @Kaczmarek:2004gv]. It can be seen that in the temperature range analyzed here $r_D(T)\;<\;r_{med}(T)$, and no or only little differences between the results from quenched ($N_f$=0) and full ($N_f$=2,3) QCD could be identified. Again we stress that in the perturbative high temperature limit differences are expected to arise, as expressed by Eq. (\[LOscreen\]). It is important to realize that at distances well below $r_{med}$ medium effects become suppressed and the color singlet free energy almost coincides with the zero temperature heavy quark potential (see Fig. \[fes\](a)). In particular, the screening radius estimated from the inverse Debye mass corresponds to distances which are only moderately larger than the smallest distance available in our calculations (compare with the lower dotted line in Fig. \[onset\]). In view of the almost temperature independent behavior of the color singlet free energies at small distances (Fig. 
\[fes\](a)) it could be misleading to quantify the dominant screening length of the medium in terms of $r_D\equiv1/m_D$. On the other hand the color averaged free energies already show a strong temperature dependence at distances similar to $r_D$ (see Fig. \[fes\](b)). Following [@Karsch:2005ex] we also included in Fig. \[screenradius\] the mean charge radii of the most prominent charmonium states, $J/\psi$, $\chi_c$ and $\psi'$, as horizontal lines. These lines characterize the averaged separation $r$ which enters the effective potential in potential model calculations. It thus is reasonable to expect that the temperature at which these radii equal $r_{med}$ could give a rough estimate for the onset of thermal effects in the charmonium states. It appears quite reasonable from this view that the $J/\psi$ indeed may survive the phase transition [@Asakawa:2003re; @Datta:2003ww], while the $\chi_c$ and $\psi'$ are expected to show significant thermal modifications at temperatures close to the transition. Recent potential model calculations support this analysis [@Wong:2004kn]. The wave functions for these states, however, will also reach out to larger distances [@Jacobs:1986gv] and this estimate can only be taken as a first indication for the relevant temperatures. Further details on this issue, including also bottomonium states, have been given in Ref. [@Karsch:2005ex]. We will return to a discussion of thermal modifications of quarkonium states in Ref. [@pap2] using finite temperature quark anti-quark energies. ### Temperature dependence of $\alpha_s$ We finally discuss here the temperature dependence of the QCD coupling, $\alpha(T)$, extracted from the fits used to determine also $m_D$, [*i.e.*]{} from Eq. (\[screenfit\]). From fits of the free energies above deconfinement we find the values shown in Fig. \[cTcomp\] as a function of $T/T_c$, given by the filled circles. 
In this figure we also show the temperature dependent coupling $\tilde{\alpha}_{qq}(T)$ introduced in Sec. \[couplatt\]. It can clearly be seen that the values for the two couplings are quite different, $\tilde{\alpha}_{qq}(T)\;\gsim\;\alpha(T)$, at temperatures close to but above deconfinement, while this difference rapidly decreases with increasing temperature. This again demonstrates the ambiguity in defining the coupling in the non-perturbative temperature range due to the different non-perturbative contributions to the observable used for its definition [@Kaczmarek:2004gv]. In fact, at temperatures close to the phase transition temperature we find quite large values for $\alpha(T)$, [*i.e.*]{} $\alpha(T)\simeq 2\; -\; 3$ in the vicinity of $T_c$, while it drops rapidly to values smaller than unity, [*i.e.*]{} $\alpha(T)\;\lsim\;1$, already at temperatures $T/T_c\;\gsim\;1.5$. A similar behavior was also found in [@Kaczmarek:2004gv] for the coupling in pure gauge theory (open symbols). In fact, no or only a marginal enhancement of the values calculated in full QCD compared to the values in quenched QCD could be identified here at temperatures $T\;\lsim\;1.5T_c$. We stress again that the large values for $\alpha(T)$ found here should not be confused with the coupling that characterizes the short distance Coulomb part of $F_1(r,T)$. The latter is almost temperature independent at small distances and can to some extent be described by the zero temperature coupling. String breaking below deconfinement {#stringbreaking} ----------------------------------- We finally discuss the large distance properties of the free energies below $T_c$. 
In contrast to the quark anti-quark free energy in quenched QCD, where the string between the quark anti-quark pair cannot break and the free energies rise linearly at large separations, in full QCD the string between two static color charges can break due to the possibility of spontaneously generating $q\bar q$-pairs from the vacuum. Therefore the quark anti-quark free energy reaches a constant value also below $T_c$. In Fig. \[fes\] this behavior is clearly seen. The distances at which the quark anti-quark free energies approach an almost constant value move to smaller separations at higher temperatures. This can also be seen from the temperature dependence of $r_{med}$ in Fig. \[onset\] at temperatures below $T_c$. By construction $r_{med}$ describes a distance which can be used to estimate a lower limit for the distance at which string breaking will set in. An estimate of the string breaking radius at $T=0$ can be obtained from the lightest heavy-light meson, $r_{\text{breaking}}\simeq1.2-1.4$ fm [@Pennanen:2000yk], and is shown on the left side of Fig. \[onset\] within the dotted band. It can be seen that $r_{med}$ in $2$-flavor QCD does indeed approach such values at temperatures $T\;\lsim\;0.8T_c$. This suggests that the temperature dependence in $2$-flavor QCD is small below the smallest temperature analyzed here, $0.76T_c$. This can also be seen from the behavior of $F_\infty(T)$ shown in Fig. \[string1\] (see also Fig. \[fes\](a)) compared to the value commonly expected at $T=0$. We use $V(r_{\text{breaking}})\simeq1000-1200$ MeV as reference for the zero temperature string breaking energy at quark mass $m_\pi/m_\rho\simeq0.7$. This estimate is shown on the left side of Fig. \[string1\] as the dotted band. A similar behavior is expected for the free energies in $3$-flavor QCD and smaller quark mass, $m_\pi/m_\rho\simeq0.4$. As also seen in Fig. \[string1\], the values for $F_\infty(T)$ are smaller than in $2$-flavor QCD with larger quark mass. 
This may indicate that string breaking sets in at smaller distances for smaller quark masses. However, in [@Karsch:2000kv] no mass dependence (in the color averaged free energies) was observed below the quark mass analyzed by us ($m/T$=0.4). At present it is difficult to judge whether the differences seen for $2$- and $3$-flavor QCD for $T/T_c<1$ are due to quark mass or flavor dependence of the string breaking. Although $F_\infty(T)$ is still close to $V(r_{\text{breaking}})$ at $T\sim0.8T_c$, it rapidly drops to about half of this value in the vicinity of the phase transition, $F_\infty(T_c)\simeq575$ MeV. This value is almost the same in $2$- and $3$-flavor QCD; we find $F_\infty^{N_f=2}(T_c)\simeq575(15)$ MeV and $F_\infty^{N_f=3}(T_c)\simeq548(20)$ MeV. It is interesting to note that the values of $F_\infty(T)$ in quenched QCD ($N_f$=0) also approach a similar value at temperatures just above $T_c$. We find $F_\infty(T_c^+)\simeq481(4)$ MeV, where $T_c^+\equiv1.02T_c$ denotes the closest temperature above $T_c$ analyzed in quenched QCD. Of course, the value for $F_\infty(T_c^+)$ will increase when going to temperatures even closer to $T_c$. The flavor and quark mass dependence of $F_\infty(T)$, including also higher temperatures, will be discussed in more detail in Ref. [@pap2]. Summary and Conclusions {#seccon} ======================= Our analysis of the zero temperature heavy quark potential, $V(r)$, calculated in $2$-flavor lattice QCD using large Wilson loops [@Karsch:2000kv] shows no signal for string breaking at distances below $1.3$ fm. This is quite consistent with earlier findings [@Bernard:2001tz; @Pennanen:2000yk]. The $r$-dependence of $V(r)$ becomes comparable to the potential from the bosonic string picture already at distances larger than $0.4$ fm. Similar findings have also been reported in lattice studies of the potential in quenched QCD [@Necco:2001xg; @Luscher:2002qv].
At those distances, $0.4$ fm$\;\lsim\;r\;\lsim\;1.5$ fm, we find no or only little differences between the lattice data for the potential in quenched ($N_f$=0) QCD given in Ref. [@Necco:2001xg] and in full ($N_f$=2) QCD. At smaller distances, however, deviations from the large distance Coulomb term predicted by the string picture, $\alpha_{\text{str}}\simeq0.196$, are found here when performing a best-fit analysis with a free Cornell Ansatz. We find $\alpha\simeq0.212(3)$, which could describe the data down to $r\;\gsim\;0.1$ fm. By analyzing the coupling in the $qq$-scheme defined through the force, $dV(r)/dr$, a small enhancement compared to the coupling in quenched QCD is found for $r\;\lsim\;0.4$ fm. At distances substantially smaller than $0.1$ fm the logarithmic weakening of the coupling enters and will dominate the $r$-dependence of $V(r)$. The observed running of the coupling may already signal the onset of the short distance perturbative regime. This is also evident from quenched QCD lattice studies of $V(r)$ [@Necco:2001gh]. The running coupling at finite temperature defined in the $qq$-scheme using the derivative of the color singlet quark anti-quark free energy, $dF_1(r,T)/dr$, shows only little qualitative and quantitative differences when changing from pure gauge [@Kaczmarek:2004gv] to full QCD at temperatures well above deconfinement. Again, at small distances the running coupling is controlled by the distance and becomes comparable to $\alpha_{qq}(r)$ at zero temperature. The properties of $\alpha_{qq}(r,T)$ at temperatures in the vicinity of the phase transition are to a large extent controlled by the confinement signal at zero temperature. A clear separation of the different effects usually described by the concepts of color screening ($T\;\gsim\;T_c$) and string breaking ($T\;\lsim\;T_c$) is difficult in the crossover region.
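The statement that the large values of the coupling near $T_c$ reflect the confinement part rather than the short distance Coulomb part can be made concrete with the Cornell parametrization. The following sketch evaluates the $qq$-scheme coupling $\alpha_{qq}(r)=\frac{3}{4}r^2\,dV/dr$ from a Cornell Ansatz; the Coulomb coefficient $\alpha=0.212$ is the fit value quoted above, while setting the string tension to unity is our choice of units, not taken from the data:

```python
import math

def cornell_V(r, alpha=0.212, sigma=1.0):
    """Cornell Ansatz V(r) = -(4/3) alpha / r + sigma * r,
    in units where the string tension sigma = 1."""
    return -4.0 * alpha / (3.0 * r) + sigma * r

def alpha_qq(r, alpha=0.212, sigma=1.0, h=1e-6):
    """qq-scheme coupling defined through the force:
    alpha_qq(r) = (3/4) r^2 dV/dr, via a central difference."""
    dVdr = (cornell_V(r + h, alpha, sigma) - cornell_V(r - h, alpha, sigma)) / (2.0 * h)
    return 0.75 * r * r * dVdr

# Analytically alpha_qq(r) = alpha + (3/4) sigma r^2: at short distances the
# Coulomb coefficient alpha is recovered, while at large r the confining term
# takes over -- the origin of the large effective couplings near T_c.
```

The quadratic growth of $\alpha_{qq}$ with $r$ in this toy form illustrates why an effective coupling extracted at intermediate distances need not resemble the nearly temperature-independent Coulomb coupling at small $r$.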
Remnants of the confinement part of the QCD forces may in part dominate the non-perturbative properties of the QCD plasma at temperatures only moderately larger than $T_c$. This supports similar findings in recent studies of the quark anti-quark free energies in quenched QCD [@Kaczmarek:2004gv]. The properties of the quark anti-quark free energy and the coupling at small distances thus again allow for non-perturbative renormalization of the free energy and Polyakov loop [@Kaczmarek:2002mc]. The crossover from confinement to deconfinement is clearly signaled by the Polyakov loop through a rapid increase at temperatures close to $T_c$. String breaking dominates the quark anti-quark free energies at temperatures well below deconfinement in all color channels, leading to finite values of the Polyakov loop. The string breaking energy, $F_\infty(T)$, and the distance where string breaking sets in decrease with increasing temperature. The plateau value $F_\infty(T)$ approaches about $95\%$ of the value one usually estimates at zero temperature, $V(r_{\text{breaking}})\simeq1.1$ GeV [@Pennanen:2000yk; @Bernard:2001tz], already for $T\;\simeq\;0.8T_c$. We thus expect that the change in quark anti-quark free energies is only small when going to smaller temperatures and the quark anti-quark free energy, $F_1(r,T)$, will show only small differences from the heavy quark potential at $T=0$, $V(r)$. Significant thermal modifications of heavy quark bound states can thus be expected only for temperatures above $0.8T_c$. Our analysis of $r_{med}$ indeed suggests a qualitatively similar behavior for the free energies in $3$-flavor QCD. This can also be seen from the behavior of $r_{med}$ shown in Fig. \[screenradius\].
At temperatures well above the (pseudo-) critical temperature, [*i.e.*]{} $1.2\;\lsim\;T/T_c\;\lsim\;4$, no or only little qualitative differences in the thermal properties of the quark anti-quark free energies calculated in quenched ($N_f$=0) and full ($N_f$=2,3) QCD could be established here when converting the observables to physical units. Color screening clearly dominates the quark anti-quark free energy at large distances, and screening masses, which are non-perturbatively determined from the exponential fall-off of the color singlet free energies, could be extracted (for $N_f$=2). In accordance with earlier findings in quenched QCD [@Kaczmarek:1999mm; @Kaczmarek:2004gv] we find substantially larger values for the screening masses than given by leading order perturbation theory. The values of the screening masses, $m_D(T)$, again show only marginal differences as a function of $T/T_c$ compared to the values found in quenched QCD (see also Fig. \[screenradius\]). The large screening mass defines a rather small screening radius, $r_D\equiv1/m_D$, which refers to a length scale at which the singlet free energy shows almost no deviations from the heavy quark potential at zero temperature. It thus might be misleading to quantify the length scale of the QCD plasma at which temperature effects dominate thermal modifications of heavy quark bound states with the observable $r_D\equiv1/m_D$ in the non-perturbative temperature regime close to but above $T_c$. On the other hand, the color averaged free energies do show a strong temperature dependence at distances which could be characterized by $1/m_D$. In view of color changing processes as a mechanism for direct quarkonium dissociation [@Kharzeev:1994pz], the discussion of the color averaged free energy could become important. We have also compared $r_D$ and $r_{med}$ in Fig. \[screenradius\] to the expected mean squared charge radii of some charmonium states.
It is reasonable to expect that the temperatures at which these radii equal $r_{med}$ give a first indication of the temperature at which thermal modifications become important in the charmonium states. It thus appears quite reasonable that the $J/\psi$ will survive the transition, while $\chi_c$ and $\psi^\prime$ are expected to show strong thermal effects at temperatures in the vicinity of the transition; this may support recent findings [@Wong:2004kn; @Asakawa:2003re; @Petreczky:2003js]. Of course, the wave functions of these states will also reach out to larger distances and thus our analysis can only be taken as a first indication of the relevant temperatures. We will return to this issue in Ref. [@pap2]. The analysis of bound states using, for instance, the Schrödinger equation will do better in this respect. It can, however, clearly be seen from Fig. \[screenradius\] that although $r_{med}(T_c)\simeq0.7$ fm is approached almost in common for $N_f$=0,2,3, the values separate for $N_f$=2,3 at smaller temperatures. It thus could be difficult to determine suppression patterns from free energies for quarkonium states which are substantially larger than $0.7$ fm independently of $N_f$ and/or finite quark mass. The analysis presented here has been performed for a single quark mass value that corresponds to a pion mass of about $770$ MeV ($m_\pi/m_\rho\simeq0.7$). In Ref. [@Karsch:2000kv], however, no major quark mass effects were visible in color averaged free energies below this quark mass value. The comparisons of $r_{med}$ and $F_\infty(T)$ calculated in $2$-flavor ($m_\pi/m_\rho\simeq0.7$) with results calculated in $3$-flavor QCD ($m_\pi/m_\rho\simeq0.4$ [@Peterpriv]) support this property. While at temperatures above deconfinement no or only little differences in this observable can be identified, at temperatures below $T_c$ differences can be seen.
To what extent these differences are due to the smaller quark masses used in the $3$-flavor case, or whether they reflect a flavor dependence of the string breaking distance, requires further investigation. The present analysis was carried out on one lattice size ($16^3\times 4$) and therefore an extrapolation to the continuum limit could not be performed with the current data. However, the analysis of the quenched free energies [@Kaczmarek:2002mc; @Kaczmarek:2004gv], where no major differences between the $N_\tau=4$ and $N_\tau=8$ results were visible, and the use of improved actions suggest that cut-off effects might be small. Despite these uncertainties and the fact that parts of our comparisons to results from quenched QCD are on a qualitative level, we find quite important information for the study of heavy quark bound states in the QCD plasma phase. At temperatures well above $T_c$, [*i.e.*]{} $1.2\;\lsim\;T/T_c\;\lsim\;4$, no or only little differences appear between results calculated in quenched and full QCD. This might suggest that using thermal parameters extracted from free or internal energy in quenched QCD as input for model calculations of heavy quark bound states [@Shuryak:2004tx; @Wong:2004kn] is a reasonable approximation. Furthermore this also supports the investigation of heavy quarkonia in quenched lattice QCD calculations using the analysis of spectral functions [@Datta:2003ww; @Asakawa:2003re; @Asakawa:2002xj]. On the other hand, however, most of our $2$- and $3$-flavor QCD results differ from quenched calculations at temperatures in the vicinity of and below the phase transition. Due to these qualitative differences, relying on results from quenched QCD could complicate the discussion of possible signals of quark gluon plasma production in heavy ion collision experiments when temperatures and/or densities close to the transition become important.
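For orientation on the screening-mass comparison made above, the leading-order Debye mass can be sketched numerically. This is a minimal illustration assuming the standard leading-order form $m_D/T = g\sqrt{N_c/3 + N_f/6}$ (whose pure-gauge limit is the expression quoted in the footnotes); the coupling $g$ is left as a free parameter since the paper's Eq. (\[LOscreen\]) and its scale setting are not reproduced in this excerpt:

```python
import math

def m_D_over_T(g, Nf, Nc=3):
    """Leading-order Debye mass in units of the temperature:
    m_D / T = g * sqrt(Nc/3 + Nf/6)."""
    return g * math.sqrt(Nc / 3.0 + Nf / 6.0)

def r_D_times_T(g, Nf, Nc=3):
    """Screening radius r_D = 1/m_D, in units of 1/T."""
    return 1.0 / m_D_over_T(g, Nf, Nc)

# At fixed g, going from quenched (Nf=0) to 2-flavor QCD increases m_D by
# sqrt(4/3) ~ 1.15 at this order, while the lattice values quoted above
# show only marginal differences between quenched and full QCD.
```

The larger the screening mass, the smaller the radius $r_D=1/m_D$; the non-perturbative enhancement of $m_D$ found on the lattice is what makes $r_D$ such a short length scale near $T_c$.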
We thank the Bielefeld-Swansea collaboration for providing us with their configurations, with special thanks to S. Ejiri. We would like to thank E. Laermann and F. Karsch for many fruitful discussions. F.Z. thanks P. Petreczky for his continuous support. We thank K. Petrov and P. Petreczky for sending us the data of Ref. [@Petreczky:2004pz]. This work has partly been supported by DFG under grant FOR 339/2-1 and by BMBF under grant No. 06BI102 and partly by contract DE-AC02-98CH10886 with the U.S. Department of Energy. At an early stage of this work F.Z. was supported through a stipend of the DFG funded graduate school GRK881. Some of the results discussed in this article were already presented in proceedings contributions [@Kaczmarek:2003ph; @Kaczmarek:2005uv; @Kaczmarek:2005uw]. [^1]: While a definition of the quark anti-quark potential can be given properly at zero temperature using large Wilson loops, at finite temperature a definition of the thermal modification of an appropriate potential energy between the quark anti-quark pair is complicated [@Karsch:2005ex]. [^2]: In pure gauge theory $r_{max}$ and $\tilde{\alpha}_{qq}(T)$ would be infinite below $T_c$. [^3]: Note here, however, the change in temperature scale from $T_c=202$ MeV in full to $T_c=270$ MeV in quenched QCD. [^4]: In Ref. [@Gava:1981qd] the Polyakov loop expectation value is calculated in pure gauge theory and the Debye mass, $m_D(T)/T=\sqrt{N_c/3}g(T)$, enters here through the resummation of the gluon polarization tensor. When changing from pure gauge to full QCD, quark loops will contribute to the polarization tensor. In this case resummation will lead to the Debye mass given in (\[LOscreen\]). Thus the flavor dependence in Eq. (\[lrenpert\]) at this level is given only by the Debye mass.
--- abstract: 'Solid $^4$He has been created off the melting curve by growth at nearly constant mass via the “blocked capillary" technique and growth from the $^4$He superfluid at constant temperature. The experimental apparatus allows injection of $^4$He atoms from superfluid directly into the solid. Evidence for the superfluid-like transport of mass through a sample cell filled with hcp solid $^4$He off the melting curve is found. This mass flux depends on temperature and pressure.' author: - 'M. W. Ray and R. B. Hallock' title: Observation of Unusual Mass Transport in Solid hcp $^4$He --- Experiments by Kim and Chan[@Kim2004a; @Kim2004b; @Kim2005; @Kim2006], who studied the behavior of a torsional oscillator filled with hcp solid $^4$He, showed a clear reduction in the period of the oscillator as a function of temperature at temperatures below T $\approx$ 250 mK. This observation was interpreted as evidence for the presence of “supersolid" behavior in hcp solid $^4$He. Subsequent work in a number of laboratories has confirmed the observation of a period shift, with the interpretation of mass decoupling in most cases in the 0.05 - 1 percent range, but with dramatically larger decoupling seen in quench-frozen samples in small geometries[@Rittner2007]. Aoki et al.[@Aoki2007] observed sample history dependence under some conditions. These observations and interpretations, among others, have kindled considerable interest and debate concerning solid hcp $^4$He. Early measurements by Greywall[@Greywall1977] showed no evidence for mass flow in solid helium. Work by the Beamish group also showed no evidence for mass flow in two sets of experiments involving Vycor[@Day2005] and narrow channels[@Day2006]. Sasaki et al.[@Sasaki2006] attempted to cause flow through solid helium on the melting curve, using a technique similar to that used by Bonfait et al.[@Bonfait1989] (that showed no flow).
Initial interpretations suggested that flow might be taking place through the solid[@Sasaki2006], but subsequent measurements have been interpreted to conclude that the flow was instead likely carried by small liquid regions at the interface between crystal faces and the surface of the sample cell[@Sasaki2007], which were shown to be present for helium on the melting curve. Recent work by Day and Beamish[@Day2007] showed that the shear modulus of hcp solid $^4$He increased at low temperature and demonstrated a temperature and $^3$He impurity dependence very similar to that shown by the torsional oscillator results. The theoretical situation is also complex, with clear analytic predictions that a supersolid cannot exist without vacancies (or interstitials)[@Prokofev2005], numerical predictions that no vacancies exist in the ground state of hcp solid $^4$He[@Boninsegni2006; @Clark2006; @Boninsegni2006a], and [*ab initio*]{} simulations that predict that in the presence of disorder the solid can demonstrate superflow[@Boninsegni2006; @Pollet2007; @Boninsegni2007] along imperfections. But, there are alternate points of view[@Anderson2007]. There has been no clear experimental evidence presented for the flow of atoms through solid hcp $^4$He. We have created a new approach, related to our “sandwich"[@Svistunov2006] design, with an important modification. The motivation was to attempt to study hcp solid $^4$He at pressures off the melting curve in a way that would allow a chemical potential gradient to be applied across the solid, but not by squeezing the hcp solid lattice directly. Rather, the idea is to inject helium atoms into the solid from the superfluid. To do this off the melting curve presents rather substantial experimental problems due to the high thermal conductivity of bulk superfluid helium. 
But, helium in the pores of Vycor, or other small pore geometries, is known to freeze at much higher pressures than does bulk helium[@Beamish1983; @Adams1987; @Lie-zhao1986]. Thus, the “sandwich" consists of solid helium held between two Vycor plugs, each containing superfluid $^4$He. The schematic design of our experiment is shown in figure 1. Three fill lines lead to the copper cell; two from room temperature, with no heat sink below 4K, enter via liquid reservoirs, R1, R2, atop the Vycor (1 and 2) and a third (3) is heat sunk at 1K and leads directly to the cell, bypassing the Vycor. The concept of the measurement is straightforward: (a) Create a solid sample Shcp and then (b) inject atoms into the solid Shcp by feeding atoms via line 1 or 2. So, for example, we increase the pressure on line 1 or 2 and observe whether there is a change in the pressure on the other line. We also have capacitive pressure gauges on the sample cell, C1 and C2, and can measure the pressure [*in situ*]{}. To conduct the experiment it is important that the helium in the Vycor, the liquid reservoirs atop the Vycor, and the lines that feed the Vycor contain $^4$He that does not solidify. This is accomplished by imposing a temperature gradient between R and Shcp across the Vycor, a gradient which would present insurmountable difficulties if the Vycor were not present. While the heat conducted down the Vycor rods in our current apparatus is larger than we expected, and this presently limits our lowest achievable temperature, we have none the less obtained interesting results. To study the flow characteristics of our Vycor rods, we measured the relaxation of pressure differences between line 1 and line 2 with superfluid $^4$He in the cell at $\sim$ 20 bar at 400 mK, with the tops of the Vycor rods in the range 1.7 $<$ T$_1$ = T$_2$ $<$ 2.0 K, temperatures similar to some of our measurements at higher pressures with solid helium in the sample cell. 
The relaxation was linear in time as might be expected for flow through a superleak at critical velocity. The pressure recorded by the capacitive gauges shifted as it should. An offset in the various pressure readings if T$_1$ $\ne$ T$_2$ was present due to a predictable fountain effect across the two Vycor superleaks. Our Vycor rods readily allow a flux of helium atoms, even for T$_1$, T$_2$ as high as 2.8K. To study solid helium, one approach is to grow from the superfluid phase (using ultra-high purity helium, assumed to have $\sim$ 300 ppb $^3$He). With the cell at T $\approx$ 400 mK, we added helium to lines 1 and 2 to increase the pressure from below the melting curve to $\approx$ 26.8 bar. Sample A grew in a few hours and was held stable for about a day before we attempted measurements on it. Then the pressure to line 1, P1, was changed abruptly from 27.1 to 28.6 bar (figure 2). There resulted a gradual decrease in the pressure in line 1 and a corresponding increase of the pressure in line 2. Note that pressure can increase in line 2 only if atoms move from line 1 to line 2, through the region of the cell occupied by solid helium, Shcp. We also observed a change in the pressure recorded on the capacitive pressure gauges on the cell, e.g. C1 (C1 and C2 typically agree). As these pressure changes evolved, we hoped to see the pressure in line 1 and line 2 converge, but the refrigerator stopped after 20 hours of operation on this particular run. Note that the change in P2 is rather linear (0.017 bar/hr) and does not show the sort of non-linear change with time that one would expect for the flow of a viscous fluid. Our conclusion is that helium has moved through the region of hcp solid $^4$He, while the solid was off the melting curve, and that this flow from line 1 to line 2 was at a limiting velocity, consistent with superflow. From the behavior of the pressure gauges on the cell, it is clear that atoms were also added to the solid. 
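The distinction invoked here between flow at a limiting (critical) velocity and viscous flow can be illustrated with two toy rate laws. This is a sketch only: the linear slope uses the measured 0.017 bar/hr from figure 2, but the viscous rate constant is an illustrative choice, and line 1 is idealized as a fixed reservoir:

```python
import math

def superflow_P2(t, P2_0=27.1, rate=0.017):
    """Flux at a critical (limiting) velocity is independent of the driving
    pressure difference, so P2 rises linearly in time (bar, hours)."""
    return P2_0 + rate * t

def viscous_P2(t, P1=28.6, P2_0=27.1, k=0.05):
    """Viscous (pressure-driven) flux is proportional to P1 - P2, so the
    pressure difference would relax exponentially; k is illustrative."""
    return P1 - (P1 - P2_0) * math.exp(-k * t)

# Equal increments of P2 per hour are the signature of flow at a limiting
# velocity; a viscous response would show decreasing increments as P1 - P2
# closes, i.e. visible curvature in P2(t).
```

This is why a nearly constant dP2/dt, despite a shrinking driving difference P1-P2, points to superflow rather than viscous transport.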
We next grew a new solid sample, B, again by growth from the superfluid, but we grew it at a faster rate and did not dwell for a day prior to measurements. This sample also demonstrated flow, with the pressure difference relaxing over about 5 hours after we stopped adding atoms to line 1. The pressure step applied to line 1 was from 26.4 to 28.0 bar. While $^4$He was slowly added to line 1, P2 increased. After the addition of atoms was stopped, the change in these pressures appeared to depend on P1-P2, with P2 showing curvature and regions of predominantly $\sim$ 0.076 and $\sim$ 0.029 bar/hr. Next, we used the same solid sample and moved it closer to the melting curve (1.25 K), but maintained it as a solid, sample C. We applied a pressure difference by increasing the pressure to line 1 from 26.0 to 28.4 bar, but in this case there was no increase in P2; the pressure difference P1-P2 appeared nearly constant, with a slight increase in pressure recorded in the cell. It is possible that this difference in behavior is an annealing effect, but it may also be due to a reduced ability to flow through the same number of conducting pathways. Next, after a warm up to room temperature, we prepared another sample, G, with P,T coordinates much like sample A, but used a time for growth, and pause prior to injection of helium, that was midway between those used for samples A and B. The results again showed flow, (P2 changing $\sim$ 0.008 bar/hr; with C1, C2 similar). Finally, we injected sample G again, but there were some modest instabilities with our temperatures. A day later, we injected again on this same sample, now two days old and termed H; and then again, denoting it sample J. Short term changes were observed in P1 and P2, but P1-P2 was essentially constant at $\approx$ 1.38 bar for more than 15 hours. 
In another sequence, we created sample M (like G), increased P1, observed flow, warmed it to 800 mK, saw no flow, cooled it to 400 mK, increased P1, saw no flow, decreased P1, saw flow, increased P1 again, and saw flow again. (Typically if an increase in P1 shows flow, a decrease in P1 will also show flow.) Yet another sample, Y, created similar to A, showed linear flow like A, but when warmed to 800 mK showed no flow. Whatever is responsible for the flow appears to change somewhat with time, sample history, and is clearly dependent on sample pressure and temperature. How can we reconcile such behavior when the measurements of Greywall and Day and Beamish saw no such flow[@Greywall1977; @Day2005; @Day2006]? The actual explanation is not clear to us, but there is a conceptual difference between the two types of experiments: These previous experiments pushed on the solid helium lattice; we inject atoms from the superfluid (which must have been the case for the experiments of Sasaki et al.[@Sasaki2006], on the melting curve). If predictions of superflow along structures in the solid[@Pollet2007; @Boninsegni2007; @Shevchenko1987] (e.g. dislocations of various sorts or grain boundaries) are correct, it is possible that by injecting atoms from the superfluid we can access these defects at their ends in a way that applying mechanical pressure to the lattice does not allow. We have also grown samples via the “blocked capillary" technique. In this case the valves leading to lines 1 and 2 were controlled and the helium in line 3 was frozen. Sample D was created this way and exited the melting curve in the higher pressure region of the bcc phase and settled near 28.8 bar. There then followed an injection of $^4$He atoms via line 1 (figure 3). 
Here we observed a lengthy period during which a substantial pressure difference between lines 1 and 2 did not relax, and to high accuracy we saw no change in the pressure of the solid as measured directly in the cell with the capacitive gauges C; C1 changed $<$ 0.0003 bar/hr. Behavior of this sort was also observed for the same sample, but with a much smaller (0.21 bar) pressure shift, with no flow observed. And, warming this sample to 900 mK produced no evidence for flow (sample E, not shown). Four other samples (F, T, V, W) were grown using the blocked capillary technique, with the lower pressure samples (T, V) demonstrating flow. Pressure appears to be an important variable, but not growth technique. To summarize the focus of our work to date, on figure 4 we show the location of some of the samples that we have created. Samples grown at higher pressure have not shown an ability to relax from an applied pressure difference over intervals longer than 10 hr; they appear to be insulators. Samples grown at lower pressures clearly show mass flux through the solid samples, and for some samples this flux appears to be at constant velocity. Samples warmed close to 800 mK, one warmed nearly to the melting curve at 1.25 K, and a sample created from the superfluid at 800 mK all showed no flow. We interpret the absence of flow for samples warmed to or created at 800 mK to likely rule out liquid channels as the conduction mechanism. Annealing may be present for the 1.25 K sample, but we doubt that this explains the 800 mK samples. Instead we suspect that whatever conducts the flow (perhaps grain boundaries or other defects) is temperature dependent. Sample pressure and temperature are important; sample history may be. The data of figure 2 can be used to deduce the mass flux through and into the sample.
From that 20-hour data record we conclude that over the course of the measurement 1 $\times$ 10$^{-4}$ grams of $^4$He must have moved through the cell from line 1 to line 2, and that about 4.5 $\times$ 10$^{-4}$ grams of $^4$He must have joined the solid. If we write M/t = $\xi\rho$vxy as the mass flux from line 1 to line 2, where M is the mass that moved in time t, $\rho$ is the density of helium, $\xi$ is the fraction of the helium that can flow, v is the velocity of flow in the solid, and xy is the cross section that supports that flow, we find $\xi$vxy = 8 $\times$ 10$^{-9}$ cm$^3$/sec. We know from measurements on the Vycor filled with superfluid that it should not limit the flow. So, if we take the diameter of our sample cell (0.635 cm), presuming that the full cross section conducts, we can deduce that $\xi$v = 2.52 $\times$ 10$^{-8}$ cm/sec, which, if, for arbitrary example, v = 100 $\mu$m/sec, results in $\xi$ = 2.5 $\times$ 10$^{-6}$. An alternate approach is to presume instead that what is conducting the flow from line 1 to line 2 is not the entire cross section of the sample cell but rather a collection of discrete structures (say, dislocation lines, or grain boundaries). If this were the case, with one dimension set at x = 0.5 nm, an atomic thickness, then for the flow from line 1 to line 2, $\xi$vy = 0.16 cm$^2$/sec. If we assume that $\xi$ = 1 for what moves along these structures then vy = 0.16 cm$^2$/sec. If we adopt the point of view that what can flow in such a thin dimension is akin to a helium film, we can take a critical velocity of something like 200 cm/sec[@Telschow1974]. In such a case, we find y = 8 $\times$ 10$^{-4}$ cm. If our structures conduct along an axis, where the axis is, say 0.5 nm x 0.5 nm, then we would need 1.6 $\times$ 10$^4$ such structures to act as pipe-like conduits.
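The chain of estimates above can be checked with a few lines of arithmetic. One input is not stated in the text and is assumed here: the mass density of hcp solid $^4$He near 27 bar, taken as $\rho \approx 0.19$ g/cm$^3$:

```python
import math

mass_moved = 1.0e-4            # g, moved from line 1 to line 2
t = 20 * 3600.0                # s, duration of the data record in figure 2
rho = 0.19                     # g/cm^3, hcp solid 4He near 27 bar (assumed)
xi_v_xy = mass_moved / (t * rho)        # xi * v * (cross section), cm^3/s

area = math.pi * (0.635 / 2.0) ** 2     # full cell cross section, cm^2
xi_v = 8e-9 / area                      # cm/s, using the rounded flux value
xi_at_100um_per_s = xi_v / 0.01         # if v = 100 micron/s = 0.01 cm/s

x = 0.5e-7                              # cm, one atomic thickness (0.5 nm)
xi_v_y = 8e-9 / x                       # cm^2/s, thin-sheet geometry
y = xi_v_y / 200.0                      # cm, taking v_c ~ 200 cm/s (film-like)
n_conduits = y / x                      # number of 0.5 nm x 0.5 nm conduits
```

Each computed quantity reproduces the corresponding figure quoted in the text (8 $\times$ 10$^{-9}$ cm$^3$/sec, 2.52 $\times$ 10$^{-8}$ cm/sec, 2.5 $\times$ 10$^{-6}$, 0.16 cm$^2$/sec, 8 $\times$ 10$^{-4}$ cm, and 1.6 $\times$ 10$^4$ conduits).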
This, given the volume of our cell between our two Vycor rods (0.6 cm$^3$), would require a density of such structures of at least 2.67 $\times$ 10$^4$ cm$^{-2}$, and roughly five times this number (10$^5$ cm$^{-2}$) to carry the flux that also contributes mass to the solid as its pressure increases. We have conducted experiments that show the first evidence for flow of helium through a region containing solid hcp $^4$He off the melting curve. The phase diagram appears to have two regions. Samples grown at lower pressures show flow, with flow apparently dependent on sample history, with reduced flow for samples at higher temperature, which is evidence for dependence on temperature. Samples grown at higher pressures show no clear evidence for any such flow for times longer than 10 hours. The temperatures utilized for this work are well above the temperatures at which much attention has been focused, but interesting behavior is seen. Further measurements will be required to establish in more detail how such behavior depends on pressure and temperature, and on sample history, and the relevance (if any) of our observations to the torsional oscillator and shear modulus experiments that were conducted at lower temperatures. We thank B. Svistunov and N. Prokofev for illuminating discussions, which motivated us to design this experiment. We also thank S. Balibar and J. Beamish for very helpful discussions and advice on the growth of solid helium, M.C.W. Chan, R.A. Guyer, H. Kojima, W.J. Mullin, J.D. Reppy, E. Rudavskii and Ye. Vekhov for discussions. This work was supported by NSF DMR 06-50092, CDRF 2853, UMass RTF funds and facilities supported by the NSF-supported MRSEC.
<http://online.itp.ucsb.edu/online/smatter_m06/svistunov/>
--- abstract: 'This paper studies vector quantile regression (VQR), which is a way to model the dependence of a random vector of interest with respect to a vector of explanatory variables so as to capture the whole conditional distribution, and not only the conditional mean. The problem of vector quantile regression is formulated as an optimal transport problem subject to an additional mean-independence condition. This paper provides a new set of results on VQR beyond the case with correct specification, which had been the focus of previous work. First, we show that even under misspecification, the VQR problem still has a solution which provides a general representation of the conditional dependence between random vectors. Second, we provide a detailed comparison with the classical approach of Koenker and Bassett in the case when the dependent variable is univariate, and we show that in that case VQR is equivalent to classical quantile regression with an additional monotonicity constraint.' author: - 'G. Carlier[^1], V. Chernozhukov [^2], A. Galichon [^3]' title: Vector quantile regression beyond correct specification --- **Keywords:** vector quantile regression, optimal transport, duality. Introduction {#intro} ============ Vector quantile regression was recently introduced in [@ccg] in order to generalize the technique of quantile regression when the dependent random variable is multivariate. Quantile regression, pioneered by Koenker and Bassett [@kb], provides a powerful way to study dependence between random variables assuming a linear form for the quantile of the endogenous variable $Y$ given the explanatory variables $X$. It has therefore become a very popular tool in many areas of economics, program evaluation, biometrics, etc. However, a well-known limitation of the approach is that $Y$ should be scalar so that its quantile map is defined.
When $Y$ is multivariate, there is no canonical notion of quantile, and the picture is less clear than in the univariate case[^4]. The approach proposed in [@ccg] is based on optimal transport ideas and can be described as follows. For a random vector $Y$ taking values in ${{\bf}{R}}^{d}$, we look for a random vector $U$ uniformly distributed on the unit cube $[0,1]^{d}$ and which is maximally correlated to $Y$; finding such a $U$ is an optimal transport problem. A celebrated result of Brenier [@brenier] implies that such an optimal $U$ is characterized by the existence of a convex function $\varphi $ such that $Y=\nabla \varphi (U)$. When $d=1$, of course, the optimal transport map of Brenier $\nabla \varphi =Q$ is the quantile of $Y$ and in higher dimensions, it still has one of the main properties of univariate quantiles, namely monotonicity. Thus Brenier’s map $\nabla \varphi $ is a natural candidate to be considered as the vector quantile of $Y$, and one advantage of such an approach is the pointwise relation $Y=\nabla \varphi (U)$ where $U$ is a uniformly distributed random vector which best approximates $Y$ in $L^{2}$. If, in addition, we are given another random vector $X$ capturing a set of observable explanatory variables, we wish to have a tractable method to estimate the conditional quantile of $Y$ given $X=x$, that is the map $u\in \lbrack 0,1]^{d}\mapsto Q(x,u)\in {{\bf}{R}}^{d}$. In the univariate case $d=1$, and if the conditional quantile is affine in $x$ i.e. $Q(x,u)=\alpha (u)+\beta (u)x$, the quantile regression method of Koenker and Bassett gives a constructive and powerful linear programming approach to compute the coefficients $\alpha (t)$ and $\beta (t)$ for any fixed $t\in \lbrack 0,1]$; this approach is dual to the linear programming problem: $$\sup_{(U_{t})}\{{{\bf}{E}}(U_{t}Y)\;:\;U_{t}\in \lbrack 0,1],{{\bf}{E}}(U_{t})=(1-t),\;{{\bf}{E}}(XU_{t})={{\bf}{E}}(X)\}. \label{kobas}$$Under correct specification, i.e.
when the true conditional quantile is affine in $x$, this variational approach estimates the true coefficients $\alpha (t)$ and $\beta (t)$. In [@ccg], we have shown that in the multivariate case as well, when the true vector quantile is affine in $x$, one may estimate it by a variational problem which consists in finding the uniformly distributed random variable $U$ which satisfies ${{\bf}{E}}(X|U)={{\bf}{E}}(X)$ (mean independence) and is maximally correlated with $Y$. The purpose of the present paper is to understand what these variational approaches tell us about the dependence between $Y$ and $X$ in the general case i.e. without assuming any particular form for the conditional quantile. Our main results are the following: - **A general representation of dependence:** we will characterize the solution of the optimal transport problem with a mean-independence constraint from [@ccg] and relate it to a relaxed form of vector quantile regression. To be more precise, our theorem \[qrasfoc\] below will provide the following general representation of the distribution of $\left( X,Y\right) $: $$\begin{split} Y& \in \partial \Phi _{X}^{\ast \ast }(U)\mbox{ with $X\mapsto \Phi_X(U)$ affine}, \\ & \Phi _{X}(U)=\Phi _{X}^{\ast \ast }(U)\mbox{ almost surely,} \\ & \mbox{ $U$ uniformly distributed on $[0,1]^d$},\;{{\bf}{E}}(X|U)={{\bf}{E}}(X), \end{split}$$where $\Phi _{x}^{\ast \ast }$ denotes the convex envelope of $u\rightarrow \Phi _{x}\left( u\right) $ for a fixed $x$, and $\partial $ denotes the subdifferential. The main ingredients are convex duality and an existence theorem for optimal dual variables. The latter is a non-trivial extension of Kantorovich duality: indeed, the existence of a Lagrange multiplier associated to the mean-independence constraint is not straightforward and we shall prove it thanks to Komlós’ theorem (theorem \[existdual\]).
Vector quantile regression is *under correct specification* if $\Phi _{x}\left( u\right) $ is convex for all $x$ in the support, in which case one can write $$\begin{split} Y& =\nabla \Phi _{X}(U)\mbox{ with $\Phi_X(.)$ convex, $X\mapsto \Phi_X(U)$ affine}, \\ & \mbox{ $U$ uniformly distributed on $[0,1]^d$},\;{{\bf}{E}}(X|U)={{\bf}{E}}(X). \end{split}$$While our previous paper [@ccg] focused on the case with correct specification, the results we obtain in the present paper are general. - **A precise link with classical quantile regression in the univariate case:** it was shown in [@ccg] that in the particular case when $d=1$ and under correct specification, classical quantile regression and vector quantile regression are equivalent. Going beyond correct specification here, we shall see that the optimal transport approach is equivalent (theorem \[equivkb\]) to a variant of (\[kobas\]) where one further imposes the monotonicity constraint that $t\mapsto U_{t}$ is nonincreasing (which is consistent with the fact that the true quantile $Q(x,t)$ is nondecreasing with respect to $t$). The paper is organized as follows. Section \[mvquant\] introduces vector quantiles through optimal transport. Section \[mvqr\] is devoted to a precise, duality based, analysis of the vector quantile regression beyond correct specification. Finally, we shall revisit in section \[univar\] the univariate case and then carefully relate the Koenker and Bassett approach to that of [@ccg]. Vector quantiles and optimal transport {#mvquant} ====================================== Let $(\Omega ,{\mathcal{F}},{\bf}{P})$ be some nonatomic probability space, and let $\left( X,Y\right) $ be a random vector, where the vector of explanatory variables $X$ is valued in ${{\bf}{R}}^{N}$ and the vector of dependent variables $Y$ is valued in ${{\bf}{R}}^{d}$.
Vector quantiles by correlation maximization {#mvquant1} -------------------------------------------- The notion of vector quantile was recently introduced by Ekeland, Galichon and Henry [@egh], Galichon and Henry [@gh] and was used in the framework of quantile regression in our companion paper [@ccg]. The starting point for this approach is the correlation maximization problem $$\max \{{{\bf}{E}}(V\cdot Y),\;\mathop{\mathrm{Law}}\nolimits(V)=\mu \} \label{otmd}$$where $\mu :=\mathop{\mathrm{uniform}}\nolimits([0,1]^{d})$ is the uniform measure on the unit $d$-dimensional cube $[0,1]^{d}$. This problem is equivalent to the optimal transport problem which consists in minimizing ${{\bf}{E}}(|Y-V|^{2})$ among uniformly distributed random vectors $V$. As shown in the seminal paper of Brenier [@brenier], this problem has a solution $U$ which is characterized by the condition $$Y=\nabla \varphi (U)$$for some (essentially uniquely defined) convex function $\varphi $ which is again obtained by solving a dual formulation of (\[otmd\]). Arguing that gradients of convex functions are the natural multivariate extension of monotone nondecreasing functions, the authors of [@egh] and [@gh] considered the function $Q:=\nabla \varphi $ as the vector quantile of $Y$. This function $Q=\nabla \varphi $ is by definition Brenier’s map, i.e. the optimal transport map (for the quadratic cost) between the uniform measure on $[0,1]^{d}$ and $\mathop{\mathrm{Law}}\nolimits(Y)$: **(Brenier’s theorem)** If $Y$ is a square-integrable random vector valued in ${\bf}{R}^{d}$, there is a unique map of the form $T=\nabla \varphi $ with $\varphi $ convex on $[0,1]^{d}$ such that $\nabla \varphi _{\#}\mu =\mathop{\mathrm{Law}}\nolimits(Y)$; this map is by definition the vector quantile function of $Y$. We refer to the textbooks [@villani], [@villani2] and [@sanbook] for a presentation of optimal transport theory, and to [@otme] for a survey of applications to economics.
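In dimension one, Brenier's map reduces to the monotone rearrangement: pairing a uniform grid comonotonically with the sorted sample values of $Y$ maximizes the correlation, which is exactly the quantile coupling $Y=Q_Y(U)$. The following numpy sketch (not from the paper; the sample size, seed and variable names are ours) illustrates this discrete version of the correlation-maximization problem:

```python
import numpy as np

# Discrete illustration of correlation maximization in dimension one:
# among all pairings of a fixed uniform grid u with the sample of Y,
# the comonotone (sorted) pairing maximizes sum_i u_i * y_i.
rng = np.random.default_rng(0)
n = 200
y = rng.normal(size=n)               # sample of Y
u = (np.arange(n) + 0.5) / n         # grid approximating uniform([0,1])

# Comonotone pairing: u_(1) <= ... <= u_(n) matched with y_(1) <= ... <= y_(n).
monotone_value = float(np.dot(u, np.sort(y)))

# By the rearrangement inequality, no other pairing does better.
for _ in range(200):
    perm = rng.permutation(n)
    assert np.dot(u, y[perm]) <= monotone_value + 1e-12
```

The multivariate analogue replaces sorting by an optimal transport (assignment) problem between the empirical law of $Y$ and a uniform grid on $[0,1]^d$.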
Conditional vector quantiles {#mvquant2} ---------------------------- Take an $N$-dimensional random vector $X$ of regressors, $\nu :=\mathop{\mathrm{Law}}\nolimits(X,Y)$, $m:=\mathop{\mathrm{Law}}\nolimits(X)$, $\nu =\nu ^{x}\otimes m$ where $m$ is the law of $X$ and $\nu ^{x}$ is the law of $Y$ given $X=x$. One can consider $Q(x,u)=\nabla \varphi (x,u)$ as the optimal transport between $\mu $ and $\nu ^{x}$. Under some regularity assumptions on $\nu ^{x}$, one can invert $Q(x,.)$: $Q(x,.)^{-1}=\nabla _{y}\varphi (x,.)^{\ast }$ (where the Legendre transform is taken for fixed $x$) and one can define $U$ through $$U=\nabla _{y}\varphi ^{\ast }(X,Y),\;Y=Q(X,U)=\nabla _{u}\varphi (X,U)$$$Q(X,.)$ is then the conditional vector quantile of $Y$ given $X$. There is, as we will see in dimension one, a variational principle behind this definition: - $U$ is uniformly distributed, independent from $X$ and solves: $$\label{otmdindep} \max \{ {{\bf}{E}}(V\cdot Y), \; \mathop{\mathrm{Law}}\nolimits(V)=\mu, V\perp \! \! \! \perp X\}$$ - the conditional quantile $Q(x,.)$ and its inverse are given by $Q(x,u)=\nabla_u \varphi(x,u)$, $F(x,y)=\nabla_y \psi(x,y)$ (the link between $F$ and $Q$ being $F(x, Q(x,u))=u$), the potentials $\psi$ and $\varphi$ are convex conjugates ($x$ being fixed) and solve $$\min \int \varphi(x,u) m(dx) \mu(du) + \int \psi(x,y) \nu(dx, dy) \; : \; \psi(x,y)+\varphi(x,u)\ge y\cdot u.$$ Note that if the conditional quantile function is affine in $X$ and $Y=Q(X,U)=\alpha(U) +\beta(U) X$ where $U$ is uniform and independent from $X$, the function $u\mapsto \alpha(u)+\beta(u)x$ should be the gradient of some function of $u$ which requires $$\alpha=\nabla \varphi, \; \beta=Db^T$$ for some potential $\varphi$ and some vector-valued function $b$ in which case, $Q(x,.)$ is the gradient of $u\mapsto \varphi(u)+b(u)\cdot x$.
Moreover, since quantiles are gradients of convex potentials one should also have $$u\in [0,1]^d \mapsto \varphi(u) +b(u) \cdot x \mbox{ is convex}.$$ Vector quantile regression {#mvqr} ========================== In the next paragraphs, we will impose a parametric form for the dependence of the vector quantile $Q(x,u)$ with respect to $x$. More specifically, we shall assume that $Q(x,u)$ is affine in $x$. In the scalar case ($d=1$), this problem is called quantile regression; we shall investigate that case in section \[univar\] below. Correlation maximization {#mvqr1} ------------------------ Without loss of generality we normalize $X$ so that it is centered $${{\bf}{E}}(X)=0.$$Our approach to vector quantile regression is based on a variation of the correlation maximization problem (\[otmdindep\]), where the independence constraint is replaced by a mean-independence constraint, that is$$\max \{{{\bf}{E}}(V\cdot Y),\;\mathop{\mathrm{Law}}\nolimits(V)=\mu ,\;{{\bf}{E}}(X|V)=0\}. \label{maxcorrmimd}$$where $\mu =\mathop{\mathrm{uniform}}\nolimits([0,1]^{d})$ is the uniform measure on the unit $d$-dimensional cube. An obvious connection with the specification of vector quantile regression (i.e. the validity of an affine in $x$ form for the conditional quantile) is given by: If $Y=\nabla \varphi(U)+ Db(U)^T X$ with - $u\mapsto \varphi(u)+ b(u)\cdot x$ convex and smooth for $m$-a.e. $x$, - $\mathop{\mathrm{Law}}\nolimits(U)=\mu$, ${{\bf}{E}}(X\vert U)=0$, then $U$ solves (\[maxcorrmimd\]). This result follows from [@ccg], but for the sake of completeness, we give a proof: $$Y=\nabla \Phi_X(U), \mbox{ with } \Phi_X(t)=\varphi(t)+b(t)\cdot X.$$ Let $V$ be such that ${\mathop{\mathrm{Law}}\nolimits}(V)=\mu$, ${{{\bf}E}}(X\vert V)=0$, then by Young’s inequality $$V\cdot Y \le \Phi_X(V)+\Phi_X^*(Y)$$ but $Y=\nabla \Phi_X(U)$ implies that $$U\cdot Y = \Phi_X(U)+\Phi_X^*(Y)$$ so taking expectations, and noting that mean independence gives ${{{\bf}E}}(\Phi_X(V))={{{\bf}E}}(\varphi(V))={{{\bf}E}}(\varphi(U))={{{\bf}E}}(\Phi_X(U))$, yields ${{{\bf}E}}(V\cdot Y)\le {{{\bf}E}}(U\cdot Y)$, which is the desired result.
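On discrete data, the correlation-maximization problem with a mean-independence constraint is a finite linear program over couplings. The sketch below (our own illustration, not the paper's estimator; `scipy` is assumed available and all names are ours) maximizes the correlation between a sample of $(X,Y)$ and a uniform grid, subject to the marginal constraints and to the discrete analogue of ${{\bf}{E}}(X|V)=0$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 8, 5                                  # data points and u-grid points
x = rng.normal(size=n); x -= x.mean()        # centered regressor, E(X)=0
y = 2 * x + rng.normal(size=n)               # dependent variable
u = (np.arange(m) + 0.5) / m                 # uniform grid on [0,1]

# Variables: theta[i, j] = mass coupling sample point (x_i, y_i) with u_j.
# Maximize sum_{ij} theta[i,j] * u_j * y_i subject to
#   sum_j theta[i,j] = 1/n        (keep the empirical law of (X,Y))
#   sum_i theta[i,j] = 1/m        (V uniform on the grid)
#   sum_i x_i theta[i,j] = 0      (mean independence, E(X|V=u_j)=0)
c = -np.outer(y, u).ravel()                  # linprog minimizes
A_eq, b_eq = [], []
for i in range(n):                           # row marginals
    row = np.zeros((n, m)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(1.0 / n)
for j in range(m):                           # column marginals + mean indep.
    col = np.zeros((n, m)); col[:, j] = 1.0
    A_eq.append(col.ravel()); b_eq.append(1.0 / m)
    colx = np.zeros((n, m)); colx[:, j] = x
    A_eq.append(colx.ravel()); b_eq.append(0.0)
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
assert res.status == 0   # feasible: the product coupling always satisfies all constraints
theta = res.x.reshape(n, m)
```

Feasibility is guaranteed because the independent (product) coupling $\theta_{ij}=1/(nm)$ satisfies every constraint once $X$ is centered, which mirrors the fact that independence implies mean independence.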
Duality {#mvqr2} ------- From now on, we do not assume a particular form for the conditional quantile and wish to study which information (\[maxcorrmimd\]) can give regarding the dependence of $X$ and $Y$. Once again, a good starting point is convex duality. As explained in detail in [@ccg], the dual of (\[maxcorrmimd\]) takes the form $$\label{dualmimc} \inf_{(\psi, \varphi, b)} {{\bf}{E}}(\psi(X,Y)+\varphi(U)) \; : \; \psi(x,y)+\varphi(t)+b(t)\cdot x\ge t\cdot y.$$ where $U$ is any uniformly distributed random vector on $[0,1]^d$ i.e. $\mathop{\mathrm{Law}}\nolimits(U)=\mu=\mathop{\mathrm{uniform}}\nolimits([0,1]^d)$ and the infimum is taken over continuous functions $\psi\in C(\mathop{\mathrm{spt}}\nolimits(\nu), {{\bf}{R}})$, $\varphi \in C([0,1]^d, {{\bf}{R}})$ and $b\in C([0,1]^d, {{\bf}{R}}^N)$ satisfying the pointwise constraint $$\label{constraintdudual} \psi(x,y)+\varphi(t)+b(t)\cdot x\ge t\cdot y, \; \; \forall (x,y,t)\in \mathop{\mathrm{spt}}\nolimits(\nu)\times [0,1]^d.$$ Since for fixed $(\varphi, b)$, the smallest $\psi$ which satisfies the pointwise constraint in (\[dualmimc\]) is given by the convex function $$\psi(x,y):=\max_{t\in [0,1]^d} \{ t\cdot y- \varphi(t) -b(t)\cdot x\}$$ one may equivalently rewrite (\[dualmimc\]) as the minimization over continuous functions $\varphi$ and $b$ of $$\int \max_{t\in [0,1]^d} \{ t\cdot y- \varphi(t) -b(t)\cdot x\} \nu(dx,dy)+\int_{[0, 1]^d} \varphi(t)\mu(dt).$$ We claim now that the infimum over continuous functions $(\varphi, b)$ coincides with the one over smooth or simply integrable functions. Indeed, let $b\in L^1((0,1)^d)^N$, $\varphi \in L^1((0,1)^d)$ and $\psi$ such that (\[constraintdudual\]) holds.
Let $\eps>0$ and extend $\varphi$ and $b$ to $Q_\eps:=[0,1]^d+B_{\eps}$ ($B_\eps$ being the closed Euclidean ball of center $0$ and radius $\eps$): $$\varphi_\eps(t):=\begin{cases} \varphi(t), \mbox{ if $t\in (0,1)^d$} \\ \frac{1}{\eps}, \mbox{ if $t\in Q_\eps \setminus (0,1)^d$}\end{cases}, \; b_\eps(t):=\begin{cases} b(t), \mbox{ if $t\in (0,1)^d$} \\ 0, \mbox{ if $t\in Q_\eps \setminus (0,1)^d$}\end{cases}$$ and for $(x,y)\in \mathop{\mathrm{spt}}\nolimits(\nu)$: $$\psi_\eps(x,y):=\max\Big(\psi(x,y), \max_{t\in Q_\eps \setminus (0,1)^d} (t\cdot y-\frac{1}{\eps})\Big)$$ then by construction $(\psi_\eps, \varphi_\eps, b_\eps)$ satisfies (\[constraintdudual\]) on $\mathop{\mathrm{spt}}\nolimits(\nu)\times Q_\eps$. Let $\rho \in C_c^{\infty}({{\bf}{R}}^d)$ be a centered, smooth probability density supported on $B_1$, and define the mollifiers $\rho_\delta:=\delta^{-d} \rho(\frac{.}{\delta})$, then for $\delta \in (0, \eps)$, defining the smooth functions $b_{\eps, \delta}:=\rho_\delta \star b_\eps$ and $\varphi_{\eps, \delta}:=\rho_\delta \star \varphi_\eps$, we have that $(\psi_\eps, \varphi_{\eps, \delta}, b_{\eps, \delta})$ satisfies (\[constraintdudual\]). By monotone convergence, $\int \psi_\eps \mbox{d} \nu$ converges to $\int \psi \mbox{d}\nu$, moreover $$\lim_{\delta \to 0} \int_{(0,1)^d} \varphi_{\eps, \delta} = \int_{(0,1)^d} \varphi_\eps=\int_{(0,1)^d} \varphi,$$ we deduce that the value of the minimization problem (\[dualmimc\]) can indifferently be obtained by minimizing over continuous, smooth or $L^1$ $\varphi$ and $b$’s.
The existence of optimal ($L^1$) functions $\psi, \varphi $ and $b$ is not totally obvious and is proven in the appendix under the following assumptions: - the support of $\nu$ is of the form $\mathop{\mathrm{spt}}\nolimits(\nu):=\overline{\Omega}$ where $\Omega$ is an open bounded convex subset of ${{\bf}{R}}^N\times {{\bf}{R}}^d$, - $\nu\in L^{\infty}(\Omega)$, - $\nu$ is bounded away from zero on compact subsets of $\Omega$, that is, for every compact $K\subset \Omega$ there exists $\alpha_K>0$ such that $\nu\ge \alpha_K$ a.e. on $K$. \[existdual\] Under the assumptions above, the dual problem (\[dualmimc\]) admits at least one solution. Vector quantile regression as optimality conditions {#mvqr3} --------------------------------------------------- Let $U$ solve (\[maxcorrmimd\]) and $(\psi, \varphi, b)$ solve its dual (\[dualmimc\]). Recall that, without loss of generality, we can take $\psi$ convex, given by $$\label{psiconj} \psi(x,y)=\sup_{t\in [0,1]^d} \{ t\cdot y- \varphi(t) -b(t)\cdot x\} .$$ The constraint of the dual is $$\label{contraintedual} \psi(x,y)+\varphi(t)+b(t)\cdot x\ge t\cdot y, \; \forall (x,y,t)\in \Omega\times [0,1]^d,$$ and the primal-dual relations give that, almost surely, $$\label{dualrel000} \psi(X,Y)+\varphi(U)+b(U)\cdot X= U\cdot Y.$$ Since $\psi$, given by (\[psiconj\]), is convex, this yields $$(-b(U), U)\in \partial \psi(X,Y), \mbox{ or, equivalently } (X,Y)\in \partial \psi^*(-b(U),U).$$ Problems (\[maxcorrmimd\]) and (\[dualmimc\]) have thus enabled us to find: - $U$ uniformly distributed with $X$ mean-independent from $U$, - $\varphi$ : $[0,1]^d \to {{\bf}{R}}$, $b$ : $[0,1]^d \to {{\bf}{R}}^N$ and $\psi$ : $\Omega\to {{\bf}{R}}$ convex, such that $(X,Y)\in \partial \psi ^{\ast }(-b(U),U)$. Specification of vector quantile regression rather asks whether one can write $Y=\nabla \varphi (U)+Db(U)^{T}X:=\nabla \Phi _{X}(U)$ with $u\mapsto \Phi _{x}(u):=\varphi (u)+b(u)\cdot x$ convex in $u$ for fixed $x$.
The smoothness of $\varphi $ and $b$ is actually related to this specification issue. Indeed, if $\varphi $ and $b$ were smooth then (by the envelope theorem) we would have $$Y=\nabla \varphi (U)+Db(U)^{T}X=\nabla \Phi _{X}(U).$$But even smoothness of $\varphi $ and $b$ is not enough to guarantee that the conditional quantile is affine in $x$, which would also require $u\mapsto \Phi _{x}(u)$ to be convex. Note also that if $\psi $ were smooth, we would then have $$U=\nabla _{y}\psi (X,Y),\;-b(U)=\nabla _{x}\psi (X,Y)$$so that $b$ and $\psi $ should be related by the vectorial Hamilton-Jacobi equation $$\nabla _{x}\psi (x,y)+b(\nabla _{y}\psi (x,y))=0. \label{hj}$$ In general (without assuming any smoothness), define $$\psi_x(y)=\psi(x,y).$$ We then have, thanks to (\[contraintedual\])-(\[dualrel000\]) $$U\in \partial \psi_X(Y) \mbox{ i.e. } Y \in \partial \psi_X^* (U).$$ The constraint of (\[dualmimc\]) also gives $$\psi_x(y) +\Phi_x(t)\ge t\cdot y$$ since the Legendre transform is order-reversing, this implies $$\label{inegdualps} \psi_x \ge \Phi_x^*$$ hence $$\psi_x^* \le (\Phi_x)^{**} \le \Phi_x$$ (where $\Phi_x^{**}$ denotes the convex envelope of $\Phi_x$). Duality between (\[maxcorrmimd\]) and (\[dualmimc\]) thus gives: \[qrasfoc\] Let $U$ solve (\[maxcorrmimd\]), $(\psi, \varphi, b)$ solve its dual (\[dualmimc\]) and set $\Phi_x(t):=\varphi(t)+b(t)\cdot x$ for every $(t,x)\in [0,1]^d \times \mathop{\mathrm{spt}}\nolimits(m)$; then $$\label{relaxspec} \Phi_X(U)= \Phi_X^{**}(U) \mbox{ and } U \in \partial \Phi_X^*(Y) \mbox{ i.e. } Y \in \partial \Phi_X^{**}(U)$$ almost surely. From the duality relation (\[dualrel000\]) and (\[inegdualps\]), we have $$U\cdot Y=\psi_X(Y)+\Phi_X(U)\ge \Phi_X^*(Y)+\Phi_X(U)$$ so that $U\cdot Y= \Phi_X^*(Y)+\Phi_X(U)$ and then $$\Phi_X^{**}(U)\ge U\cdot Y-\Phi_X^*(Y)=\Phi_X(U).$$ Hence, $\Phi_X(U)=\Phi_X^{**}(U)$ and $U\cdot Y=\Phi_X^*(Y)+\Phi_X^{**}(U)$ i.e.
$U\in \partial \Phi_X^*(Y)$ almost surely, and the latter is equivalent to the requirement that $Y \in \partial \Phi_X^{**}(U)$. The previous theorem thus gives the following interpretation of the correlation maximization with a mean independence constraint (\[maxcorrmimd\]) and its dual (\[dualmimc\]). These two variational problems in duality lead to the pointwise relations (\[relaxspec\]) which can be seen as best approximations of a specification assumption: $$Y=\nabla \Phi _{X}(U),\;(X,U)\mapsto \Phi _{X}(U)\mbox{ affine in $X$, convex in $U$}$$with $U$ uniformly distributed and ${{{\bf}E}}(X\vert U)=0$. Indeed in (\[relaxspec\]), $\Phi _{X}$ is replaced by its convex envelope, the uniform random variable $U$ solving (\[maxcorrmimd\]) is shown to lie a.s. in the contact set $\Phi _{X}=\Phi _{X}^{\ast \ast }$ and the gradient of $\Phi_X$ (which may not be well-defined) is replaced by a subgradient of $\Phi_X^{**}$. The univariate case {#univar} =================== We now study in detail the case when the dependent variable $Y$ is scalar, i.e. $d=1$. As before, let $(\Omega ,{\mathcal{F}},{\bf}{P})$ be some nonatomic probability space and $Y$ be some *univariate* random variable defined on this space. Denoting by $F_{Y}$ the distribution function of $Y$: $$F_{Y}(\alpha ):={\bf}{P}(Y\leq \alpha ),\;\forall \alpha \in {{\bf}{R}}$$the *quantile* function of $Y$, $Q_{Y}=F_{Y}^{-1}$ is the generalized inverse of $F_{Y}$ given by the formula: $$Q_{Y}(t):=\inf \{\alpha \in {{\bf}{R}}\;:\;F_{Y}(\alpha )>t\}\mbox{ for all }t\in (0,1). \label{defqy}$$Let us now recall two well-known facts about quantiles: - $\alpha=Q_Y(t)$ is a solution of the convex minimization problem $$\label{minquant} \min_{\alpha} \{{{\bf}{E}}((Y-\alpha)_+)+ \alpha(1-t)\}$$ - there exists a uniformly distributed random variable $U$ such that $Y=Q_Y(U)$.
Moreover, among uniformly distributed random variables, $U$ is maximally correlated to $Y$ in the sense that it solves $$\label{oteasy} \max \{ {{\bf}{E}}(VY), \; \mathop{\mathrm{Law}}\nolimits(V)=\mu\}$$ where $\mu:=\mathop{\mathrm{uniform}}\nolimits([0,1])$ is the uniform measure on $[0,1]$. Of course, when $\mathop{\mathrm{Law}}\nolimits(Y)$ has no atom, i.e. when $F_Y$ is continuous, $U$ is unique and given by $U=F_Y(Y)$. Problem (\[oteasy\]) is the easiest example of an optimal transport problem one can think of. The decomposition of a random variable $Y$ as the composition of a monotone nondecreasing function and a uniformly distributed random variable is called a *polar factorization* of $Y$, the existence of such decompositions goes back to Ryff [@ryff] and the extension to the multivariate case (by optimal transport) is due to Brenier [@brenier]. We therefore see that there are basically two different approaches to study or estimate quantiles: - the *local* or “$t$ by $t$” approach which consists, for a fixed probability level $t$, in using directly formula (\[defqy\]) or the minimization problem (\[minquant\]) (or some approximation of it), this can be done very efficiently in practice but has the disadvantage of forgetting the fundamental global property of the quantile function: it should be monotone in $t$, - the global approach (or polar factorization approach), where quantiles of $Y$ are defined as all nondecreasing functions $Q$ for which one can write $Y=Q(U)$ with $U$ uniformly distributed; in this approach, one rather tries to recover directly the whole monotone function $Q$ (or the uniform variable $U$ that is maximally correlated to $Y$), in this global approach, one should rather use the optimization problem (\[oteasy\]). Let us assume now that, in addition to the random variable $Y$, we are also given a random vector $X\in {{\bf}{R}}^N$ which we may think of as being a list of explanatory variables for $Y$.
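Before turning to the conditional setting, the "$t$ by $t$" characterization of the quantile as a minimizer can be checked numerically. The sketch below (our own illustration; the distribution, sample size and grid are arbitrary choices) verifies on simulated data that the empirical $t$-quantile nearly minimizes $\alpha\mapsto {{\bf}{E}}((Y-\alpha)_+)+\alpha(1-t)$:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.exponential(size=10_000)   # sample of Y
t = 0.7
q = np.quantile(y, t)              # empirical quantile Q_Y(t)

def objective(alpha):
    # E((Y - alpha)_+) + alpha * (1 - t): the convex criterion whose
    # minimizers are the t-quantiles of Y
    return np.mean(np.maximum(y - alpha, 0.0)) + alpha * (1 - t)

# Minimize the piecewise-linear objective over a fine grid of alpha values;
# the grid minimizer should land (up to grid spacing) on the quantile.
grid = np.linspace(y.min(), y.max(), 2001)
vals = [objective(a) for a in grid]
best = grid[int(np.argmin(vals))]
assert abs(best - q) < 0.05
```

The same data also illustrate the polar factorization: sorting $y$ against an equispaced grid on $[0,1]$ is precisely writing $Y=Q_Y(U)$ with $U$ the (empirical) rank variable.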
We are therefore interested in the dependence between $Y$ and $X$ and in particular the conditional quantiles of $Y$ given $X=x$. In the sequel we shall denote by $\nu$ the joint law of $(X,Y)$, $\nu:=\mathop{\mathrm{Law}}\nolimits(X,Y)$ and assume that $\nu$ is compactly supported on ${{\bf}{R}}^{N+1}$ (i.e. $X$ and $Y$ are bounded). We shall also denote by $m$ the first marginal of $\nu$ i.e. $m:={\Pi_{X}}_\# \nu=\mathop{\mathrm{Law}}\nolimits(X)$. We shall denote by $F(x,y)$ the conditional cdf: $$F(x,y):={\bf}{P}(Y\le y \vert X=x)$$ and $Q(x,t)$ the conditional quantile $$Q(x,t):=\inf\{\alpha\in {{\bf}{R}} \; : \; F(x,\alpha)>t\}.$$ For the sake of simplicity we shall also assume that: - for $m$-a.e. $x$, $t\mapsto Q(x,t)$ is continuous and increasing (so that for $m$-a.e. $x$, identities $Q(x, F(x,y))=y$ and $F(x, Q(x,t))=t$ hold for every $y$ and every $t$), - the law of $(X,Y)$ does not charge nonvertical hyperplanes i.e. for every $(\alpha, \beta)\in {{\bf}{R}}^{1+N}$, ${\bf}{P}(Y=\alpha+\beta\cdot X)=0$. Finally we denote by $\nu^x$ the conditional probability of $Y$ given $X=x$ so that $\nu=m\otimes \nu^x$. A variational characterization of conditional quantiles {#univar1} ------------------------------------------------------- Let us define the random variable $U:=F(X,Y)$, then by construction: $$\begin{split} {\bf}{P}(U< t\vert X=x)&={\bf}{P}(F(x,Y)<t \vert X=x)={\bf}{P}(Y<Q(x,t) \vert X=x) \\ &=F(x,Q(x,t))=t. \end{split}$$ From this elementary observation we deduce that - $U$ is independent from $X$ (since its conditional cdf does not depend on $x$), - $U$ is uniformly distributed, - $Y=Q(X,U)$ where $Q(x,.)$ is increasing. This easy remark leads to a sort of conditional polar factorization of $Y$ with an independence condition between $U$ and $X$. We would like to emphasize now that there is a variational principle behind this conditional decomposition. Recall that we have denoted by $\mu$ the uniform measure on $[0,1]$. 
Let us consider the variant of the optimal transport problem (\[oteasy\]) where one further requires $U$ to be independent from the vector of regressors $X$: $$\label{otindep1d} \max \{ {{\bf}{E}}(VY), \; \mathop{\mathrm{Law}}\nolimits(V)=\mu, \; V \perp \! \! \! \perp X \}.$$ which in terms of joint law $\theta=\mathop{\mathrm{Law}}\nolimits(X,Y, U)$ can be written as $$\label{mk1} \max_{\theta\in I (\nu, \mu)} \int u\cdot y \; \theta(dx, dy, du)$$ where $I(\nu, \mu)$ consists of probability measures $\theta$ on ${{\bf}{R}}^{N+1}\times [0,1]$ such that the $(X,Y)$ marginal of $\theta$ is $\nu$ and the $(X,U)$ marginal of $\theta$ is $m\otimes \mu$. Problem (\[mk1\]) is a linear programming problem and our assumptions easily imply that it possesses solutions, moreover its dual formulation (see [@ccg] for details) reads as the minimization of $$\label{mk1dual} \inf J(\varphi, \psi)= \int \varphi(x,u)m(dx) \mu(du)+\int \psi(x,y) \nu(dx, dy)$$ among pairs of potentials $\varphi$, $\psi$ that pointwise satisfy the constraint $$\label{constr1} \varphi(x,u)+\psi(x,y)\ge uy.$$ Rewriting $J(\varphi, \psi)$ as $$J(\varphi, \psi)= \int \Big( \int \varphi(x,u) \mu(du)+\int \psi(x,y) \nu^x(dy) \Big) m(dx)$$ and using the fact that the right hand side of the constraint (\[constr1\]) has no dependence in $x$, we observe that (\[mk1dual\]) can actually be solved “$x$ by $x$”. More precisely, for fixed $x$ in the support of $m$, $\varphi(x,.)$ and $\psi(x,.)$ are obtained by solving $$\inf \int f(u) \mu(du)+ \int g(y) \nu^x(dy) \; : \; f(u)+g(y)\ge uy$$ which appears naturally in optimal transport and is well-known to admit a solution which is given by a pair of convex conjugate functions (see [@villani], [@villani2]). In other words, the infimum in (\[mk1dual\]) is attained by a pair $\varphi$ and $\psi$ such that for $m$-a.e.
$x$, $\varphi(x,.)$ and $\psi(x,.)$ are conjugate convex functions: $$\varphi(x,u)=\sup_{y} \{uy-\psi(x,y)\}, \; \psi(x,y):=\sup_{u} \{uy-\varphi(x,u)\}.$$ Since $\varphi(x,.)$ is convex, it is differentiable a.e. and then $\partial_u \varphi(x,u)$ is defined for a.e. $u$, moreover $\partial_u \varphi(x,.)_\#\mu=\nu^x$; hence $\partial_u \varphi(x,.)$ is a nondecreasing map which pushes $\mu$ forward to $\nu^x$: it thus coincides with the conditional quantile $$\label{quantilopt} \partial_u \varphi(x,t)=Q(x,t) \mbox{ for $m$-a.e. $x$ and every $t$}.$$ We then have the following variational characterization of conditional quantiles: Let $\varphi$ and $\psi$ solve (\[mk1dual\]). Then for $m$-a.e. $x$, the conditional quantile $Q(x,.)$ is given by: $$Q(x,.) =\partial_u \varphi(x,.)$$ and the conditional cdf $F(x,.)$ is given by: $$F(x,.)=\partial_y \psi(x,.).$$ Let now $\theta$ solve (\[mk1\]), there is a unique $U$ such that $\mathop{\mathrm{Law}}\nolimits(X,Y, U)=\theta$ (so that $U$ is uniformly distributed and independent from $X$) and it is given by $Y=\partial_u \varphi(X,U)$ almost surely. The fact that identity (\[quantilopt\]) holds for every $t$ and $m$-a.e. $x$ comes from the continuity of the conditional quantile. The second identity comes from the continuity of the conditional cdf.
Now, duality tells us that the maximum in (\[mk1\]) coincides with the infimum in (\[mk1dual\]), so that if $\theta\in I(\nu, \mu)$ is optimal for (\[mk1\]) and $({\widetilde{X}}, {\widetilde{Y}}, {\widetilde{U}})$ has law $\theta$[^5], we have $${{{\bf}E}}({\widetilde{U}}{\widetilde{Y}})={{{\bf}E}}(\varphi({\widetilde{X}},{\widetilde{U}})+\psi({\widetilde{X}},{\widetilde{Y}})).$$ Hence, almost surely $${\widetilde{U}}{\widetilde{Y}}=\varphi({\widetilde{X}},{\widetilde{U}})+\psi({\widetilde{X}},{\widetilde{Y}}).$$ which, since $\varphi(x,.)$ and $\psi(x,.)$ are conjugate and $\varphi(x,.)$ is differentiable, gives $$\label{sousdiff} {\widetilde{Y}}= \partial_u \varphi({\widetilde{X}}, {\widetilde{U}})=Q({\widetilde{X}}, {\widetilde{U}}).$$ Since $F(x,.)$ is the inverse of the conditional quantile, we can invert the previous relation as $$\label{sousdiff2} {\widetilde{U}}= \partial_y \psi({\widetilde{X}}, {\widetilde{Y}})=F({\widetilde{X}}, {\widetilde{Y}}).$$ We then define $$U:= \partial_y \psi(X, Y)=F(X,Y),$$ then obviously, by construction ${\mathop{\mathrm{Law}}\nolimits}(X,Y, U)=\theta$ and $Y=\partial_u \varphi(X,U)=Q(X,U)$ almost surely. If ${\mathop{\mathrm{Law}}\nolimits}(X,Y, U)=\theta$, then as observed above, necessarily $U=F(X,Y)$ which proves the uniqueness claim. To sum up, thanks to the two problems (\[mk1\]) and (\[mk1dual\]), we have been able to find a *conditional polar factorization* of $Y$ as $$\label{polarfact1} Y=Q(X,U), \; \mbox{ $Q$ nondecreasing in $U$, $U$ uniform, $U {\perp \! \! \! \perp}X$}.$$ One obtains $U$ thanks to the correlation maximization with an independence constraint problem (\[otindep1d\]) and one obtains the primitive of $Q(X,.)$ by the dual problem (\[mk1dual\]). In this decomposition, it is very demanding to ask that $U$ is independent from the regressors $X$, in turn, the function $Q(X,.)$ is just monotone nondecreasing.
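The construction $U:=F(X,Y)$ is easy to illustrate on a toy model with a known conditional cdf. In the sketch below (our own example, not from the paper; the model, seed and thresholds are arbitrary), $Y=X+e^{X}\varepsilon$ with $\varepsilon\sim N(0,1)$, so $F(x,y)=\Phi((y-x)/e^{x})$, and the resulting $U$ is uniform and independent of $X$:

```python
import numpy as np
from scipy.special import ndtr   # standard normal cdf, vectorized

# Toy model with known conditional quantile:
#   Y = X + exp(X) * eps, eps ~ N(0,1)  =>  Q(x,t) = x + exp(x) * Phi^{-1}(t)
# so the conditional cdf is F(x,y) = Phi((y - x) / exp(x)), and the
# construction U := F(X, Y) should yield a uniform variable independent of X.
rng = np.random.default_rng(4)
n = 50_000
x = rng.normal(size=n)
y = x + np.exp(x) * rng.normal(size=n)

u = ndtr((y - x) / np.exp(x))    # U = F(X, Y)

# Uniformity and (here, exact) independence from X show up empirically as:
assert abs(u.mean() - 0.5) < 0.01
assert abs(np.corrcoef(x, u)[0, 1]) < 0.02
```

Note that zero correlation is only a sanity check; in the model above $U$ is independent of $X$ by construction, which is stronger than the mean independence required in the next paragraphs.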
In practice, the econometrician rather looks for a specific form of $Q$ (linear in $X$ for instance), which by duality will amount to relaxing the independence constraint. We shall develop this idea in detail in the next paragraphs and relate it to classical quantile regression. Quantile regression: from specification to quasi-specification {#univar2} -------------------------------------------------------------- From now on, we normalize $X$ to be centered i.e. assume (and this is without loss of generality) that $${{\bf}{E}}(X)=0.$$ We also assume that $m:=\mathop{\mathrm{Law}}\nolimits(X)$ is nondegenerate in the sense that its support contains some ball centered at ${{\bf}{E}}(X)=0$. Since the seminal work of Koenker and Bassett [@kb], it has been widely accepted that a convenient way to estimate conditional quantiles is to stipulate an affine form with respect to $x$ for the conditional quantile. Since a quantile function should be monotone in its second argument, this leads to the following definition: Quantile regression is under correct specification if there exist $(\alpha, \beta)\in C([0,1], {{\bf}{R}})\times C([0,1], {{\bf}{R}}^N)$ such that for $m$-a.e. $x$ $$\label{monqr} t\mapsto \alpha(t)+\beta(t)\cdot x \mbox{ is increasing on $[0,1]$}$$ and $$\label{linearcq} Q(x,t)=\alpha(t)+ x\cdot \beta(t),$$ for $m$-a.e. $x$ and every $t\in [0,1]$. If (\[monqr\])-(\[linearcq\]) hold, quantile regression is under correct specification with regression coefficients $(\alpha, \beta)$. Specification of quantile regression can be characterized as follows: Let $(\alpha, \beta)$ be continuous and satisfy (\[monqr\]). Quantile regression is under correct specification with regression coefficients $(\alpha, \beta)$ if and only if there exists $U$ such that $$\label{polarfqrind} Y=\alpha(U)+X\cdot \beta(U) \mbox{ a.s.} , \; \mathop{\mathrm{Law}}\nolimits(U)=\mu, \; U \perp \! \! \!
\perp X.$$ The fact that specification of quantile regression implies decomposition (\[polarfqrind\]) has already been explained in paragraph \[univar1\]. Let us assume (\[polarfqrind\]), and compute $$\begin{split} F(x, \alpha(t)+\beta(t)\cdot x)&={{\bf}{P}}(Y\leq \alpha(t)+\beta(t)\cdot x\vert X=x)\\ &= {{\bf}{P}}(\alpha(U)+x\cdot \beta(U) \leq \alpha(t)+\beta(t)\cdot x\vert X=x)\\ &={{\bf}{P}}(U\leq t \vert X=x)={{\bf}{P}}(U\le t)=t \end{split}$$ so that $Q(x,t)=\alpha(t)+\beta(t)\cdot x$. Koenker and Bassett showed that, for a fixed probability level $t$, the regression coefficients $(\alpha, \beta)$ can be estimated by quantile regression i.e. the minimization problem $$\label{kb0} \inf_{(\alpha, \beta) \in {{\bf}{R}}^{1+N}} {{\bf}{E}}(\rho_t(Y-\alpha - \beta\cdot X))$$ where the penalty $\rho_t$ is given by $\rho_t(z) := tz_+ +(1-t)z_-$ with $z_-$ and $z_+$ denoting the negative and positive parts of $z$. For further use, note that (\[kb0\]) can conveniently be rewritten as $$\label{kb1} \inf_{(\alpha, \beta) \in {{\bf}{R}}^{1+N}} \{ {{\bf}{E}}((Y-\alpha-\beta \cdot X)_+)+(1-t) \alpha\}.$$ As already noticed by Koenker and Bassett, this convex program admits as dual formulation $$\label{dt} \sup \{{{\bf}{E}}(U_t Y) \; : \; U_t \in [0,1], \; {{\bf}{E}}(U_t)=(1-t), \; {{\bf}{E}}(U_t X)=0 \}.$$ An optimal $(\alpha, \beta)$ for (\[kb1\]) and an optimal $U_t$ in (\[dt\]) are related by the complementary slackness condition: $$Y>\alpha +\beta \cdot X \Rightarrow U_t=1, \mbox{ and } \; Y<\alpha + \beta \cdot X \Rightarrow U_t=0.$$ Note that $\alpha$ appears naturally as a Lagrange multiplier associated to the constraint ${{\bf}{E}}(U_t)=(1-t)$ and $\beta$ as a Lagrange multiplier associated to ${{\bf}{E}}(U_t X)=0$.
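The minimization in the Koenker-Bassett approach is itself a linear program once slack variables are introduced for the positive part. A minimal sketch (our own illustration, assuming `scipy` is available; the toy data-generating process and all names are ours) solving the sample analogue of the penalized problem for a scalar regressor:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(-1, 1, size=n)                             # scalar regressor
y = 1.0 + 2.0 * x + (0.5 + 0.2 * x) * rng.normal(size=n)   # toy data
t = 0.5                                                     # probability level

# LP form of the sample objective
#   min (1/n) sum_i s_i + (1-t) alpha
#   s.t. s_i >= y_i - alpha - beta * x_i,  s_i >= 0,
# with s_i playing the role of the positive part (y_i - alpha - beta x_i)_+.
# Variables: [alpha, beta, s_1, ..., s_n]; alpha and beta are free.
c = np.concatenate(([1 - t, 0.0], np.full(n, 1.0 / n)))
A_ub = np.zeros((n, n + 2))
A_ub[:, 0] = -1.0          # -alpha
A_ub[:, 1] = -x            # -beta * x_i
A_ub[:, 2:] = -np.eye(n)   # -s_i
b_ub = -y                  # encodes y_i - alpha - beta x_i - s_i <= 0
bounds = [(None, None), (None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
alpha, beta = res.x[0], res.x[1]
```

At the optimum, the first-order condition forces the fraction of observations above the fitted line to be close to $1-t$, which is the sample counterpart of the constraint ${{\bf}{E}}(U_t)=(1-t)$ in the dual.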
Since $\nu=\mathop{\mathrm{Law}}\nolimits(X,Y)$ gives zero mass to nonvertical hyperplanes, we may simply write $$\label{frombetatoU} U_t=\mathbf{1}_{\{Y>\alpha +\beta \cdot X\}}$$ and thus the constraints ${{\bf}{E}}(U_t)=(1-t)$, ${{\bf}{E}}(XU_t)=0$ read $$\label{normaleq} {{\bf}{E}}( \mathbf{1}_{\{Y> \alpha+ \beta \cdot X \}})={{\bf}{P}}(Y> \alpha+ \beta \cdot X) = (1-t),\; {{\bf}{E}}(X \mathbf{1}_{\{Y> \alpha+ \beta \cdot X \}} ) =0$$ which are simply the first-order conditions for (\[kb1\]). Any pair $(\alpha, \beta)$ which solves[^6] the optimality conditions (\[normaleq\]) for the Koenker and Bassett approach will be denoted $$\alpha=\alpha^{QR}(t), \beta=\beta^{QR}(t)$$ and the variable $U_t$ solving (\[dt\]) given by (\[frombetatoU\]) will similarly be denoted $U_t^{QR}$ $$\label{utqr} U_t^{QR}:=\mathbf{1}_{\{Y>\alpha^{QR}(t) +\beta^{QR}(t) \cdot X\}}.$$ Note that in the previous considerations the probability level $t$ is fixed; this is what we called the “$t$ by $t$” approach. For this approach to be consistent with conditional quantile estimation, if we allow $t$ to vary we should add an additional monotonicity requirement: Quantile regression is under quasi-specification if there exists for each $t$, a solution $(\alpha^{QR}(t), \beta^{QR}(t))$ of (\[normaleq\]) (equivalently the minimization problem (\[kb0\])) such that $t\in [0,1]\mapsto (\alpha^{QR}(t), \beta^{QR}(t))$ is continuous and, for $m$-a.e. $x$ $$\label{monqrqs} t\mapsto \alpha^{QR}(t)+\beta^{QR}(t)\cdot x \mbox{ is increasing on $[0,1]$}.$$ A first consequence of quasi-specification is given by \[qrqsdec\] If quantile regression is under quasi-specification and if we define $U^{QR}:=\int_0^1 U_t^{QR} dt$ (recall that $U_t^{QR}$ is given by (\[utqr\])) then: - $U^{QR}$ is uniformly distributed, - $X$ is mean-independent from $U^{QR}$ i.e. ${{\bf}{E}}(X\vert U^{QR})={{\bf}{E}}(X)=0$, - $Y=\alpha^{QR}(U^{QR})+ \beta^{QR}(U^{QR})\cdot X$ a.s.
Moreover $U^{QR}$ solves the correlation maximization problem with a mean-independence constraint: $$\label{maxcorrmi} \max \{ {{\bf}{E}}(VY), \; \mathop{\mathrm{Law}}\nolimits(V)=\mu, \; {{\bf}{E}}(X\vert V)=0\}.$$ Obviously $$U_t^{QR}=1\Rightarrow U^{QR} \ge t, \mbox{ and } \; U^{QR}>t \Rightarrow U_t^{QR}=1$$ hence ${{\bf}{P}}(U^{QR}\ge t)\ge {{\bf}{P}}(U_t^{QR}=1)={{\bf}{P}}(Y> \alpha^{QR}(t)+\beta^{QR}(t)\cdot X)=(1-t)$ and ${{\bf}{P}}(U^{QR}> t)\le {{\bf}{P}}(U_t^{QR}=1)=(1-t)$ which proves that $U^{QR}$ is uniformly distributed and $\{U^{QR}>t\}$ coincides with $\{U^{QR}_t=1\}$ up to a set of null probability. We thus have ${{{\bf}E}}(X {{\bf 1}}_{U^{QR}>t})={{{\bf}E}}(X U_t^{QR})=0$; by a standard approximation argument we deduce that ${{{\bf}E}}(Xf(U^{QR}))=0$ for every $f\in C([0,1], {{{\bf}R}})$ which means that $X$ is mean-independent from $U^{QR}$. As already observed, $U^{QR}>t$ implies that $Y>\alpha^{QR}(t)+\beta^{QR}(t)\cdot X$; in particular $Y\ge \alpha^{QR}(U^{QR}-\delta)+\beta^{QR}(U^{QR}- \delta) \cdot X$ for $\delta>0$, and letting $\delta\to 0^+$ and using the continuity of $(\alpha^{QR}, \beta^{QR})$ we get $Y\ge \alpha^{QR}(U^{QR})+\beta^{QR}(U^{QR}) \cdot X$. The converse inequality is obtained similarly by remarking that $U^{QR}<t$ implies that $Y\le \alpha^{QR}(t)+\beta^{QR}(t)\cdot X$. Let us now prove that $U^{QR}$ solves [(\[maxcorrmi\])]{}. Take $V$ uniformly distributed, such that $X$ is mean-independent from $V$, and set $V_t:={{\bf 1}}_{\{V>t \}}$; we then have ${{{\bf}E}}(X V_t)=0$, ${{{\bf}E}}(V_t)=(1-t)$ but since $U_t^{QR}$ solves [(\[dt\])]{} we have ${{{\bf}E}}(V_t Y)\le {{{\bf}E}}(U_t^{QR}Y)$. Observing that $V=\int_0^1 V_t dt$ and integrating the previous inequality with respect to $t$ gives ${{{\bf}E}}(VY)\le {{{\bf}E}}(U^{QR}Y)$ so that $U^{QR}$ solves [(\[maxcorrmi\])]{}.
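The proof above relies twice on the layer-cake identity $V=\int_0^1 \mathbf{1}_{\{V>t\}}\,dt$ for a variable with values in $[0,1]$; the following is a quick numerical check, where the midpoint discretization of the $t$-integral is our own illustrative choice.

```python
# Quick numerical check of the layer-cake identity used in the proof:
# for V with values in [0, 1], V = \int_0^1 1_{V > t} dt.
import numpy as np

rng = np.random.default_rng(1)
V = rng.uniform(size=1000)
ts = (np.arange(2000) + 0.5) / 2000            # midpoint grid on [0, 1]
V_rebuilt = (V[:, None] > ts[None, :]).mean(axis=1)
print(np.max(np.abs(V_rebuilt - V)))           # bounded by the grid spacing
```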
Let us continue with a uniqueness argument for the mean-independent decomposition given in proposition \[qrqsdec\]: \[uniquedec\] Let us assume that $$Y=\alpha(U)+\beta(U)\cdot X=\overline{\alpha} (\overline{U})+ \overline{\beta}(\overline{U})\cdot X$$ with: - both $U$ and $\overline{U}$ uniformly distributed, - $X$ is mean-independent from $U$ and $\overline{U}$: ${{\bf}{E}}(X\vert U)={{\bf}{E}}(X\vert \overline{U})=0$, - $\alpha, \beta, \overline{\alpha}, \overline{\beta}$ are continuous on $[0,1]$, - $(\alpha, \beta)$ and $(\overline{\alpha}, \overline{\beta})$ satisfy the monotonicity condition (\[monqr\]), then $$\alpha=\overline{\alpha}, \; \beta=\overline{\beta}, \; U=\overline{U}.$$ Let us define for every $t\in [0,1]$ $$\varphi(t):=\int_0^t \alpha(s)ds, \; b(t):=\int_0^t \beta(s)ds.$$ Let us also define for $(x,y)$ in ${{{\bf}R}}^{N+1}$: $$\psi(x,y):=\max_{t\in [0,1]} \{ty-\varphi(t)-b(t)\cdot x\}$$ thanks to the monotonicity condition [(\[monqr\])]{}, the maximization program above is strictly concave in $t$ for every $y$ and $m$-a.e. $x$. We then remark that $Y=\alpha(U)+\beta(U)\cdot X=\varphi'(U)+b'(U)\cdot X$ is exactly the first-order condition for the above maximization problem when $(x,y)=(X,Y)$. In other words, we have $$\label{ineqq} \psi(x,y)+b(t)\cdot x + \varphi(t)\ge ty, \; \forall (t,x,y)\in [0,1]\times {{{\bf}R}}^N\times {{{\bf}R}}$$ with equality for $(x,y,t)=(X,Y,U)$ i.e. $$\label{eqas} \psi(X,Y)+b(U)\cdot X + \varphi(U)=UY, \; \mbox{ a.s.
}$$ Using the fact that ${\mathop{\mathrm{Law}}\nolimits}(U)={\mathop{\mathrm{Law}}\nolimits}({\overline{U}})$ and the fact that mean-independence gives ${{{\bf}E}}(b(U)\cdot X)={{{\bf}E}}(b({\overline{U}})\cdot X)=0$, we have $${{{\bf}E}}(UY)={{{\bf}E}}( \psi(X,Y)+b(U)\cdot X + \varphi(U))= {{{\bf}E}}( \psi(X,Y)+b({\overline{U}})\cdot X + \varphi({\overline{U}})) \ge {{{\bf}E}}({\overline{U}}Y)$$ but reversing the roles of $U$ and ${\overline{U}}$, we also have ${{{\bf}E}}(UY)\le {{{\bf}E}}({\overline{U}}Y)$ and then $${{{\bf}E}}({\overline{U}}Y)= {{{\bf}E}}( \psi(X,Y)+b({\overline{U}})\cdot X + \varphi({\overline{U}}))$$ so that, thanks to inequality [(\[ineqq\])]{} $$\psi(X,Y)+b({\overline{U}})\cdot X + \varphi({\overline{U}})={\overline{U}}Y, \; \mbox{ a.s. }$$ which means that ${\overline{U}}$ solves $\max_{t\in [0,1]} \{tY-\varphi(t)-b(t)\cdot X\}$ which, by strict concavity, admits $U$ as its unique solution. This proves that $U={\overline{U}}$ and thus $$\alpha(U)-{\overline{\alpha}}(U)=({\overline{\beta}}(U)-\beta(U))\cdot X.$$ Taking the conditional expectation of both sides with respect to $U$, we then obtain $\alpha={\overline{\alpha}}$ and thus $\beta(U)\cdot X={\overline{\beta}}(U)\cdot X$ a.s. We then compute $$\begin{split} F(x, \alpha(t)+\beta(t)\cdot x)&= {{\bf}{P}}(\alpha(U)+\beta(U)\cdot X \le \alpha(t)+\beta(t)\cdot x \vert X=x) \\ &={{\bf}{P}}( \alpha(U)+ \beta(U)\cdot x \le \alpha(t)+\beta(t)\cdot x \vert X=x)\\ &={{\bf}{P}}(U\le t \vert X=x) \end{split}$$ and similarly $F(x, \alpha(t)+{\overline{\beta}}(t)\cdot x)={{\bf}{P}}(U\le t \vert X=x)=F(x, \alpha(t)+\beta(t)\cdot x)$. Since $F(x,.)$ is increasing for $m$-a.e. $x$, we deduce that $\beta(t)\cdot x={\overline{\beta}}(t)\cdot x$ for $m$-a.e. $x$ and every $t\in[0,1]$. Finally, the previous considerations and the nondegeneracy of $m$ enable us to conclude that $\beta={\overline{\beta}}$.
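The construction in the uniqueness proof can be illustrated numerically: from (hypothetical, illustrative) coefficients $\alpha,\beta$ satisfying the monotonicity condition on the $x$-range used below, build $\varphi$ and $b$ by integration and $\psi$ by the pointwise maximum, and check inequality (\[ineqq\]) on a grid.

```python
# Sketch of the construction in the proof: build phi, b by integration of the
# (illustrative) coefficients alpha, beta, and psi(x, y) as the pointwise
# maximum over t of t*y - phi(t) - b(t)*x, then check (ineqq):
#   psi(x, y) + b(t) x + phi(t) >= t y   on a grid of (t, x, y).
import numpy as np

def alpha(t): return 2.0 * t          # illustrative: alpha(t) + beta(t) x is
def beta(t):  return 0.5 + t          # increasing in t whenever x > -2

ts = np.linspace(0.0, 1.0, 501)
dt = ts[1] - ts[0]
phi = np.concatenate([[0.0], np.cumsum(0.5 * (alpha(ts[1:]) + alpha(ts[:-1])) * dt)])
b = np.concatenate([[0.0], np.cumsum(0.5 * (beta(ts[1:]) + beta(ts[:-1])) * dt)])

gap = np.inf
for x in np.linspace(0.0, 2.0, 11):
    for y in np.linspace(-1.0, 3.0, 11):
        vals = ts * y - phi - b * x
        psi_xy = np.max(vals)                      # psi(x, y) on the t-grid
        gap = min(gap, np.min(psi_xy + b * x + phi - ts * y))
print(gap)   # zero up to rounding: (ineqq) holds, with equality at the argmax
```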
If quantile regression is under quasi-specification, the regression coefficients $(\alpha^{QR}, \beta^{QR})$ are uniquely defined, and if $Y$ can be written as $$Y=\alpha(U)+\beta(U)\cdot X$$ for $U$ uniformly distributed, $X$ being mean-independent from $U$, and $(\alpha, \beta)$ continuous such that the monotonicity condition (\[monqr\]) holds, then necessarily $$\alpha=\alpha^{QR}, \; \beta=\beta^{QR}.$$ To sum up, we have shown that quasi-specification is equivalent to the validity of the factor linear model: $$Y=\alpha(U)+\beta(U)\cdot X$$ for $(\alpha, \beta)$ continuous and satisfying the monotonicity condition (\[monqr\]) and $U$, uniformly distributed and such that $X$ is mean-independent from $U$. This has to be compared with the decomposition of paragraph \[univar1\] where $U$ is required to be independent from $X$ but the dependence of $Y$ with respect to $U$, given $X$, is given by any nondecreasing function of $U$. Global approaches and duality {#univar3} ----------------------------- Now we wish to address quantile regression in the case where neither specification nor quasi-specification can be taken for granted. In such a general situation, keeping in mind the remarks from the previous paragraphs, we can think of two natural approaches. The first one consists in studying directly the correlation maximization with a mean-independence constraint (\[maxcorrmi\]). The second one consists in getting back to the Koenker and Bassett $t$ by $t$ problem (\[dt\]) but adding as an additional global consistency constraint that $U_t$ should be nonincreasing with respect to $t$: $$\label{monconstr} \sup\{{{\bf}{E}}(\int_0^1 U_t Ydt ) \; : \: U_t \mbox{ nonincr.}, U_t\in [0,1],\; {{\bf}{E}}(U_t)=(1-t), \; {{\bf}{E}}(U_t X)=0\}$$ Our aim is to compare these two approaches (and in particular to show that the maximization problems (\[maxcorrmi\]) and (\[monconstr\]) have the same value) as well as their dual formulations.
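To make the monotonicity-constrained problem (\[monconstr\]) concrete, here is a small finite-sample, finite-grid discretization solved as a linear program; the data, grid sizes and solver are illustrative assumptions.

```python
# A small finite-sample discretization of (monconstr), solved as a linear
# program.  Variables u[i, j] play the role of U_{t_j} on sample point i; we
# maximize the empirical E(\int_0^1 U_t Y dt) subject to u in [0, 1],
# u nonincreasing in t, E(U_t) = 1 - t and E(U_t X) = 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m = 40, 5                                   # sample points, t-grid points
ts = (np.arange(m) + 0.5) / m
X = rng.normal(size=n)
X -= X.mean()                                  # centered regressor
Y = 1.0 + X + 0.5 * rng.normal(size=n)

c = -np.repeat(Y, m) / (n * m)                 # u flattened: u[i, j] -> i*m + j

A_eq, b_eq = [], []
for j in range(m):                             # E(U_{t_j}) = 1 - t_j, E(X U_{t_j}) = 0
    e = np.zeros(n * m); e[j::m] = 1.0 / n
    A_eq.append(e); b_eq.append(1.0 - ts[j])
    ex = np.zeros(n * m); ex[j::m] = X / n
    A_eq.append(ex); b_eq.append(0.0)

A_ub, b_ub = [], []
for i in range(n):                             # monotonicity: u[i, j+1] <= u[i, j]
    for j in range(m - 1):
        row = np.zeros(n * m)
        row[i * m + j + 1] = 1.0
        row[i * m + j] = -1.0
        A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0.0, 1.0), method="highs")
u = res.x.reshape(n, m)
print(res.status, -res.fun)                    # status 0 means optimal
```

The constant choice $u[i,j]=1-t_j$ is always feasible here (since $X$ is sample-centered), so the LP has a solution and its value lower-bounds nothing more than that trivial profile would.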
Before going further, let us remark that (\[maxcorrmi\]) can directly be considered in the multivariate case whereas the monotonicity constrained problem (\[monconstr\]) makes sense only in the univariate case. As proven in [@ccg], (\[maxcorrmi\]) is dual to $$\label{dualmi} \inf_{(\psi, \varphi, b)} \{{{\bf}{E}}(\psi(X,Y))+{{\bf}{E}}(\varphi(U)) \; : \; \psi(x,y)+ \varphi(u)\ge uy -b(u)\cdot x\}$$ which can be reformulated as: $$\label{dualmiref} \inf_{(\varphi, b)} \int \max_{t\in [0,1]} ( ty- \varphi(t) -b(t)\cdot x) \nu(dx, dy) +\int_0^1 \varphi(t) dt$$ in the sense that $$\label{nodualgap} \sup (\ref{maxcorrmi})=\inf(\ref{dualmi})=\inf (\ref{dualmiref}).$$ The existence of a solution to (\[dualmi\]) is not straightforward and is established under appropriate assumptions in the appendix directly in the multivariate case. The following result shows that there is a $t$-dependent reformulation of (\[maxcorrmi\]): \[treform\] The value of (\[maxcorrmi\]) coincides with $$\label{monconstr01} \sup\{{{\bf}{E}}(\int_0^1 U_t Y dt ) \; : \: U_t \mbox{ nonincr.}, U_t\in \{0,1\},\; {{\bf}{E}}(U_t)=(1-t), \; {{\bf}{E}}(U_t X)=0\}$$ Let $U$ be admissible for [(\[maxcorrmi\])]{} and define $U_t:={{\bf 1}}_{\{U>t\}}$; then $U=\int_0^1 U_t dt$ and obviously $(U_t)_t$ is admissible for [(\[monconstr01\])]{}, so that $\sup {(\ref{maxcorrmi})} \le \sup {(\ref{monconstr01})}$. Take now $(V_t)_t$ admissible for [(\[monconstr01\])]{} and let $V:=\int_0^1 V_t dt$; we then have $$V>t \Rightarrow V_t=1\Rightarrow V\ge t$$ and since ${{{\bf}E}}(V_t)=(1-t)$, this implies that $V$ is uniformly distributed and $V_t={{\bf 1}}_{\{V>t\}}$ a.s., so that ${{{\bf}E}}(X {{\bf 1}}_{\{V>t\}})=0$ which implies that $X$ is mean-independent from $V$ and thus ${{{\bf}E}}(\int_0^1 V_t Y dt)\le \sup {(\ref{maxcorrmi})}$. We conclude that $\sup {(\ref{maxcorrmi})} = \sup {(\ref{monconstr01})}$.
Let us now define $$\mathcal{C}:=\{u \; : \; [0,1]\mapsto [0,1], \mbox { nonincreasing}\}$$ Let $(U_t)_t$ be admissible for (\[monconstr\]) and set $$v_t(x,y):={{\bf}{E}}(U_t \vert X=x, Y=y), \; V_t:= v_t(X,Y)$$ it is obvious that $(V_t)_t$ is admissible for (\[monconstr\]) and by construction ${{\bf}{E}}(V_t Y)={{\bf}{E}}(U_t Y)$. Moreover, the deterministic function $(t,x,y)\mapsto v_t(x,y)$ satisfies the following conditions: $$\label{CCt} \mbox{for fixed $(x,y)$, } t\mapsto v_t(x,y) \mbox{ belongs to ${{\cal C}}$,}$$ and for a.e. $t\in [0,1]$, $$\label{moments} \int v_t(x,y) \nu(dx, dy)=(1-t), \; \int v_t(x,y) x\nu(dx, dy)=0.$$ Conversely, if $(t,x,y)\mapsto v_t(x,y)$ satisfies (\[CCt\])-(\[moments\]), $V_t:=v_t(X,Y)$ is admissible for (\[monconstr\]) and ${{\bf}{E}}(V_t Y)=\int v_t(x,y) y \nu(dx, dy)$. All this proves that $\sup(\ref{monconstr})$ coincides with $$\label{supvt} \sup_{(t,x,y)\mapsto v_t(x,y)} \int v_t(x,y) y \nu(dx, dy)dt \mbox{ subject to: } (\ref{CCt})-(\ref{moments})$$ \[equivkb\] $$\sup (\ref{maxcorrmi})=\sup (\ref{monconstr}).$$ We know from lemma \[treform\] and the remarks above that $$\sup {(\ref{maxcorrmi})}=\sup {(\ref{monconstr01})} \le \sup {(\ref{monconstr})}=\sup {(\ref{supvt})}.$$ But now we may get rid of constraints [(\[moments\])]{} by rewriting [(\[supvt\])]{} in sup-inf form as $$\begin{split} \sup_{{\quad\text{$v_t$ satisfies {(\ref{CCt})}}\quad}} \inf_{(\alpha, \beta)} \int v_t(x,y)(y-\alpha(t)-\beta(t) \cdot x) \nu(dx,dy)dt +\int_0^1 (1-t)\alpha(t) dt. \end{split}$$ Recall that one always has $\sup \inf \le \inf \sup$ so that $\sup{(\ref{supvt})}$ is less than $$\begin{split} \inf_{(\alpha, \beta)} \sup_{{\quad\text{$v_t$ satisf. {(\ref{CCt})}}\quad}} \int v_t(x,y)(y-\alpha(t)-\beta(t) \cdot x) \nu(dx,dy)dt +\int_0^1 (1-t)\alpha(t) dt\\ \le \inf_{(\alpha, \beta)} \int \Big (\sup_{v\in {{\cal C}}} \int_0^1 v(t)(y-\alpha(t)-\beta(t)x)dt \Big) \nu(dx,dy)+ \int_0^1 (1-t)\alpha(t) dt.
\end{split}$$ It follows from Lemma \[suppC\] below that, for $q\in L^1(0,1)$ defining $Q(t):=\int_0^t q(s) ds$, one has $$\sup_{v\in {{\cal C}}} \int_0^1 v(t) q(t)dt=\max_{t\in [0,1]} Q(t).$$ So setting $\varphi(t):=\int_0^t \alpha(s) ds$, $b(t):=\int_0^t \beta(s)ds$ and remarking that integrating by parts immediately gives $$\int_0^1 (1-t)\alpha(t) dt=\int_0^1 \varphi(t) dt$$ we thus have $$\begin{split} \sup_{v\in {{\cal C}}} \int_0^1 v(t)(y-\alpha(t)-\beta(t)x)dt + \int_0^1 (1-t)\alpha(t) dt\\ = \max_{t\in[0,1]} \{t y-\varphi(t)-b(t) x\} +\int_0^1 \varphi(t) dt. \end{split}$$ This yields[^7] $$\sup{(\ref{supvt})} \le \inf_{(\varphi, b)} \int \max_{t\in [0,1]} ( ty- \varphi(t) -b(t)\cdot x) \nu(dx, dy) +\int_0^1 \varphi(t) dt =\inf {(\ref{dualmiref})}$$ but we know from [(\[nodualgap\])]{} that $\inf {(\ref{dualmiref})} =\sup {(\ref{maxcorrmi})}$ which ends the proof. In the previous proof, we have used the elementary result (proven in the appendix). \[suppC\] Let $q\in L^1(0,1)$ and define $Q(t):=\int_0^t q(s) ds$ for every $t\in [0,1]$, one has $$\sup_{v\in \mathcal{C}} \int_0^1 v(t) q(t)dt=\max_{t\in [0,1]} Q(t).$$ Appendix {#appendix .unnumbered} ======== Proof of Lemma \[suppC\] {#proof-of-lemma-suppc .unnumbered} ------------------------ Since $\mathbf{1}_{[0,t]} \in \mathcal{C}$, one obviously first has $$\sup_{v\in \mathcal{C}} \int_0^1 v(s) q(s)ds \ge \max_{t\in [0,1]} \int_0^t q(s)ds=\max_{t\in [0,1]} Q(t).$$ Let us now prove the converse inequality, taking an arbitrary $v\in \mathcal{C}$. 
We first observe that $Q$ is absolutely continuous and that $v$ is of bounded variation (its derivative in the sense of distributions being a bounded nonpositive measure which we denote by $\eta$); integrating by parts and using the definition of $\mathcal{C}$ then give: $$\begin{split} \int_0^1 v(s) q(s)ds &=-\int_0^1 Q \eta + v(1^-) Q(1) \\ & \le (\max_{[0,1]} Q)\times (-\eta([0,1])) + v(1^-) Q(1) \\ &= (\max_{[0,1]} Q) (v(0^+)-v(1^-)) +v(1^-) Q(1) \\ &= (\max_{[0,1]} Q) v(0^+) + (Q(1)- \max_{[0,1]} Q) v(1^-) \\ & \le \max_{[0,1]} Q. \end{split}$$ Proof of theorem \[existdual\] {#proof-of-theorem-existdual .unnumbered} ------------------------------ Let us denote by $(0, \overline{y})$ the barycenter of $\nu$: $$\int_{\Omega} x \; \nu(dx, dy)=0, \; \int_{\Omega} y \; \nu(dx, dy)=:\overline{y}$$ and observe that $(0, \overline{y})\in \Omega$ (otherwise, by convexity, $\nu $ would be supported on $\partial \Omega$ which would contradict our assumption that $\nu\in L^{\infty}(\Omega)$). We wish to prove the existence of optimal potentials for the problem $$\label{duallike} \inf_{\psi, \varphi, b } \int_{\Omega} \psi(x,y) d \nu(x,y) + \int_{[0,1]^d} \varphi(u) d\mu(u)$$ subject to the pointwise constraint that $$\label{const} \psi(x,y)+\varphi(u)\ge u\cdot y -b(u)\cdot x, \; (x,y)\in \overline{\Omega}, \; u\in [0,1]^d.$$ Of course, we can take $\psi$ that satisfies $$\psi(x,y):=\sup_{u\in [0,1]^d} \{ u\cdot y -b(u)\cdot x-\varphi(u)\}$$ so that $\psi$ can be chosen convex and $1$-Lipschitz with respect to $y$.
In particular, we have $$\label{lip} \psi(x,\overline{y})-\vert y-\overline{y} \vert \le \psi(x,y) \leq \psi(x,\overline{y})+\vert y-\overline{y} \vert.$$ The problem being invariant by the transform $(\psi, \varphi)\to (\psi+C, \varphi-C)$ ($C$ being an arbitrary constant), we can add as a normalization the condition that $$\label{normaliz} \psi(0, \overline{y})=0.$$ This normalization and the constraint (\[const\]) imply that $$\label{fipos} \varphi(t)\ge t\cdot \overline{y} -\psi(0, \overline{y}) \ge -\vert \overline{y} \vert.$$ We note that there is one extra invariance of the problem: if one adds an affine term $q\cdot x$ to $\psi$ this does not change the cost and neither does it affect the constraint, provided one modifies $b$ accordingly by subtracting from it the constant vector $q$. Take then $q$ in the subdifferential of $x\mapsto \psi(x,\overline{y})$ at $0$ and change $\psi$ into $\psi-q\cdot x$; we obtain a new potential with the same properties as above and with the additional property that $\psi(.,\overline{y})$ is minimal at $x=0$, and thus $\psi(x,\overline{y})\ge 0$. Together with (\[lip\]) this gives the lower bound $$\label{lb} \psi(x,y)\ge -\vert y-\overline{y} \vert \ge -C$$ where the bound comes from the boundedness of $\Omega$ (from now on, $C$ will denote a generic constant, possibly changing from one line to another). Now take a minimizing sequence $(\psi_n, \varphi_n, b_n)\in C(\overline{\Omega}, {{\bf}{R}})\times C([0,1]^d, {{\bf}{R}})\times C([0,1]^d, {{\bf}{R}}^N)$ where for each $n$, $\psi_n$ has been chosen with the same properties as above. Since $\varphi_n$ and $\psi_n$ are bounded from below ($\varphi_n \ge -\vert \overline{y}\vert$ and $\psi_n \ge -C$) and since the sequence is minimizing, we deduce immediately that $\psi_n$ and $\varphi_n$ are bounded sequences in $L^1$.
Let $z=(x,y)\in \Omega$ and $r>0$ be such that the distance between $z$ and the complement of $\Omega$ is at least $2r$, (so that $B_r(z)$ is in the set of points that are at least at distance $r$ from $\partial \Omega$), by assumption there is an $\alpha_r>0$ such that $\nu \ge \alpha_r$ on $B_r(z)$. We then deduce from the convexity of $\psi_n$: $$C \le \psi_n(z)\le \frac{1}{\vert B_r(z)\vert }\int_{B_r(z)} \psi_n \leq \frac{1}{\vert B_r(z)\vert \alpha_r } \int_{B_r(z)} \vert \psi_n\vert \nu \le \frac{1}{\vert B_r(z)\vert \alpha_r } \Vert \psi_n \Vert_{L^1(\nu)}$$ so that $\psi_n$ is actually bounded in $L^{\infty}_{{\mathrm{loc}}}$ and by convexity, we also have $$\Vert \nabla \psi_n \Vert_{L^{\infty}(B_r(z))} \le \frac{2}{R-r} \Vert \psi_n \Vert_{L^{\infty}(B_R(z))}$$ whenever $R>r$ and $B_R(z)\subset \Omega$ (see for instance Lemma 5.1 in [@cg] for a proof of such bounds). We can thus conclude that $\psi_n$ is also locally uniformly Lipschitz. Therefore, thanks to Ascoli’s theorem, we can assume, taking a subsequence if necessary, that $\psi_n$ converges locally uniformly to some potential $\psi$. Let us now prove that $b_n$ is bounded in $L^1$, for this take $r>0$ such that $B_{2r}(0, \overline{y})$ is included in $\Omega$. For every $x\in B_r(0)$, any $t\in[0,1]^d $ and any $n$ we then have $$\begin{split} -b_n(t) \cdot x \le \varphi_n(t)-t \cdot \overline{y} + \Vert \psi_n \Vert_{L^{\infty} (B_r(0, \overline{y}))} \le C+\varphi_n(t) \end{split}$$ maximizing in $x \in B_r(0)$ immediately gives $$\vert b_n(t )\vert r \leq C +\varphi_n(t).$$ From which we deduce that $b_n$ is bounded in $L^1$ since $\varphi_n$ is. From Komlos’ theorem (see [@komlos]), we may find a subsequence such that the Cesaro means $$\frac{1}{n} \sum_{k=1}^n \varphi_k, \; \frac{1}{n} \sum_{k=1}^n b_k$$ converge a.e. respectively to some $\varphi$ and $b$. 
Clearly $\psi$, $\varphi$ and $b$ satisfy the linear constraint (\[const\]), and since the sequence of Cesaro means $(\psi^{\prime }_n, \varphi^{\prime }_n, b^{\prime }_n):= n^{-1}\sum_{k=1}^n (\psi_k, \varphi_k, b_k)$ is also minimizing, we deduce from Fatou's Lemma $$\begin{split} &\int_{\Omega} \psi(x,y) d \nu(x,y) + \int_{[0,1]^d} \varphi(u) d\mu(u) \\ & \leq \liminf_n \int_{\Omega} \psi^{\prime }_n(x,y) d \nu(x,y) + \int_{[0,1]^d} \varphi^{\prime }_n(u) d\mu(u)=\inf(\ref{duallike}) \end{split}$$ which ends the existence proof. [99]{} A. Belloni, R. L. Winkler, On Multivariate Quantiles Under Partial Orders, The Annals of Statistics, **39** (2), 1125–1179 (2011). Y. Brenier, Polar factorization and monotone rearrangement of vector-valued functions, Comm. Pure Appl. Math., **44** (4), 375–417 (1991). G. Carlier, A. Galichon, Exponential convergence for a convexifying equation, ESAIM, Control, Optimisation and Calculus of Variations, **18** (3), 611–620 (2012). G. Carlier, V. Chernozhukov, A. Galichon, Vector quantile regression: an optimal transport approach, The Annals of Statistics, **44** (3), 1165–1192 (2016). I. Ekeland, A. Galichon, M. Henry, Comonotonic measures of multivariate risks, Math. Finance, **22** (1), 109–132 (2012). I. Ekeland, R. Temam, *Convex Analysis and Variational Problems*, Classics in Mathematics, Society for Industrial and Applied Mathematics, Philadelphia, (1999). A. Galichon, *Optimal Transport Methods in Economics*, Princeton University Press. A. Galichon, M. Henry, Dual theory of choice with multivariate risks, J. Econ. Theory, **147** (4), 1501–1516 (2012). M. Hallin, D. Paindaveine, M. Siman, Multivariate quantiles and multiple-output regression quantiles: From $L^1$ optimization to halfspace depth, The Annals of Statistics, **38** (2), 635–669 (2010). R. Koenker, G. Bassett, Regression Quantiles, Econometrica, **46**, 33–50 (1978). J.
Komlos, A generalization of a problem of Steinhaus, *Acta Mathematica Academiae Scientiarum Hungaricae*, **18** (1–2), 217–229 (1967). G. Puccetti, M. Scarsini, Multivariate comonotonicity, *Journal of Multivariate Analysis*, **101**, 291–304 (2010). J. V. Ryff, Measure preserving Transformations and Rearrangements, *J. Math. Anal. and Applications*, **31**, 449–458 (1970). F. Santambrogio, *Optimal Transport for Applied Mathematicians*, Progress in Nonlinear Differential Equations and Their Applications 87, Birkhäuser Basel, 2015. C. Villani, *Topics in optimal transportation*, Graduate Studies in Mathematics, 58, American Mathematical Society, Providence, RI, 2003. C. Villani, *Optimal transport: Old and New*, Grundlehren der mathematischen Wissenschaften, Springer-Verlag, Heidelberg, 2009. [^1]: [CEREMADE, UMR CNRS 7534, Université Paris IX Dauphine, Pl. de Lattre de Tassigny, 75775 Paris Cedex 16, FRANCE, and MOKAPLAN Inria Paris, `carlier@ceremade.dauphine.fr`]{} [^2]: [Department of Economics, MIT, 50 Memorial Drive, E52-361B, Cambridge, MA 02142, USA, `vchern@mit.edu`]{} [^3]: [Economics Department and Courant Institute of Mathematical Sciences, NYU, 70 Washington Square South, New York, NY 10013, USA `ag133@nyu.edu`.]{} [^4]: There is actually an important literature that aims at generalizing the notion of quantile to a multidimensional setting and various different approaches have been proposed; see in particular [@belloni], [@hallin], [@PuccettiScarsini] and the references therein. [^5]: The fact that there exists such a triple follows from the nonatomicity of the underlying space. [^6]: Uniqueness will be discussed later on. [^7]: The functions $\varphi$ and $b$ constructed above vanish at $0$ and are absolutely continuous but this is by no means a restriction in the minimization problem [(\[dualmiref\])]{} as explained in paragraph \[mvqr2\].
--- abstract: 'Disk scale length $r_d$ and central surface brightness $\mu_0$ for a sample of 29955 bright disk galaxies from the Sloan Digital Sky Survey have been analysed. Cross correlation of the SDSS sample with the LEDA catalogue allowed us to investigate the variation of the scale lengths for different types of disk/spiral galaxies and present distributions and typical trends of scale lengths in all the SDSS bands, with linear relations that connect the scale lengths in one passband to another. We use the volume corrected results in the $r$-band and revisit the relation between these parameters and the galaxy morphology, and find the average values $\langle r_d\rangle = 3.8\pm 2.1$ kpc and $\langle\mu_0\rangle=20.2\pm 0.7$ mag arcsec$^{-2}$. The derived scale lengths presented here are representative for a typical galaxy mass of $10^{10.8} \rm{~M}_\odot$, and the RMS dispersion is larger for more massive galaxies. We analyse the $r_d$–$\mu_0$ plane and further investigate the Freeman Law and confirm that it indeed defines an upper limit for $\mu_0$ in bright disks ($r_\mathrm{mag}<17.0$), and that disks in late type spirals ($T \ge 6$) have fainter central surface brightness. Our results are based on a sample of galaxies in the local universe ($z< 0.3$) that is two orders of magnitude larger than any sample previously studied, and deliver statistically significant results that provide a comprehensive test bed for future theoretical studies and numerical simulations of galaxy formation and evolution.' --- Overview ======== The mass distribution of a disk is set by the scale length $r_d$, and in the exponential case, 60% of the total mass is confined within two scale lengths and 90% within four scale lengths. Moreover, the angular momentum of a disk is set by $r_d$ and the mass distribution of its host halo, and the fact that the angular momentum vectors are aligned suggests that there is a physical relation between the two.
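The quoted fractions follow directly from the exponential profile $\Sigma(r)=\Sigma_0\,e^{-r/r_d}$: integrating $2\pi r\,\Sigma(r)$ out to $R$ gives $M(<R)/M_\mathrm{tot}=1-(1+x)e^{-x}$ with $x=R/r_d$, which the short check below evaluates.

```python
# Quick check of the quoted enclosed-mass fractions for an exponential disk,
# Sigma(r) = Sigma_0 exp(-r / r_d): integrating 2 pi r Sigma(r) out to R gives
# M(<R) / M_tot = 1 - (1 + x) exp(-x), with x = R / r_d.
import math

def enclosed_fraction(x):
    """Mass fraction of an exponential disk inside R = x * r_d."""
    return 1.0 - (1.0 + x) * math.exp(-x)

print(round(enclosed_fraction(2.0), 3))   # 0.594 -> "60% within two scale lengths"
print(round(enclosed_fraction(4.0), 3))   # 0.908 -> "90% within four scale lengths"
```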
During the formation process, mergers and associated star formation and feedback processes play a crucial role in the resulting structure; however, the observed sizes of disks suggest that, under the combination of these physical processes, galactic disks have not lost much of the original angular momentum acquired from cosmological torques (White & Rees 1978). A large $r_d$ disk forms when the disk mass is smaller than the halo mass over the disk region, and vice versa, a small $r_d$ disk forms when the mass of the disk dominates the mass of the halo in any part of the disk. The self gravitating disk will also modify the shape of the rotation curve near the centre of a galaxy and the disk is then set to undergo secular evolution. The natural implication of this scenario is that the $r_d$ dictates the life of a disk, and consequently, is a prime factor which determines the position of a galaxy on the Hubble sequence. Here we analyse the $r_d$ and $\mu_0$ from an unprecedentedly large sample of bright disk galaxies in the nearby universe ($z< 0.3$) using the Sloan Digital Sky Survey (SDSS) Data Release 6 (York et al. 2000; Adelman-McCarthy et al. 2008). We have used the Virtual Observatory tools and services to retrieve data in all ($u$, $g$, $r$, $i$, and $z$) SDSS bands and used the LEDA catalogue (Paturel et al. 2003) to retrieve morphological classification information about our sample galaxies, and those with types defined as Sa or later are hereafter referred to as disk galaxies. In the $g$, $i$, and $z$-bands, $\approx 27000$–30000 galaxies were analysed, and in the $u$-band, $r_d$ and $\mu_0$ were robustly derived for a few hundred objects. Throughout this presentation, we use disk parameters in the $r$-band to provide a comprehensive test bed for forthcoming cosmological simulations (or analytic/semi-analytic models) of galaxy formation and evolution. Further details have been presented in Fathi et al. (2010) and Fathi (2010).
One prominent indicator for a smooth transition from spiral toward S0 and disky ellipticals is provided by the $r_d$–$\mu_0$ diagram, where $\mu_0$ is the central surface brightness of the disk: spirals and S0s are mixed and disky ellipticals populate the upper left corner of this diagram. Another instructive relation is the Freeman Law (Freeman 1970) which relates $\mu_0$ to the galaxy morphological type. Although some studies have found that the Freeman Law is an artefact due to selection effects (e.g., Disney 1976), recent works have shown that proper consideration of selection effects can be combined with kinematic studies to explore an evolutionary sequence. In the comparison between theory and observations, two issues complicate matters. On the theory side, mapping between initial halo angular momentum and $r_d$ is not trivial, partly due to the fact that commonly the initial specific angular momentum distribution of the visible and dark component favours disks which are more centrally concentrated than exponential. Observationally, comprehensive samples have not yet been studied, and the mixture of different species such as low and high surface brightness galaxies complicates the measurements of disk parameters. ![image](Fathi_ScaleLengths_Fig1.jpg){width=".69\textwidth"} ![image](Fathi_ScaleLengths_Fig2.jpg){width=".99\textwidth"} Freeman Law and $r_d$–$\mu_0$ Plane ============================== The Freeman Law defines an upper limit for $\mu_0$ and is hereby confirmed by our analysis of the largest sample ever studied in this context (see Fig. \[fig:mu0rd\]). However, disk galaxies with morphological type $T\ge6$ have fainter $\mu_0$. These results in the $r$-band are comparable with those in other SDSS bands (Fathi 2010). Combined with our previous results, i.e. that $r_d$ varies by two orders of magnitude independent of morphological type, this result implies that disks with large scale lengths do not necessarily have higher $\mu_0$.
The $\mu_0$ has a Gaussian distribution with $\langle\mu_0\rangle=20.2\pm0.7$ mag arcsec$^{-2}$, with a linear trend seen in Fig. \[fig:mu0rd\] (applying different internal extinction parameters changes this mean value by 0.2 mag arcsec$^{-2}$). The top right corner is enclosed by the constant disk luminosity line, void of objects. The top right corner is also the region where the disk luminosity exceeds $3L^\star$, thus the absence of galaxies in this region cannot be a selection effect since big bright galaxies cannot be missed in our diameter selected sample. However, it is clear that selection effects play a role in populating the lower left corner of this diagram. Analogous to the $r_d$–$\mu_0$ plane, the Tully-Fisher relation implies lines of constant maximum speed a disk can reach. Our 282 well-classified galaxies (illustrated with coloured dots in Fig. \[fig:mu0rd\]) follow the results of, e.g., Graham and de Blok (2001), and confirm that disks of intermediate and early type spirals have higher $\mu_0$ while the late type spirals have lower $\mu_0$, and they populate the lower left corner of the diagram. Intermediate morphologies are mixed along a linear slope of 2.5 in the $r_d$–$\mu_0$ plane, coinciding with the region populated by S0s as shown by Kent (1985) and by disky ellipticals as shown by Scorza & Bender (1995). The $r_d$, on the other hand, does not vary as a function of morphological type (Fathi et al. 2010). Investigating galaxy masses, we find that a fourth quantity is equally important in this analysis. The total galaxy mass separates the data along lines parallel to the dashed lines drawn in Fig. \[fig:mu0rd\]. This is indeed also confirmed by the Tully-Fisher relation. Moreover, we validate that the lower mass galaxies are those with type $\ge 6$. Investigation of the asymmetry and concentration in this context further confirms the expected trends, i.e.
that these parameters increase for later types, and central stellar velocity dispersion decreases for later type spiral galaxies; however, we note that these correlations are well below the one-sigma confidence level. The higher asymmetry galaxies populate a region more extended toward the bottom right corner with respect to the low asymmetry galaxies. The middle panel shows an opposite trend, and the bottom panel shows that larger velocity dispersion has the same effect as asymmetry (see Fathi 2010 for further details). In the two relations analysed here, we find typically larger scatter than previous analyses, and although our sample represents bright disks, the sample size adds credibility to our findings. These results are fully consistent with the common understanding of the $r_d$–$\mu_0$ plane and the Freeman Law, and they contribute to past results since they are based on a sample which is two orders of magnitude greater than any previous study, with more than five times more late type spiral galaxies than any previous analysis.\ [*Acknowledgements:* ]{} I thank the IAU, LOC and my colleagues in the SOC for a stimulating symposium, and Mark Allen, Evanthia Hatziminaoglou, Thomas Boch, Reynier Peletier and Michael Gatchell for their invaluable input at various stages of this project. Adelman-McCarthy, J. K. et al. 2008, ApJS, 175, 297 Disney, M. 1976, Nature, 263, 573 Graham, A. W., de Blok, W. 2001, ApJ, 556, L177 Fathi, K. et al. 2010, MNRAS, 406, 1595 Fathi, K. 2010, ApJ, 722, L120 Freeman, K. C. 1970, ApJ, 160, 811 Kent, S. 1985, ApJS, 59, 115 Paturel G. et al. 2003, A&A, 412, 45 Scorza, C., Bender, R. 1995, A&A, 293, 20 White, S. D. M., Rees, M. J. 1978, MNRAS, 183, 341 York, D. G. et al. 2000, AJ, 120, 1579
--- abstract: 'Mesh is an important and powerful type of data for 3D shapes and is widely studied in the fields of computer vision and computer graphics. Regarding the task of 3D shape representation, there have been extensive research efforts concentrating on how to represent 3D shapes well using volumetric grid, multi-view and point cloud. However, there has been little effort on using mesh data in recent years, due to the complexity and irregularity of mesh data. In this paper, we propose a mesh neural network, named MeshNet, to learn 3D shape representation from mesh data. In this method, face-unit and feature splitting are introduced, and a general architecture with available and effective blocks is proposed. In this way, MeshNet is able to solve the complexity and irregularity problem of mesh and conduct 3D shape representation well. We have applied the proposed MeshNet method to the applications of 3D shape classification and retrieval. Experimental results and comparisons with the state-of-the-art methods demonstrate that the proposed MeshNet can achieve satisfying 3D shape classification and retrieval performance, which indicates the effectiveness of the proposed method on 3D shape representation.' author: - | Yutong Feng,^1^ Yifan Feng,^2^ Haoxuan You,^1^ Xibin Zhao^1^[^1], Yue Gao^1^\ ^1^BNRist, KLISS, School of Software, Tsinghua University, China.\ ^2^School of Information Science and Engineering, Xiamen University\ {feng-yt15, zxb, gaoyue}@tsinghua.edu.cn, {evanfeng97, haoxuanyou}@gmail.com\ title: 'MeshNet: Mesh Neural Network for 3D Shape Representation' --- Introduction ============ Three-dimensional (3D) shape representation is one of the most fundamental topics in the fields of computer vision and computer graphics.
In recent years, with the increasing applications of 3D shapes, extensive efforts [@wu20153d; @chang2015shapenet] have been concentrated on 3D shape representation, and the proposed methods have been successfully applied to different tasks, such as classification and retrieval. For 3D shapes, there are several popular types of data, including volumetric grid, multi-view, point cloud and mesh. With the success of deep learning methods in computer vision, many neural network methods have been introduced to conduct 3D shape representation using volumetric grid [@wu20153d; @maturana2015voxnet], multi-view [@su2015multi] and point cloud [@qi2017pointnet]. PointNet [@qi2017pointnet] proposes to learn on point cloud directly and solves the disorder problem with per-point Multi-Layer-Perceptron (MLP) and a symmetry function. As shown in Figure 1, although there have been recent successful methods using the types of volumetric grid, multi-view and point cloud, for the mesh data there are only early methods using handcrafted features directly, such as the Spherical Harmonic descriptor (SPH) [@kazhdan2003rotation], which limits the applications of mesh data. ![**The developing history of 3D shape representation using different types of data.** The X-axis indicates the proposed time of each method, and the Y-axis indicates the classification accuracy.[]{data-label="fig:intro"}](history2.pdf){width="3.6in"} ![image](pipeline4.pdf){width="6.5in"} Mesh data of 3D shapes is a collection of vertices, edges and faces, which is dominantly used in computer graphics for rendering and storing 3D models. Mesh data has the properties of complexity and irregularity. The complexity problem is that mesh consists of multiple elements, and different types of connections may be defined among them. The irregularity is another challenge for mesh data processing, which indicates that the number of elements in mesh may vary dramatically among 3D shapes, and permutations of them are arbitrary.
In spite of these problems, mesh has a stronger ability for 3D shape description than other types of data. Under such circumstances, how to effectively represent 3D shapes using mesh data is an urgent and challenging task. In this paper, we present a mesh neural network, named MeshNet, that learns on mesh data directly for 3D shape representation. To deal with the challenges in mesh data processing, the faces are regarded as the unit and connections between faces sharing common edges are defined, which enables us to solve the complexity and irregularity problem with per-face processes and a symmetry function. Moreover, the feature of faces is split into spatial and structural features. Based on these ideas, we design the network architecture, with two blocks named spatial and structural descriptors for learning the initial features, and a mesh convolution block for aggregating neighboring features. In this way, the proposed method is able to solve the complexity and irregularity problem of mesh and represent 3D shapes well. We apply our MeshNet method to the tasks of 3D shape classification and retrieval on the ModelNet40 [@wu20153d] dataset. The experimental results show that MeshNet achieves a significant improvement on 3D shape classification and retrieval using mesh data and comparable performance with recent methods using other types of 3D data. The key contributions of our work are as follows: - We propose a neural network using mesh for 3D shape representation and design blocks for capturing and aggregating features of polygon faces in 3D shapes. - We conduct extensive experiments to evaluate the performance of the proposed method, and the experimental results show that the proposed method performs well on the 3D shape classification and retrieval tasks. Related Work ============ Mesh Feature Extraction ----------------------- There are plenty of handcrafted descriptors that extract features from mesh.
@lien1984symbolic calculate moments of each tetrahedron in mesh, and @zhang2001efficient develop more functions applied to each triangle and add all the resulting values as features. @hubeli2001multiresolution extend the features of surfaces to a multiresolution setting to solve the unstructured problem of mesh data. In SPH [@kazhdan2003rotation], a rotation invariant representation is presented with existing orientation dependent descriptors. Mesh difference of Gaussians (DOG) [@zaharescu2009surface] introduces Gaussian filtering to shape functions. Intrinsic shape context (ISC) descriptor [@kokkinos2012intrinsic] develops a generalization to surfaces and solves the problem of orientational ambiguity. Deep Learning Methods for 3D Shape Representation ------------------------------------------------- With the construction of large-scale 3D model datasets, numerous deep descriptors of 3D shapes have been proposed. Based on different types of data, these methods can be categorized into four types. *Voxel-based method.* 3DShapeNets [@wu20153d] and VoxNet [@maturana2015voxnet] propose to learn on volumetric grids, which partition the space into regular cubes. However, they introduce extra computation cost due to the sparsity of data, which restricts them from being applied to more complex data. Field probing neural networks (FPNN) [@li2016fpnn], Vote3D [@wang2015voting] and Octree-based convolutional neural network (OCNN) [@wang2017cnn] address the sparsity problem, while they are still restricted as the input gets larger. *View-based method.* Using 2D images of 3D shapes to represent them is proposed by Multi-view convolutional neural networks (MVCNN) [@su2015multi], which aggregates 2D views from a loop around the object and applies a 2D deep learning framework to them.
Group-view convolutional neural networks (GVCNN) [@feng2018gvcnn] proposes a hierarchical framework, which divides views into different groups with different weights to generate a more discriminative descriptor for a 3D shape. This type of method also adds considerable computation cost and is hard to apply to tasks in larger scenes. *Point-based method.* Due to the irregularity of data, point cloud is not suitable for previous frameworks. PointNet [@qi2017pointnet] solves this problem with per-point processes and a symmetry function, while it ignores the local information of points. PointNet++ [@qi2017pointnet++] adds aggregation with neighbors to solve this problem. Self-organizing network (SO-Net) [@li2018so], kernel correlation network (KCNet) [@shen2018mining] and PointSIFT [@jiang2018pointsift] develop more detailed approaches for capturing local structures with nearest neighbors. Kd-Net [@klokov2017escape] proposes another approach to solve the irregularity problem using a k-d tree. *Fusion method.* These methods learn on multiple types of data and fuse their features together. FusionNet [@hegde2016fusionnet] uses the volumetric grid and multi-view for classification. Point-view network (PVNet) [@you2018pvnet] proposes the embedding attention fusion to exploit both point cloud data and multi-view data. Method ====== In this section, we present the design of MeshNet. Firstly, we analyze the properties of mesh, propose the methods for designing the network and reorganize the input data. We then introduce the overall architecture of MeshNet and some blocks for capturing features of faces and aggregating them with neighbor information, which are then discussed in detail. Overall Design of MeshNet ------------------------- We first introduce the mesh data and analyze its properties. Mesh data of 3D shapes is a collection of vertices, edges and faces, in which vertices are connected with edges and closed sets of edges form faces.
In this paper, we only consider triangular faces. Mesh data is dominantly used for storing and rendering 3D models in computer graphics, because it provides an approximation of the smooth surfaces of objects and simplifies the rendering process. Numerous studies on 3D shapes in the fields of computer graphics and geometric modeling are based on mesh. ![**Initial values of each face.** There are four types of initial values, divided into two parts: center, corner and normal are the face information, and neighbor index is the neighbor information.[]{data-label="fig:input"}](input_data3.pdf){width="3.2in"} Mesh data shows a stronger ability to describe 3D shapes compared with other popular types of data. Volumetric grid and multi-view are data types defined to avoid the irregularity of native data such as mesh and point cloud, while they lose some natural information of the original object. For point cloud, there may be ambiguity caused by random sampling, and the ambiguity is more obvious with a smaller number of points. In contrast, mesh is clearer and loses less natural information. Besides, when capturing local structures, most methods based on point cloud collect the nearest neighbors to approximately construct an adjacency matrix for further processing, while in mesh there are explicit connection relationships to show the local structure clearly. However, mesh data is also more irregular and complex due to its multiple compositions and varying numbers of elements. To make full use of the advantages of mesh and solve the problem of its irregularity and complexity, we propose two key ideas of design: - **Regard face as the unit.** Mesh data consists of multiple elements and connections may be defined among them. To simplify the data organization, we regard face as the only unit and define a connection between two faces if they share a common edge. There are several advantages of this simplification.
First, one triangular face can connect with no more than three faces, which makes the connection relationship regular and easy to use. More importantly, we can solve the disorder problem with per-face processes and a symmetry function, similar to PointNet [@qi2017pointnet]. And intuitively, face also contains more information than vertex and edge. - **Split feature of face.** Though the above simplification enables us to consume mesh data similarly to point-based methods, there are still some differences between point-unit and face-unit because face contains more information than point. We only need to know “where you are" for a point, while we also want to know “what you look like" for a face. Correspondingly, we split the feature of faces into **spatial feature** and **structural feature**, which helps us to capture features more explicitly. Following the above ideas, we transform the mesh data into a list of faces. For each face, we define its initial values, which are divided into two parts (illustrated in Fig \[fig:input\]): - Face Information: - **Center**: coordinate of the center point - **Corner**: vectors from the center point to three vertices - **Normal**: the unit normal vector - Neighbor Information: - **Neighbor Index**: indexes of the connected faces (filled with the index of itself if the face connects with fewer than three faces) At the end of this section, we present the overall architecture of MeshNet, illustrated in Fig \[fig:pipeline\]. A list of faces with initial values is fed into two blocks, named **spatial descriptor** and **structural descriptor**, to generate the initial spatial and structural features of faces. The features are then passed through some **mesh convolution** blocks to aggregate neighboring information, which take the two types of features as input and output new features of them.
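As an illustration, the per-face initial values described above (center, corner vectors, unit normal and neighbor index) can be derived from a plain vertex/face list. The following is a minimal NumPy sketch, not code from the paper; `face_inputs` is a hypothetical helper name:

```python
import numpy as np
from collections import defaultdict

def face_inputs(verts, faces):
    """Per-face initial values: center, corner vectors, unit normal,
    and neighbor indexes (padded with the face's own index)."""
    tri = verts[faces]                      # (m, 3, 3): vertex coords per face
    center = tri.mean(axis=1)               # (m, 3)
    corner = tri - center[:, None, :]       # (m, 3, 3): vectors center -> vertices
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normal = n / np.linalg.norm(n, axis=1, keepdims=True)
    # two faces are neighbors if they share a common edge
    edge_to_faces = defaultdict(list)
    for i, f in enumerate(faces):
        for a, b in ((0, 1), (1, 2), (2, 0)):
            edge_to_faces[frozenset((f[a], f[b]))].append(i)
    neighbors = []
    for i, f in enumerate(faces):
        nb = [j for a, b in ((0, 1), (1, 2), (2, 0))
              for j in edge_to_faces[frozenset((f[a], f[b]))] if j != i]
        neighbors.append((nb + [i, i, i])[:3])  # pad with own index
    return center, corner, normal, np.array(neighbors)
```

On a closed mesh every face has exactly three neighbors; the padding only matters for open meshes, matching the filling rule stated above.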
It is noted that all the processes above work on each face respectively and share the same parameters. After these processes, a pooling function is applied to the features of all faces to generate the global feature, which is used for further tasks. The above blocks will be discussed in the following sections. Spatial and Structural Descriptors ---------------------------------- We split the feature of faces into spatial feature and structural feature. The spatial feature is expected to be relevant to the spatial position of faces, and the structural feature is relevant to the shape information and local structures. In this section, we present the design of two blocks, named spatial and structural descriptors, for generating the initial spatial and structural features. #### Spatial descriptor The only input value relevant to spatial position is the center value. In this block, we simply apply a shared MLP to each face’s center, similar to the methods based on point cloud, and output the initial spatial feature. #### Structural descriptor: face rotate convolution We propose two types of structural descriptors, and the first one is named face rotate convolution, which captures the “inner” structure of faces and focuses on the shape information of faces. The input of this block is the corner value. The operation of this block is illustrated in Fig \[fig:rc\]. Suppose the corner vectors of a face are $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$; we define the output value of this block as follows: $$g(\frac{1}{3}(f(\mathbf{v}_1, \mathbf{v}_2) + f(\mathbf{v}_2, \mathbf{v}_3) + f(\mathbf{v}_3, \mathbf{v}_1))),$$ where $f(\cdot, \cdot) : \mathbb{R}^3 \times \mathbb{R}^3 \rightarrow \mathbb{R}^{K_1}$ and $g(\cdot) : \mathbb{R}^{K_1} \rightarrow \mathbb{R}^{K_2}$ are any valid functions. This process is similar to a convolution operation, with two vectors as the kernel size, one vector as the stride and $K_1$ as the number of kernels, except that the translation of kernels is replaced by rotation.
The kernels, represented by $f(\cdot, \cdot)$, rotate through the face and work on two vectors each time. With the above process, we eliminate the influence caused by the order of processing corners, avoid individually considering each corner and also leave full space for mining features inside faces. After the rotate convolution, we apply an average pooling and a shared MLP as $g(\cdot)$ to each face, and output features of length $K_2$. ![**The face rotate convolution block.** Kernels rotate through the face and are applied to pairs of corner vectors for the convolution operation.[]{data-label="fig:rc"}](rc.pdf){width="3.0in"} #### Structural descriptor: face kernel correlation Another structural descriptor we design is the face kernel correlation, aiming to capture the “outer” structure of faces and explore the environments where faces are located. The method is inspired by KCNet [@shen2018mining], which uses kernel correlation (KC) [@tsin2004correlation] for mining local structures in point clouds. KCNet learns kernels representing different spatial distributions of point sets, and measures the geometric affinities between kernels and neighboring points for each point to indicate the local structure. However, this method is also restricted by the ambiguity of point cloud, and may achieve better performance on mesh. In our face kernel correlation, we select the normal values of each face and its neighbors as the source, and learnable sets of vectors as the reference kernels. Since all the normals we use are unit vectors, we model the vectors of kernels with parameters in the spherical coordinate system, and parameters $(\theta, \phi)$ represent the unit vector $(x, y, z)$ in the Euclidean coordinate system: $$\left\{ \begin{array}{lr} x=\sin\theta\cos\phi \\ y=\sin\theta\sin\phi \\ z=\cos\theta \end{array}, \right.$$ where $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi)$.
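A minimal sketch of the face rotate convolution defined earlier and of this spherical kernel parameterization follows. Single random linear maps stand in for the shared fully-connected layers $f$ and $g$ (an illustrative assumption, not the paper's implementation); note the output is invariant under cyclic rotation of the corners, as the averaging in the formula intends:

```python
import numpy as np

rng = np.random.default_rng(0)
K1, K2 = 32, 64

# Stand-ins for the shared layers: f maps a pair of corner vectors (R^3 x R^3)
# to R^{K1}; g maps the averaged result to R^{K2}.
Wf = rng.standard_normal((6, K1))
Wg = rng.standard_normal((K1, K2))

def face_rotate_conv(v1, v2, v3):
    """g((f(v1,v2) + f(v2,v3) + f(v3,v1)) / 3) for one face."""
    f = lambda a, b: np.concatenate([a, b]) @ Wf
    pooled = (f(v1, v2) + f(v2, v3) + f(v3, v1)) / 3.0
    return pooled @ Wg

def sphere_to_unit(theta, phi):
    """Kernel parameterization: (theta, phi) -> unit vector (x, y, z)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

Parameterizing kernels by $(\theta, \phi)$ keeps them on the unit sphere by construction, so no renormalization step is needed during learning.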
We define the kernel correlation between the i-th face and the k-th kernel as follows: $$KC(i, k) = \frac{1}{|\mathcal{N}_i||\mathcal{M}_k|}\sum\limits_{\mathbf{n} \in \mathcal{N}_i}\sum\limits_{\mathbf{m} \in \mathcal{M}_k}K_{\sigma}(\mathbf{n}, \mathbf{m}),$$ where $\mathcal{N}_i$ is the set of normals of the i-th face and its neighbor faces, $\mathcal{M}_k$ is the set of normals in the k-th kernel, and $K_{\sigma}(\cdot,\cdot)\ : \mathbb{R}^3 \times \mathbb{R}^3 \rightarrow \mathbb{R}$ is the kernel function indicating the affinity between two vectors. In this paper, we generally choose the Gaussian kernel: $$K_{\sigma}(\mathbf{n}, \mathbf{m}) = \exp(-\frac{\left\| \mathbf{n} - \mathbf{m}\right \|^2}{2\sigma^2}),$$ where $\left\|\cdot\right\|$ is the length of a vector in the Euclidean space, and $\sigma$ is the hyper-parameter that controls the kernels’ resolving ability or tolerance to the variation of the sources. With the above definition, we calculate the kernel correlation between each face and kernel, and more similar pairs will get higher values. Since the parameters of kernels are learnable, they will converge to common distributions on the surfaces of 3D shapes and be able to describe the local structures of faces. We set the value of $KC(i,k)$ as the k-th feature of the i-th face. Therefore, with $M$ learnable kernels, we generate features of length $M$ for each face. Mesh Convolution ---------------- The mesh convolution block is designed to expand the receptive field of faces, which denotes the number of faces perceived by each face, by aggregating information of neighboring faces. In this process, features related to spatial positions should not be included directly, because we focus on faces in a local area and should not be influenced by where the area is located.
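The kernel correlation $KC(i,k)$ with the Gaussian kernel defined earlier can be sketched for one (face, kernel) pair as follows; `kernel_correlation` is a hypothetical helper, not code from the paper:

```python
import numpy as np

def kernel_correlation(face_normals, kernel_vecs, sigma=0.2):
    """KC(i, k): mean Gaussian affinity between the normals of face i
    (plus its neighbors) and the unit vectors of kernel k.
    face_normals: (|N_i|, 3); kernel_vecs: (|M_k|, 3)."""
    diff = face_normals[:, None, :] - kernel_vecs[None, :, :]  # all pairs
    aff = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma**2))   # K_sigma
    return aff.mean()  # average over |N_i| * |M_k| pairs
```

The value lies in $(0, 1]$ and reaches 1 only when every source normal coincides with every kernel vector, matching the intuition that more similar pairs score higher.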
In the 2D convolutional neural network, both the convolution and pooling operations do not introduce any positional information directly while aggregating with neighboring pixels’ features. Since we have taken out the structural feature, which is irrelevant to positions, we only aggregate it in this block. Aggregation of the structural feature enables us to capture structures of a wider field around each face. Furthermore, to obtain a more comprehensive feature, we also combine the spatial and structural features together in this block. The mesh convolution block contains two parts: **combination of spatial and structural features** and **aggregation of structural feature**, which respectively output the new spatial and structural features. Fig \[fig:meshconv\] illustrates the design of this block. ![**The mesh convolution.** “Combination” denotes the combination of spatial and structural features. “Aggregation” denotes the aggregation of structural feature, in which “Gathering” denotes the process of getting neighbors’ features and “Aggregation for one face” denotes different methods of aggregating features.[]{data-label="fig:meshconv"}](meshconv2.pdf){width="3.5in"} #### Combination of Spatial and Structural Features We use one of the most common methods of combining two types of features, which concatenates them together and applies an MLP. As we have mentioned, the combination result, as the new spatial feature, is actually more comprehensive and contains both spatial and structural information. Therefore, in the pipeline of our network, we concatenate all the combination results for generating the global feature. #### Aggregation of Structural Feature With the input structural feature and neighbor index, we aggregate the feature of a face with the features of its connected faces.
Several aggregation methods are listed and discussed as follows: - **Average pooling:** The average pooling may be the most common aggregation method, which simply calculates the average value of features in each channel. However, this method sometimes weakens the strongly-activated areas of the original features and reduces their distinctiveness. - **Max pooling:** Another pooling method is the max pooling, which calculates the max value of features in each channel. Max pooling is widely used in 2D and 3D deep learning frameworks for its advantage of maintaining the strong activation of neurons. - **Concatenation:** We define the concatenation aggregation, which concatenates the feature of a face with the feature of each neighbor respectively, passes these pairs through a shared MLP and applies a max pooling to the results. This method both keeps the original activation and leaves space for the network to combine neighboring features. We finally use the concatenation method in this paper. After aggregation, another MLP is applied to further fuse the neighboring features and output the new structural feature. Implementation Details ---------------------- Now we present the details of implementing MeshNet, illustrated in Fig 2, including the settings of hyper-parameters and some details of the overall architecture. The spatial descriptor contains fully-connected layers (64, 64) and outputs an initial spatial feature of length 64. Parameters inside parentheses indicate the dimensions of layers except the input layer. In the face rotate convolution, we set $K_1=32$ and $K_2 = 64$, and correspondingly, the functions $f(\cdot, \cdot)$ and $g(\cdot)$ are implemented by fully-connected layers (32, 32) and (64, 64). In the face kernel correlation, we set $M = 64$ (64 kernels) and $\sigma = 0.2$.
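The concatenation aggregation adopted above can be sketched as follows, with a single random linear map standing in for the shared MLP (an assumption for illustration) and `concat_aggregate` a hypothetical helper name:

```python
import numpy as np

def concat_aggregate(struct_feat, neighbor_idx, W):
    """Concatenation aggregation: pair each face's structural feature with
    each of its three neighbors' features, apply a shared linear map
    (stand-in for the MLP), then max-pool over the neighbors.
    struct_feat: (m, d); neighbor_idx: (m, 3); W: (2d, d_out)."""
    pairs = np.concatenate(
        [np.repeat(struct_feat[:, None, :], 3, axis=1),  # (m, 3, d) self
         struct_feat[neighbor_idx]],                      # (m, 3, d) neighbors
        axis=-1)                                          # (m, 3, 2d)
    return (pairs @ W).max(axis=1)                        # (m, d_out)
```

Because the max pooling is taken over the neighbor axis, the result does not depend on the order in which the three neighbors are listed, which is why the method tolerates arbitrary face permutations.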
We parameterize the mesh convolution block with a four-tuple $(in_1, in_2, out_1, out_2)$, where $in_1$ and $out_1$ indicate the input and output channels of the spatial feature, and $in_2$ and $out_2$ indicate the same for the structural feature. The two mesh convolution blocks used in the pipeline of MeshNet are configured as $(64, 131, 256, 256)$ and $(256, 256, 512, 512)$. Experiments =========== In the experiments, we first apply our network to 3D shape classification and retrieval. Then we conduct detailed ablation experiments to analyze the effectiveness of the blocks in the architecture. We also investigate the robustness to the number of faces and the time and space complexity of our network. Finally, we visualize the structural features from the two structural descriptors. 3D Shape Classification and Retrieval ------------------------------------- We apply our network on the ModelNet40 dataset [@wu20153d] for the classification and retrieval tasks. The dataset contains 12,311 mesh models from 40 categories, of which 9,843 models are for training and 2,468 models for testing. For each model, we simplify the mesh data into no more than 1,024 faces, translate it to the geometric center, and normalize it into a unit sphere. Moreover, we also compute the normal vector and indexes of connected faces for each face. During training, we augment the data by jittering the positions of vertices with Gaussian noise with zero mean and 0.01 standard deviation. Since the number of faces varies among models, we randomly fill the list of faces to the length of 1,024 with existing faces for batch training.
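The preprocessing steps above (centering, unit-sphere normalization, Gaussian jittering and padding the face list to a fixed length) can be sketched as follows. Taking the vertex mean as the geometric center is an assumption for illustration, and `preprocess` is a hypothetical helper name:

```python
import numpy as np

def preprocess(verts, faces, num_faces=1024, jitter_std=0.01, train=True,
               rng=np.random.default_rng()):
    """Center the model, normalize it into the unit sphere, optionally
    jitter vertex positions, and randomly pad the face list."""
    verts = verts - verts.mean(axis=0)                   # translate to center
    verts = verts / np.linalg.norm(verts, axis=1).max()  # unit sphere
    if train:
        verts = verts + rng.normal(0.0, jitter_std, verts.shape)
    if len(faces) < num_faces:  # randomly repeat existing faces as padding
        extra = rng.integers(0, len(faces), num_faces - len(faces))
        faces = np.concatenate([faces, faces[extra]])
    return verts, faces
```

Padding with repeated existing faces (rather than zeros) keeps every list entry a valid face, so the per-face layers need no masking.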
-------------------------------- -------- ------ ------ Acc mAP (%) (%) 3DShapeNets [@wu20153d] volume 77.3 49.2 VoxNet [@maturana2015voxnet] volume 83.0 - FPNN [@li2016fpnn] volume 88.4 - LFD [@chen2003visual] view 75.5 40.9 MVCNN [@su2015multi] view 90.1 79.5 Pairwise [@johns2016pairwise] view 90.7 - PointNet [@qi2017pointnet] point 89.2 - PointNet++ [@qi2017pointnet++] point 90.7 - Kd-Net [@klokov2017escape] point 91.8 - KCNet [@shen2018mining] point 91.0 - SPH [@kazhdan2003rotation] mesh 68.2 33.3 MeshNet mesh 91.9 81.9 -------------------------------- -------- ------ ------ : Classification and retrieval results on ModelNet40.[]{data-label="tab:application"} For classification, we apply fully-connected layers (512, 256, 40) to the global features as the classifier, and add dropout layers with drop probability of 0.5 before the last two fully-connected layers. For retrieval, we calculate the L2 distances between the global features as similarities and evaluate the result with mean average precision (mAP). We use the SGD optimizer for training, with initial learning rate 0.01, momentum 0.9, weight decay 0.0005 and batch size 64. 
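The retrieval criterion above reduces to pairwise L2 distances between global features (smaller distance meaning higher similarity), which can be computed in one vectorized step:

```python
import numpy as np

def pairwise_l2(feats):
    """Pairwise L2 distances between global feature vectors.
    feats: (n, d) -> (n, n) distance matrix."""
    sq = np.sum(feats**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * feats @ feats.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from rounding
```

Ranking each query's row of this matrix in ascending order gives the retrieval list used to evaluate mAP.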
---------------- -------------- -------------- -------------- -------------- -------------- -------------- Spatial $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$ Structural-FRC $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$ Structural-FKC $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$ Mesh Conv $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$ $\checkmark$ Accuracy (%) 83.5 88.2 87.0 89.9 90.4 91.9 ---------------- -------------- -------------- -------------- -------------- -------------- -------------- : Classification results of ablation experiments on ModelNet40.[]{data-label="tab:ablation"} Aggregation Method Accuracy (%) -------------------- -------------- Average Pooling 90.7 Max Pooling 91.1 Concatenation 91.9 : Classification results of different aggregation methods on ModelNet40.[]{data-label="tab:aggregation"} Table \[tab:application\] shows the experimental results of classification and retrieval on ModelNet40, comparing our work with representative methods. It is shown that, as a mesh-based representation, MeshNet achieves satisfying performance and makes a great improvement over traditional mesh-based methods. It is also comparable with recent deep learning methods based on other types of data. Our performance is attributed to the following reasons. With face-unit and per-face processes, MeshNet solves the complexity and irregularity problem of mesh data and makes it suitable for deep learning methods. Although deep learning has a strong ability to capture features, we do not simply apply it, but design blocks to make full use of the rich information in mesh. Splitting features into spatial and structural features enables us to consider the spatial distribution and local structure of shapes respectively. And the mesh convolution blocks widen the receptive field of faces. Therefore, the proposed method is able to capture detailed features of faces and conduct the 3D shape representation well.
On the Effectiveness of Blocks ------------------------------ To analyze the design of the blocks in our architecture and prove their effectiveness, we conduct several ablation experiments, which compare the classification results while varying the settings of the architecture or removing some blocks. For the spatial descriptor, labeled as “Spatial” in Table \[tab:ablation\], we remove it together with the use of spatial feature in the network, and maintain the aggregation of structural feature in the mesh convolution. For the structural descriptor, we first remove the whole of it and use max pooling to aggregate the spatial feature in the mesh convolution. Then we partly remove the face rotate convolution, labeled as “Structural-FRC” in Table \[tab:ablation\], or the face kernel correlation, labeled as “Structural-FKC”, and keep the rest of the pipeline to prove the effectiveness of each structural descriptor. For the mesh convolution, labeled as “Mesh Conv” in Table \[tab:ablation\], we remove it and use the initial features to generate the global feature directly. We also explore the effectiveness of different aggregation methods in this block, and compare them in Table \[tab:aggregation\]. The experimental results show that the adopted concatenation method performs better for aggregating neighboring features. On the Number of Faces ---------------------- The number of faces in ModelNet40 varies dramatically among models. To explore the robustness of MeshNet to the number of faces, we regroup the test data by the number of faces with an interval of 200. In Table \[tab:facenum\], we list the proportion of the number of models in each group, together with the classification results. It is shown that the accuracy is essentially independent of the number of faces and shows no downward trend as the number decreases, which demonstrates the strong robustness of MeshNet to the number of faces.
Specifically, on the 9 models with fewer than 50 faces (the minimum is 10), our network achieves 100% classification accuracy, showing the ability to represent models with extremely few faces. Number of Faces Proportion (%) Accuracy (%) ----------------- ---------------- -------------- $[ 1000, 1024)$ 69.48 92.00 $[800, 1000)$ 6.90 92.35 $[600, 800)$ 4.70 93.10 $[400, 600)$ 6.90 91.76 $[200, 400)$ 6.17 90.79 $[0, 200)$ 5.84 90.97 : Classification results of groups with different number of faces on ModelNet40.[]{data-label="tab:facenum"} On the Time and Space Complexity -------------------------------- Table \[tab:timespace\] compares the time and space complexity of our network with several representative methods based on other types of data for the classification task. The column labeled “\#params” shows the total number of parameters in the network and the column labeled “FLOPs/sample” shows the number of floating-point operations conducted for each input sample, representing the space and time complexity respectively. It is known that methods based on volumetric grid and multi-view introduce extra computation cost, while methods based on point cloud are more efficient. Theoretically, our method works with per-face processes and has linear complexity in the number of faces. In Table \[tab:timespace\], MeshNet shows comparable efficiency to methods based on point cloud in both time and space complexity, leaving enough space for further development. ------------------------------- ---------- ------------ \#params FLOPs / (M) sample (M) PointNet [@qi2017pointnet] 3.5 440 Subvolume [@qi2016volumetric] 16.6 3633 MVCNN [@su2015multi] 60.0 62057 MeshNet 4.25 509 ------------------------------- ---------- ------------ : Time and space complexity for classification.[]{data-label="tab:timespace"} ![**Feature visualization of structural feature.** Models from the same column are colored with their values of the same channel in features.
**Left**: Features from the face rotate convolution. **Right**: Features from the face kernel correlation.[]{data-label="fig:vis"}](vis1.pdf){width="3.3in"} Feature Visualization --------------------- To figure out whether the structural descriptors successfully capture the features of faces as expected, we visualize the two types of structural features from the face rotate convolution and the face kernel correlation. We randomly choose several channels of these features, and for each channel, we paint faces with colors of different depth corresponding to their values in this channel. The left of Fig \[fig:vis\] visualizes features from the face rotate convolution, which is expected to capture the “inner” features of faces and concerns their shapes. It is clearly shown that faces with a similar look are colored similarly, and different channels may be activated by different types of triangular faces. The visualization results of features from the face kernel correlation are in the right of Fig \[fig:vis\]. As we have mentioned, this descriptor captures the “outer” features of each face and is relevant to the whole appearance of the area where the face is located. In the visualization, faces in similar types of areas, such as flat surfaces and steep slopes, tend to have similar features, regardless of their own shapes and sizes. Conclusions =========== In this paper, we propose a mesh neural network, named MeshNet, which learns on mesh data directly for 3D shape representation. The proposed method is able to solve the complexity and irregularity problem of mesh data and conduct 3D shape representation well. In this method, the polygon faces are regarded as the unit and their features are split into spatial and structural features. We also design blocks for capturing and aggregating features of faces. We conduct experiments for 3D shape classification and retrieval and compare our method with the state-of-the-art methods.
The experimental results and comparisons demonstrate the effectiveness of the proposed method on 3D shape representation. In the future, the network can be further developed for more computer vision tasks.

Acknowledgments
===============

This work was supported by National Key R&D Program of China (Grant No. 2017YFC0113000), National Natural Science Funds of China (U1701262, 61671267), National Science and Technology Major Project (No. 2016ZX01038101), MIIT IT funds (Research and application of TCN key technologies) of China, and The National Key Technology R&D Program (No. 2015BAG14B01-02).

Chang, A. X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. 2015. . .

Chen, D.-Y.; Tian, X.-P.; Shen, Y.-T.; and Ouhyoung, M. 2003. . In [*Computer Graphics Forum*]{}, volume 22, 223–232. Wiley Online Library.

Feng, Y.; Zhang, Z.; Zhao, X.; Ji, R.; and Gao, Y. 2018. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 264–272.

Hegde, V., and Zadeh, R. 2016. . .

Hubeli, A., and Gross, M. 2001. . In [*Proceedings of the Conference on Visualization*]{}, 287–294. IEEE Computer Society.

Jiang, M.; Wu, Y.; and Lu, C. 2018. . .

Johns, E.; Leutenegger, S.; and Davison, A. J. 2016. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 3813–3822.

Kazhdan, M.; Funkhouser, T.; and Rusinkiewicz, S. 2003. . In [*Symposium on Geometry Processing*]{}, volume 6, 156–164.

Klokov, R., and Lempitsky, V. 2017. . In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, 863–872. IEEE.

Kokkinos, I.; Bronstein, M. M.; Litman, R.; and Bronstein, A. M. 2012. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 159–166. IEEE.

Li, Y.; Pirk, S.; Su, H.; Qi, C. R.; and Guibas, L. J. 2016. . In [*Advances in Neural Information Processing Systems*]{}, 307–315.

Li, J.; Chen, B. M.; and Lee, G. H. 2018. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 9397–9406.

Lien, S.-l., and Kajiya, J. T. 1984. . 4(10):35–42.

Maturana, D., and Scherer, S. 2015. . In [*2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*]{}, 922–928. IEEE.

Qi, C. R.; Su, H.; Nie[ß]{}ner, M.; Dai, A.; Yan, M.; and Guibas, L. J. 2016. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 5648–5656.

Qi, C. R.; Su, H.; Mo, K.; and Guibas, L. J. 2017a. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 77–85.

Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017b. . In [*Advances in Neural Information Processing Systems*]{}, 5099–5108.

Shen, Y.; Feng, C.; Yang, Y.; and Tian, D. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, volume 4.

Su, H.; Maji, S.; Kalogerakis, E.; and Learned-Miller, E. 2015. . In [*Proceedings of the IEEE International Conference on Computer Vision*]{}, 945–953.

Tsin, Y., and Kanade, T. 2004. . In [*European Conference on Computer Vision*]{}, 558–569. Springer.

Wang, D. Z., and Posner, I. 2015. In [*Robotics: Science and Systems*]{}, volume 1.

Wang, P.-S.; Liu, Y.; Guo, Y.-X.; Sun, C.-Y.; and Tong, X. 2017. . 36(4):72.

Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; and Xiao, J. 2015. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 1912–1920.

You, H.; Feng, Y.; Ji, R.; and Gao, Y. 2018. . In [*Proceedings of the 26th ACM International Conference on Multimedia*]{}, 1310–1318. ACM.

Zaharescu, A.; Boyer, E.; Varanasi, K.; and Horaud, R. 2009. . In [*Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*]{}, 373–380. IEEE.

Zhang, C., and Chen, T. 2001. . In [*Proceedings of the IEEE International Conference on Image Processing*]{}, volume 3, 935–938. IEEE.

[^1]: Corresponding authors
---
abstract: 'We investigate the driven quantum phase transition between oscillating motion and the classical nearly free rotations of the Josephson pendulum coupled to a harmonic oscillator in the presence of dissipation. This model describes the standard setup of circuit quantum electrodynamics, where typically a transmon device is embedded in a superconducting cavity. We find that, by treating the system quantum mechanically, this transition occurs at higher drive powers than expected from an all-classical treatment, which is a consequence of the quasiperiodicity originating in the discrete energy spectrum of the bound states. We calculate the photon number in the resonator and show that its dependence on the drive power is nonlinear. In addition, the resulting multi-photon blockade phenomenon is sensitive to the truncation of the number of states in the transmon, which limits the applicability of the standard Jaynes–Cummings model as an approximation for the pendulum-oscillator system. We also compare two different approaches to dissipation, namely the Floquet–Born–Markov and the Lindblad formalisms.'
author:
- 'I. Pietikäinen$^1$'
- 'J. Tuorila$^{1,2}$'
- 'D. S. Golubev$^2$'
- 'G. S. Paraoanu$^2$'
bibliography:
- 'nonlinear\_bib.bib'
title: 'Photon blockade and the quantum-to-classical transition in the driven-dissipative Josephson pendulum coupled to a resonator'
---

Introduction
============

The pendulum, which can be seen as a rigid rotor in a gravitational potential [@baker2005], is a quintessential nonlinear system. It has two extreme dynamical regimes: the low-energy regime, where it can be approximated as a weakly anharmonic oscillator, and the high-energy regime, where it behaves as a free rotor.
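The two regimes can be made concrete with a minimal classical simulation. The sketch below is our own illustration (not taken from the paper; dimensionless units with unit plasma frequency): it integrates $\ddot\varphi = -\sin\varphi$ with a symplectic leapfrog scheme and contrasts a bounded oscillation with a free rotation, separated by the separatrix energy.

```python
import numpy as np

# Illustrative sketch (not from the paper): classical pendulum
# phi'' = -sin(phi), energy E = p^2/2 - cos(phi).  Below the
# separatrix energy E = 1 the angle oscillates; above it, the
# pendulum rotates freely.  Leapfrog (kick-drift-kick) integration.

def evolve(phi, p, dt=1e-3, n=40000):
    traj = np.empty(n)
    for i in range(n):
        p -= np.sin(phi) * dt / 2          # half kick
        phi += p * dt                      # drift
        p -= np.sin(phi) * dt / 2          # half kick
        traj[i] = phi
    return traj

osc = evolve(0.0, 1.0)   # E = -0.5 < 1: bounded oscillation
rot = evolve(0.0, 2.5)   # E = 2.125 > 1: free rotation, phi keeps growing
print(osc.max(), rot[-1])
```

The bounded trajectory stays below the separatrix amplitude $\varphi = \pi$, while the rotating one winds around indefinitely.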
Most notably, the pendulum physics appears in systems governed by the Josephson effect, where the Josephson energy is analogous to the gravitational energy, and the role of the momentum is taken by the imbalance in the number of particles due to tunneling across the weak link. Such a system is typically referred to as a Josephson pendulum. In ultracold degenerate atomic gases, several realizations of the Josephson pendulum have been studied [@Leggett2001; @paraoanu2001; @Smerzi1997; @Marino1999]. While the superfluid-fermion case [@Paraoanu2002; @Heikkinen2010] still awaits experimental realization, the bosonic-gas version has already been demonstrated [@Albiez2005; @Levy2007]. Also in this case two regimes have been identified: small Josephson oscillations, corresponding to the low-energy limit case described here, and the macroscopic self-trapping regime [@Smerzi1997; @Marino1999], corresponding to the free-rotor situation. Another example is an oscillating $LC$-type electrical circuit in which the inductive element is a tunnel barrier between two superconducting leads. This is the case of the transmon circuit [@koch2007], which is currently one of the most promising approaches to quantum processing of information, with high-fidelity operations and good prospects for scalability. Its two lowest eigenstates are close to those of a harmonic oscillator, with only weak perturbations caused by the anharmonicity of the potential. The weak anharmonicity also guarantees that the lowest states of the transmon are immune to charge noise, which is a major source of decoherence in superconducting quantum circuits. In this paper we consider a paradigmatic model which arises when the Josephson pendulum interacts with a resonator. Circuit quantum electrodynamics offers a rigorous embodiment of the above model as a transmon device coupled to a superconducting resonator, fabricated either as a three-dimensional cavity or as a coplanar waveguide segment.
In this realization, the system is driven by an external field of variable frequency, and dissipation affects both the transmon and the resonator. We study in detail the onset of nonlinearity in the driven-dissipative phase transition between the quantum and the classical regimes. We further compare the photon number with the corresponding transmon occupation, and demonstrate that the onset of nonlinearities is accompanied by the excitation of all bound states of the transmon and is, thus, sensitive to the transmon truncation. We also find that the onset of the nonlinearities is sensitive to the energy level structure of the transmon, [*e.g.*]{} to the gate charge, which affects the eigenenergies near and outside the edge of the transmon potential. The results also show that the full classical treatment is justified only in the high-amplitude regime, yielding significant discrepancies in the low-amplitude regime, where the phenomenology is governed by photon blockade. This means that the system undergoes a genuine quantum-to-classical phase transition. Our numerical simulations demonstrate that the multi-photon blockade phenomenon is qualitatively different for a realistic multilevel anharmonic system compared to the Jaynes–Cummings case studied extensively in the literature. We treat dissipation with two different approaches, namely the conventional Lindblad master equation and the Floquet–Born–Markov master equation, which is developed especially to capture the effects of the drive on the dissipation. We show that both yield relatively close results. However, we emphasize that the Floquet–Born–Markov approach should be preferred because its numerical implementation is considerably more efficient than that of the corresponding Lindblad equation. While our motivation is to elucidate fundamental physics, in the burgeoning field of quantum technologies several applications of our results can be envisioned.
For example, the single-photon blockade can be employed to realize single-photon sources, and the two-photon blockade can be utilized to produce transistors controlled by a single photon and filters that yield correlated two-photon outputs from an input of random photons [@Kubanek2008]. In the field of quantum simulations, the Jaynes–Cummings model can be mapped onto the Dirac electron in an electromagnetic field, with the coupling and the drive amplitude corresponding respectively to the magnetic and electric field: the photon blockade regime is associated with a discrete spectrum of the Dirac equation, while the breakdown of the blockade corresponds to a continuous spectrum [@Gutierrez-Jauregui2018]. Finally, the switching behavior of the pendulum in the transition region can be used for designing bifurcation amplifiers for the single-shot nondissipative readout of qubits [@Vijay2009]. The paper is organized as follows. In Section \[sec:II\] we introduce the electrical circuit which realizes the pendulum-oscillator system, calculate its eigenenergies, and identify the two dynamical regimes of the small oscillations and the free rotor. In Section \[sec:III\], we introduce the drive and dissipation. We discuss two formalisms for dissipation, namely the Lindblad equation and the Floquet–Born–Markov approach. Section \[sec:IV\] presents the main results for the quantum-to-classical transition and the photon blockade, focusing on the resonant case. Here, we also discuss the gate dependence and the ultra-strong coupling regime. Section \[sec:V\] is dedicated to conclusions.

Circuit-QED implementation of a Josephson pendulum coupled to a resonator {#sec:II}
=========================================================================

We discuss here the physical realization of the Josephson pendulum-resonator system as an electrical circuit consisting of a transmon device coupled capacitively to an $LC$ oscillator, as depicted in Fig. \[fig:oscpendevals\](a).
The coupled system is modeled by the Hamiltonian $$\label{eq:H0} \hat H_{0} = \hat H_{\rm r} + \hat H_{\rm t} + \hat H_{\rm c},$$ where $$\begin{aligned} \hat H_{\rm r} &=& \hbar\omega_{\rm r}\hat a^{\dag}\hat a, \label{eq:Hr}\\ \hat H_{\rm t} &=& 4E_{\rm C}(\hat n-n_{\rm g})^2 - E_{\rm J}\cos \hat \varphi, \label{eq:transmonHam}\\ \hat H_{\rm c} &=& \hbar g\hat n(\hat a^{\dag}+\hat a)\end{aligned}$$ describe the resonator, the transmon, and their coupling, respectively. We have defined $\hat a$ as the annihilation operator of the harmonic oscillator, and used $\hat n = -i\partial /\partial\varphi$ as the conjugate momentum operator of the superconducting phase difference $\hat \varphi$. These operators obey the canonical commutation relation $[\hat \varphi,\hat n]=i$. The angular frequency of the resonator is given by $\omega_{\rm r}$. We have also denoted the Josephson energy with $E_{\rm J}$, and the charging energy with $E_{\rm C } = e^2/(2C_\Sigma)$ where the capacitance on the superconducting island of the transmon is given as $C_{\Sigma}=C_{\rm B} + C_{\rm J} + C_{\rm g}$. Using the circuit diagram in Fig. \[fig:oscpendevals\](a), we obtain the coupling constant $g = 2 e C_{\rm g}q_{\rm zp}/(\hbar C_{\Sigma} C_{\rm r})$, where the zero-point fluctuation amplitude of the oscillator charge is denoted with $q_{\rm zp} = \sqrt{C_{\rm r}\hbar \omega_{\rm r}/2}$ [@koch2007]. Let us briefly discuss the two components of this system: the Josephson pendulum and the resonator. The pendulum physics is realized by the superconducting transmon circuit [@koch2007] in Fig. \[fig:oscpendevals\](a) and described by the Hamiltonian $\hat H_{\rm t}$ in Eq. (\[eq:transmonHam\]). As discussed in Ref. [@koch2007], the Hamiltonian of the transmon is analogous to that of an electrically charged particle whose motion is restricted to a circular orbit and subjected to homogeneous and perpendicular gravitational and magnetic fields. 
By fixing the x and z directions as those of gravity and the magnetic field, respectively, the position of the particle is completely determined by the motion along the polar angle in the xy plane. The polar angle can be identified as the $\varphi$ coordinate of the pendulum. Thus, the kinetic energy part of the Hamiltonian (\[eq:transmonHam\]) describes a free rotor in a homogeneous magnetic field. In the symmetric gauge, the vector potential of the field imposes an effective constant shift for the $\varphi$ component of the momentum, which is analogous to the offset charge $n_{\textrm g}$ on the superconducting island induced either by the environment or by a gate voltage. In the following, the 'plasma' frequency for the transmon is given by $\omega_{\rm p} = \sqrt{8E_{\rm C}E_{\rm J}}/\hbar$ and describes the classical oscillations of the linearized circuit. The parameter $\eta=E_{\rm J}/E_{\rm C}$ is the ratio between the potential and kinetic energy scales of the pendulum, and determines, through the condition $\eta \gg 1$, whether the device is in the charge-insensitive regime of the rotor in the gravitational potential. The eigenvalues $\{\hbar \omega_k\}$ and the corresponding eigenvectors $\{|k\rangle\}$, with $k=0,1,\ldots$, of the Hamiltonian in Eq. (\[eq:transmonHam\]) can be obtained by solving the Mathieu equation, see Appendix \[app:eigenvalue\]. In general, the eigenvalues of the coupled system Hamiltonian $\hat H_0$ in Eq. (\[eq:H0\]) have to be solved numerically. With a sufficient truncation in the Hilbert spaces of the uncoupled systems in Eq. (\[eq:Hr\]) and Eq. (\[eq:transmonHam\]), one can represent the Hamiltonian $\hat H_0$ in a matrix form. The resulting eigenvalues of the truncated Hamiltonian $\hat H_0$ are shown in Fig. \[fig:oscpendevals\]. We see that the coupling creates avoided crossings at the locations where the pendulum transition frequencies are equal to positive integer multiples of the resonator quantum.
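For readers who prefer a numerical route to the Mathieu spectrum, the transmon Hamiltonian of Eq. (\[eq:transmonHam\]) can also be diagonalized in a truncated charge basis, where $\cos\hat\varphi$ couples neighbouring charge states with matrix element $1/2$. The sketch below is our own illustration (the cutoff is an assumption; $\hbar = 1$, parameters from Table \[tab:params1\]).

```python
import numpy as np

# Sketch: transmon spectrum from Eq. (transmonHam) in the charge basis
# |n>, n = -N..N, where <n|cos(phi)|n+1> = 1/2.  Parameters follow
# Table 1 (E_C = 0.07, eta = E_J/E_C = 30); hbar = 1.

E_C, eta, n_g = 0.07, 30.0, 0.0
E_J = eta * E_C
N = 20                                     # charge cutoff (assumption)

n = np.arange(-N, N + 1)
H = np.diag(4 * E_C * (n - n_g) ** 2)      # charging (kinetic) term
H -= E_J / 2 * (np.diag(np.ones(2 * N), 1) + np.diag(np.ones(2 * N), -1))

w = np.linalg.eigvalsh(H)
w01 = w[1] - w[0]                          # lowest transition, omega_q
w_p = np.sqrt(8 * E_C * E_J)               # plasma frequency
print(w01, w_p)                            # w01 sits slightly below w_p
```

In the transmon limit $\eta \gg 1$ one finds $\omega_{01} \approx \omega_{\rm p} - E_{\rm C}$, consistent with the values of Table \[tab:params1\].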
Also, the density of states increases drastically with the energy. The nonlinearity in the system is characterized by the non-equidistant spacings between the energy levels. Their origin is the sinusoidal Josephson potential of the transmon. Here, we are especially interested in the regime where the resonator frequency is (nearly) resonant with the frequency of the lowest transition of the pendulum, i.e. when $\omega_{\rm r}\approx\omega_{\rm q} = \omega_{01}=\omega_1-\omega_0$. ![Electrical circuit and the corresponding eigenenergy spectrum. (a) Lumped-element schematic of a transmon-resonator superconducting circuit. The resonator and the transmon are marked with blue and magenta rectangles. (b) Numerically obtained eigenenergies of the resonator-pendulum Hamiltonian in Eq. (\[eq:H0\]) are shown in blue as a function of the resonator frequency. The bare pendulum eigenenergies $\hbar \omega_k$ are denoted with dashed horizontal lines and indicated with the label $k$. The eigenenergies of the uncoupled system, defined in Eq. (\[eq:Hr\]) and Eq. (\[eq:transmonHam\]), are given by the dashed lines whose slope increases in integer steps with the number of quanta in the oscillator as $n\hbar \omega_{\rm r}$. We only show the eigenenergies of the uncoupled system for the case of the pendulum in its ground state, but we note that one obtains a similar infinite fan of energies for each pendulum eigenstate. Note that in general $\omega_{\rm q}\neq \omega_{\rm p}$. We have used the parameters in Table \[tab:params1\] and fixed $n_{\rm g}=0$. []{data-label="fig:oscpendevals"}](fig1){width="1.0\linewidth"} The Hamiltonian $\hat H_0$ in Eq.
(\[eq:H0\]) can be represented in the eigenbasis $\{|k\rangle\}$ of the Josephson pendulum as $$\hat{H}_{0} = \hbar\omega_{\rm r} \hat{a}^{\dag}\hat{a} + \sum_{k=0}^{K-1} \hbar\omega_{k} \vert k\rangle\langle k\vert + \hbar g(\hat{a}^{\dag}+\hat{a})\sum_{k,\ell=0}^{K-1}\hat{\Pi}_{k\ell}.\label{eq:ManyStatesHam}$$ Here, $K$ is the number of transmon states included in the truncation. We have also defined $\hat\Pi_{k\ell}\equiv \langle k|\hat{n}|\ell\rangle |k\rangle\langle \ell|$, which is the representation of the Cooper-pair-number operator in the eigenbasis of the transmon. A useful classification of the eigenstates can be obtained by using the fact that the transmon can be approximated as a weakly anharmonic oscillator [@koch2007], thus $\langle k|\hat{n}|\ell\rangle$ is negligible if $k$ and $\ell$ differ by more than 1. Together with the rotating-wave approximation, this results in $$\begin{split} \hat{H}_{0} \approx &\hbar\omega_{\rm r} \hat{a}^{\dag}\hat{a} + \sum_{k=0}^{K-1} \hbar\omega_{k}\vert k\rangle\langle k\vert \\ &+ \hbar g\sum_{k=0}^{K-2} \left(\hat{a}\hat{\Pi}_{k,k+1}^{\dag} + \hat{a}^{\dag} \hat{\Pi}_{k,k+1} \right).\label{eq:ManyStatesHam_simple} \end{split}$$ Here, we introduce the total excitation-number operator as $$\hat N = \hat a^{\rm \dag}\hat a + \sum_{k=0}^{K-1} k\vert k\rangle\langle k\vert,\label{eq:exitationN_K}$$ which commutes with the Hamiltonian in Eq. (\[eq:ManyStatesHam\_simple\]). Thus, the eigenstates of this Hamiltonian can be labeled by the eigenvalues of $\hat N$, which is a representation that we will find useful when discussing transitions between these states.
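The conservation of $\hat N$ under the rotating-wave approximation can be verified directly in a small truncation. The snippet below is a sketch with placeholder values for the matrix elements $\langle k|\hat n|k+1\rangle$ and for the transmon frequencies; the commutator vanishes for any such values.

```python
import numpy as np

# Sketch: check [H_RWA, N] = 0 for Eq. (ManyStatesHam_simple) with
# Eq. (exitationN_K).  K transmon levels, M resonator Fock states;
# transmon frequencies and couplings are placeholders; hbar = 1.

K, M = 4, 6
wr, g = 1.0, 0.04
wk = np.array([0.0, 1.0, 1.93, 2.8])       # illustrative omega_k
Pi = np.diag([0.7, 1.0, 1.2], 1)           # sum_k <k|n|k+1> |k><k+1|

a = np.diag(np.sqrt(np.arange(1, M)), 1)   # resonator annihilation
Ir, It = np.eye(M), np.eye(K)

H = (wr * np.kron(a.T @ a, It)
     + np.kron(Ir, np.diag(wk))
     + g * (np.kron(a, Pi.T) + np.kron(a.T, Pi)))   # a Pi^dag + a^dag Pi
N = np.kron(a.T @ a, It) + np.kron(Ir, np.diag(np.arange(K)))

print(np.max(np.abs(H @ N - N @ H)))       # numerically zero
```

Each coupling term changes the photon number by one while changing the transmon index by one in the opposite direction, so $\hat N$ is conserved.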
The terms neglected in the rotating-wave approximation can be treated as small perturbations except for transitions where the coupling frequency $g_{\ell k} = g\langle k|\hat n|\ell\rangle$ becomes a considerable fraction of the corresponding transition frequency $\omega_{\ell k}=\omega_{k}-\omega_\ell$ and, thus, enters the ultrastrong coupling regime with $g_{k\ell} \geq 0.1\times \omega_{\ell k}$. In the ultrastrong coupling regime and beyond, the eigenstates are superpositions of states with different excitation numbers and, thus, can no longer be labeled with $N$. Another important approximation for the Hamiltonian in Eq. (\[eq:ManyStatesHam\]) is the two-state truncation ($K=2$), which reduces it to the Rabi Hamiltonian $$\label{eq:HRabi} \hat H_{\rm R} = \hbar \omega_{\rm r}\hat a^{\dag}\hat a+\hbar \omega_{\rm q} \hat \sigma_+\hat \sigma_- + \hbar g_{01}(\hat a^{\dag}+\hat a)\hat \sigma_{\rm x}.$$ Here $g_{01}=g\langle 1|\hat n|0\rangle$, the qubit annihilation operator is $\hat \sigma_- = |0\rangle\langle 1|$, and the Pauli spin matrix $\hat \sigma_{\rm x}=\hat \sigma_-+\hat \sigma_+$. The Rabi Hamiltonian is a good approximation to the pendulum-oscillator system as long as the corrections for the low-energy eigenvalues and eigenstates, arising from the higher excited states of the pendulum, are taken properly into account [@boissonneault2009a; @boissonneault2012b; @boissonneault2012c]. Further, by performing a rotating-wave approximation, we obtain the standard Jaynes–Cummings model $$\hat H_{\rm JC} = \hbar \omega_{\rm r}\hat a^{\dag}\hat a+\hbar \omega_{\rm q} \hat \sigma_+\hat \sigma_- + \hbar g_{01}(\hat a^{\dag}\hat \sigma_-+\hat a\hat \sigma_+),\label{eq:HJC}$$ which also results from a truncation of Eq. (\[eq:ManyStatesHam\_simple\]) to the low-energy subspace spanned by the lowest two eigenstates of the transmon.
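As a numerical sanity check of the two-state truncation (our own sketch, with an assumed Fock cutoff), the resonant Jaynes–Cummings Hamiltonian of Eq. (\[eq:HJC\]) can be diagonalized directly; at resonance its excited levels form doublets split by $\pm\sqrt{n_{r}}\,\hbar g_{01}$.

```python
import numpy as np

# Sketch: resonant Jaynes-Cummings spectrum of Eq. (HJC); hbar = 1.
# Expect doublets E(n,±) = n*w_r ± sqrt(n)*g01 above the ground state.

w_r, g01, M = 1.0, 0.04, 30               # M: Fock cutoff (assumption)

a = np.diag(np.sqrt(np.arange(1, M)), 1)  # resonator annihilation
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # qubit lowering |0><1|
Ir, Iq = np.eye(M), np.eye(2)

H = (w_r * np.kron(a.T @ a, Iq)
     + w_r * np.kron(Ir, sm.T @ sm)       # resonance: w_q = w_r
     + g01 * (np.kron(a.T, sm) + np.kron(a, sm.T)))

E = np.sort(np.linalg.eigvalsh(H))
print(E[:4])   # 0, w_r - g01, w_r + g01, 2*w_r - sqrt(2)*g01
```

Because the excitation number is conserved, the truncation only affects the highest doublets; the low-lying levels reproduce the analytic result to machine precision.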
Apart from the non-degenerate ground state $|0,0\rangle$ with zero energy, the excited-state eigenenergies of the Jaynes–Cummings Hamiltonian in Eq. (\[eq:HJC\]) form a characteristic doublet structure. In the resonant case, the excited-state eigenenergies and the corresponding eigenstates are given by $$\begin{aligned} E_{n_{r},\pm} &=& n_{r}\hbar \omega_{\rm r} \pm \sqrt{n_{r}}\hbar g_{01}, \label{eq:JCener}\\ |n_{r},\pm\rangle &=& \frac{1}{\sqrt{2}}(|n_{r},0\rangle \pm |n_{r}-1,1\rangle). \label{eq:JCstates}\end{aligned}$$ Here, $n_r=1,2,\ldots$ and we have denoted the eigenstates of the uncoupled Jaynes–Cummings Hamiltonian with $\{|n_{r},0\rangle , |n_{r},1\rangle\}$, where $|n_{r}\rangle$ are the eigenstates of the resonator with $n_{r}=0,1,\ldots$. Due to the rotating-wave approximation, the Jaynes–Cummings Hamiltonian commutes with the excitation-number operator in Eq. (\[eq:exitationN\_K\]) truncated to two states and represented as $$\hat N = \hat a^{\rm \dag}\hat a + \hat \sigma_+\hat \sigma_-. \label{eq:exitationN_2}$$ Thus, they have joint eigenstates and, in addition, the excitation number $N$ is a conserved quantity. For a doublet with given $n_{r}$, the eigenvalue of the excitation-number operator is $N=n_{r}$, while for the ground state $N=0$. We note that the transition energies between the Jaynes–Cummings eigenstates depend nonlinearly on $N$. Especially, the transition energies from the ground state $\vert 0,0\rangle$ to the eigenstate $\vert n_{r},\pm \rangle$ are given by $n_{r}\hbar\omega_{\rm r} \pm \sqrt{n_{r}}\hbar g_{01}$.

Models for the driven-dissipative Josephson pendulum coupled to the harmonic oscillator {#sec:III}
=======================================================================================

Here, we provide a master equation approach that incorporates the effects of the drive and dissipation into the coupled system.
Previous studies on this system have typically truncated the transmon to the low-energy subspace spanned by the two lowest energy eigenstates [@Bishop2010; @Reed2010], or treated the dissipation in the conventional Lindblad formalism [@bishop2009]. Recent studies [@pietikainen2017; @pietikainen2018; @verney2018; @lescanne2018] have treated the dissipation in the detuned limit using the Floquet–Born–Markov approach. We will apply a similar formalism for the case where the pendulum and resonator are in resonance in the low-energy subspace. Especially, we study the driven-dissipative transition between the low-energy and the free rotor regimes of the pendulum in terms of the dependence of the number $N_{\rm r}$ of quanta in the resonator on the drive power.

Coupling to the drive
---------------------

The system shown in Fig. \[fig:oscpendevals\] and described by the Hamiltonian in Eq. (\[eq:H0\]) can be excited by coupling the resonator to a monochromatic driving signal modeled with the Hamiltonian $$\label{eq:Hd} \hat H_{\rm d} = \hbar A \cos(\omega_{\rm d}t)[\hat a^{\dag}+\hat a],$$ where $A$ and $\omega_{\rm d}$ are the amplitude and the angular frequency of the drive, respectively. This results in a total system Hamiltonian $\hat H_{\rm S} = \hat H_{0} + \hat{H}_{\rm d}$. For low-amplitude drive, only the first two states of the pendulum have a significant occupation and, thus, the Hamiltonian $\hat H_0$ can be truncated into the form of the well-known Rabi Hamiltonian in Eq. (\[eq:HRabi\]), which in turn, under the rotating-wave approximation, yields the standard Jaynes–Cummings Hamiltonian in Eq. (\[eq:HJC\]). The transitions induced by the drive in the Jaynes–Cummings system are subject to a selection rule – the excitation number can change only by one, i.e. $N \rightarrow N\pm 1$.
This follows from the relations $$\begin{aligned} \langle n_{r},\pm|(\hat a^{\dag}+\hat a)|0,0\rangle &=& \frac{1}{\sqrt{2}}\delta_{n_{r},1}, \label{eq:selection1}\\ \langle n_{r},\pm|(\hat a^{\dag}+\hat a)|\ell_{r},\pm \rangle &=& \frac{1}{2}\left(\sqrt{n_{r}}+\sqrt{n_{r}-1}\right)\delta_{n_{r},\ell_{r}+1} \nonumber\\ &+&\frac{1}{2}\left(\sqrt{n_{r}+1}+\sqrt{n_{r}}\right)\delta_{n_{r},\ell_{r}-1}.\label{eq:selection2}\end{aligned}$$ As a consequence, the system climbs up the Jaynes–Cummings ladder by one step at a time. Particularly, a system in the ground state is coupled directly only to the states $|1,\pm\rangle$. Indeed, in such a system the Jaynes–Cummings ladder has been observed [@fink2008], as well as the effect of strong drive in the off-resonant [@pietikainen2017] and on-resonant case [@fink2017]. The Jaynes–Cummings model offers a good starting point for understanding the phenomenon of photon blockade in the pendulum-resonator system, which will be discussed later in detail. Indeed, it is apparent from Eq. (\[eq:JCener\]) that, as the system is driven externally by fields that are not too intense, excitation to higher levels in the resonator is suppressed because those levels are off-resonant, due to the nonlinearity induced by the coupling. This is referred to as photon blockade. As the drive amplitude increases further, the entire Jaynes–Cummings hierarchy breaks down [@carmichael2015]. However, in weakly anharmonic systems such as the transmon, as the drive amplitude is increased, the higher excited states of the Josephson pendulum become occupied and the two-state approximation becomes insufficient. As a consequence, the system has to be modeled by a multilevel Hamiltonian that retains several transmon states. In the resonant case, the need to take into account the second excited state of the transmon has been pointed out already in Ref. [@fink2017].
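These matrix elements are easy to confirm numerically; the following sketch (ours, with an assumed Fock cutoff) builds the resonant dressed states of Eq. (\[eq:JCstates\]) and evaluates the drive matrix elements of Eqs. (\[eq:selection1\])–(\[eq:selection2\]).

```python
import numpy as np

# Sketch: drive selection rules of Eqs. (selection1)-(selection2) for
# the Jaynes-Cummings dressed states |n_r, ±> of Eq. (JCstates).

M = 8                                      # Fock cutoff (assumption)
a = np.diag(np.sqrt(np.arange(1, M)), 1)   # resonator annihilation
X = np.kron(a.T + a, np.eye(2))            # drive operator a† + a

def ket(n_r, q):                           # bare product state |n_r, q>
    v_r = np.zeros(M); v_r[n_r] = 1.0
    v_q = np.zeros(2); v_q[q] = 1.0
    return np.kron(v_r, v_q)

def dressed(n_r, sign):                    # dressed state |n_r, ±>
    return (ket(n_r, 0) + sign * ket(n_r - 1, 1)) / np.sqrt(2)

ground = ket(0, 0)
print(dressed(1, +1) @ X @ ground)         # 1/sqrt(2): allowed, N -> N+1
print(dressed(2, +1) @ X @ ground)         # 0: blocked from the ground state
print(dressed(2, +1) @ X @ dressed(1, +1)) # (sqrt(2)+1)/2: ladder climbing
```

The vanishing second amplitude is the selection rule at work: the ground state couples only to the first doublet, so the ladder must be climbed one rung at a time.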
Moreover, at larger drive amplitudes, the pendulum escapes the low-energy subspace defined by the states localized in a well of the cosine potential, and the unbound free-rotor states also become occupied [@pietikainen2017; @pietikainen2018; @lescanne2018; @verney2018] even in the case of strongly detuned drive frequency. In the limit of very high drive power, the pendulum behaves as a free rotor and the nonlinear potential can be neglected. Consequently, the resonance frequency of the system is set by the bare resonator frequency, instead of the normal modes.

Dissipative coupling
--------------------

The dissipation is treated by modeling the environment as a thermal bosonic bath which is coupled bilinearly to the resonator. The Hamiltonian of the driven system coupled to the bath can be written as $$\label{eq:totHam} \hat H = \hat H_{\rm S} + \hat H_{\rm B} + \hat H_{\rm int},$$ where $$\begin{aligned} \hat H_{\rm B} &=& \hbar \sum_k \Omega_k\hat b_k^{\dag}\hat b_k,\\ \hat H_{\rm int} &=& \hbar (\hat a^{\dag}+\hat a) \sum_k g_k(\hat b_k^{\dag}+\hat b_k).\label{eq:dissint}\end{aligned}$$ Above, $\{\hat b_k\}$, $\{\Omega_k\}$, and $\{g_k\}$ are the annihilation operators, the angular frequencies, and the coupling frequencies of the bath oscillators. We use this model in the derivation of a master equation for the reduced density operator of the system. We proceed in the conventional way and assume the factorized initial state $\hat \rho(0) = \hat\rho_{\rm S}(0)\otimes \hat \rho_{\rm B}(0)$, apply the standard Born and Markov approximations, trace over the bath, and perform the secular approximation. As a result, we obtain a master equation in the standard Lindblad form.

Lindblad master equation {#sec:Lindblad}
------------------------

Conventionally, the dissipation in the circuit QED setup has been treated using independent Lindblad dissipators for the resonator and for the pendulum.
Formally, this can be achieved by coupling the pendulum to another heat bath formed by an infinite set of harmonic oscillators. This interaction can be described with the Hamiltonian $$\label{eq:transdissint} \hat H_{\rm int}^{\rm t} = \hbar \hat n \sum_k f_k(\hat c_k^{\dag}+\hat c_k),$$ where $\{f_k\}$ and $\{\hat c_k\}$ are the coupling frequencies and the annihilation operators of the bath oscillators. The bath is coupled to the transmon through the charge operator $\hat n$ which is the typical source of decoherence in the charge-based superconducting qubit realizations. By following the typical Born–Markov derivation of the master equation for the uncoupled subsystems, one obtains a Lindblad equation where the dissipators induce transitions between the eigenstates of the uncoupled ($g=0$) system [@Breuer2002; @scala2007; @beaudoin2011; @tuorila2017] $$\begin{split} \frac{{\,\text{d}\hat\rho\,}}{{\,\text{d}t\,}} =& -\frac{i}{\hbar}[\hat{H}_{\rm S},\hat{\rho}] +\kappa[n_{\rm th}(\omega_{\rm r})+1]\mathcal{L}[\hat{a}]\hat\rho \\ &+\kappa n_{\rm th}(\omega_{\rm r})\mathcal{L}[\hat{a}^\dagger]\hat\rho \\ &+\sum_{k\ell} \Gamma_{k\ell}\mathcal{L}[|\ell\rangle\langle k|]\hat{\rho}, \end{split} \label{eq:LindbladME}$$ where $\mathcal{L}[\hat{A}]\hat\rho = \frac12 (2\hat{A}\hat\rho \hat{A}^{\dag} - \hat{A}^{\dag}\hat{A}\hat\rho-\hat\rho \hat{A}^{\dag}\hat{A})$ is the Lindblad superoperator and $n_{\rm th}(\omega)=1/[e^{\hbar\omega/(k_{\rm B} T)}-1]$ is the Bose–Einstein occupation. Note that the treatment of dissipation as superoperators acting separately on the qubit and on the resonator is valid if their coupling strength and the drive amplitude are weak compared to the transition frequencies of the uncoupled system-environment. Above, we have also assumed an ohmic spectrum for the resonator bath. 
In the Lindblad master equation (\[eq:LindbladME\]), we have included the effects arising from the coupling $g$ and the drive into the coherent von Neumann part of the dynamics. The first two incoherent terms cause transitions between the eigenstates of the resonator and arise from the interaction Hamiltonian in Eq. (\[eq:dissint\]). The strength of this interaction is characterized with the spontaneous emission rate $\kappa$. The last term describes the relaxation, excitation, and dephasing of the transmon caused by the interaction Hamiltonian in Eq. (\[eq:transdissint\]). The transition rates $\Gamma_{k\ell}$ between the transmon eigenstates follow Fermi's golden rule as $$\Gamma_{k\ell} = |\langle \ell | \hat n| k\rangle|^2 S(\omega_{k\ell}).$$ In our numerical implementation, we have assumed that the fluctuations of the transmon bath can also be characterized with an ohmic spectrum $S(\omega)=\frac{\gamma_0\omega}{1-\exp[-\hbar\omega/k_{\rm B}T]}$, where $\gamma_0$ is a dimensionless factor describing the bath-coupling strength. We have also denoted the transition frequencies of the transmon with $\omega_{k\ell} = \omega_{\ell}-\omega_k$. Here, the magnitude of the transition rate from state $|k\rangle$ to the state $|\ell\rangle$ is given by the corresponding matrix element of the coupling operator $\hat n$ and the coupling strength $\gamma_0$. We note that in a typical superconducting resonator-transmon realization one has $\gamma=\gamma_0 \omega_{01}\ll \kappa$. In this so-called bad-cavity limit, the effects of the transmon bath are negligible, especially if the coupling frequency $g$ with the resonator is large. Thus, the main contribution of the transmon dissipators in the master equation Eq. (\[eq:LindbladME\]) is that they result in faster convergence in the numerical implementation of the dynamics.
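The structure of these rates is captured in a few lines of code. The snippet below is our own sketch (placeholder matrix element, and the sign convention that downward transitions sample the spectrum at positive frequency), showing that the upward and downward rates obey detailed balance.

```python
import numpy as np

# Sketch: transmon rates Gamma = |<l|n|k>|^2 S(w) with the ohmic
# spectrum S(w) = gamma0*w / (1 - exp(-w/T)) quoted in the text
# (k_B = hbar = 1).  The matrix element is a placeholder; the point
# is the detailed-balance ratio of excitation to relaxation.

gamma0, T, w01 = 1e-4, 0.13, 1.0
n01 = 0.85                                  # placeholder |<0|n|1>|

def S(w):
    return gamma0 * w / (1.0 - np.exp(-w / T))

G_down = n01**2 * S(+w01)                   # relaxation  |1> -> |0>
G_up   = n01**2 * S(-w01)                   # excitation  |0> -> |1>
print(G_up / G_down)                        # equals exp(-w01/T)
```

The ratio of excitation to relaxation is the Boltzmann factor $e^{-\hbar\omega_{01}/k_{\rm B}T}$, so at the quoted temperature upward transitions are strongly suppressed.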
Floquet–Born–Markov formalism {#sec:FBM} ----------------------------- The dissipators in the Lindblad model above are derived under the assumption of weak driving and weak coupling between the transmon and the resonator. However, both the driving and the coupling affect the eigenstates of the system and, thus, have to be taken into account in the derivation of the master equation. This can be achieved in the so-called Floquet–Born–Markov approach, where the drive and the transmon-resonator coupling are explicitly included throughout the derivation of the dissipators [@tuorila2013; @pietikainen2017; @pietikainen2018; @lescanne2018; @verney2018]. For this purpose, we represent the system in terms of the quasienergy states which can be obtained only numerically. Since the drive in Eq. (\[eq:Hd\]) is $\tau=2\pi/\omega_{\rm d}$-periodic, the solution to the time-dependent Schrödinger equation $$\label{eq:tdse} i\hbar\frac{{\,\text{d}}\,}{{\,\text{d}t\,}}|\Psi(t)\rangle = \hat{H}_{\rm S}(t) |\Psi(t)\rangle,$$ corresponding to the Hamiltonian $\hat{H}_{\rm S}(t)$ in Eq. (\[eq:totHam\]), can be written in the form $$\label{eq:FloqState} |\Psi(t)\rangle = e^{-i\varepsilon t/\hbar} |\Phi(t)\rangle,$$ where $\varepsilon$ are the quasienergies and $|\Phi(t)\rangle$ are the corresponding $\tau$-periodic quasienergy states. By defining the unitary time-propagator as $$\label{eq:FloqProp} \hat{U}(t_2,t_1)|\Psi(t_1)\rangle =|\Psi(t_2)\rangle,$$ one can rewrite the Schrödinger equation (\[eq:tdse\]) in the form $$i\hbar\frac{{\,\text{d}}\,}{{\,\text{d}t\,}}\hat{U}(t,0) = \hat{H}_{\rm S}(t)\hat{U}(t,0).$$ Using Eqs. (\[eq:FloqState\]) and (\[eq:FloqProp\]), we obtain $$\begin{aligned} \hat{U}(\tau,0)|\Phi(0)\rangle &=& e^{-i\varepsilon \tau/\hbar} |\Phi(0)\rangle, \label{eq:QEproblem}\end{aligned}$$ from which the quasienergies $\varepsilon_\alpha$ and the corresponding quasienergy states $|\Phi_\alpha(0)\rangle$ can be solved. 
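The quasienergy problem of Eq. (\[eq:QEproblem\]) is straightforward to implement numerically. As a self-contained illustration (our sketch: a driven two-level stand-in for the full system, with assumed parameters), the one-period propagator is accumulated step by step and its eigenphases yield the quasienergies folded into one Brillouin zone.

```python
import numpy as np

# Sketch: quasienergies from the one-period propagator U(tau, 0) of
# Eq. (QEproblem) for a driven two-level system (illustrative stand-in
# for the transmon-resonator system; hbar = 1, assumed parameters).

wq, wd, A = 1.0, 0.98, 0.05
tau = 2 * np.pi / wd
steps = 2000
dt = tau / steps

sz = np.diag([0.0, 1.0])                   # |1><1|
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

U = np.eye(2, dtype=complex)
for j in range(steps):                     # midpoint piecewise-constant H
    t = (j + 0.5) * dt
    H = wq * sz + A * np.cos(wd * t) * sx
    vals, vecs = np.linalg.eigh(H)
    U = (vecs * np.exp(-1j * vals * dt)) @ vecs.conj().T @ U

# exp(-i*eps*tau) are the eigenvalues of U; fold eps into [0, wd)
eps = np.sort(-np.angle(np.linalg.eigvals(U)) / tau % wd)
print(eps)
```

The two eigenphases of the unitary monodromy matrix give the quasienergies modulo the drive quantum $\hbar\omega_{\rm d}$, exactly as stated below Eq. (\[eq:QEproblem\]).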
Using the propagator $\hat U$, one can obtain the quasienergy states for all times from $$\hat{U}(t,0)|\Phi_\alpha(0)\rangle = e^{-i\varepsilon_\alpha t/\hbar} |\Phi_\alpha(t)\rangle.$$ Due to the periodicity of $|\Phi_\alpha(t)\rangle$, it is sufficient to find the quasienergy states for the time interval $t\in[0,\tau ]$. Also, if $\varepsilon_\alpha$ is a solution for Eq. (\[eq:QEproblem\]), then $\varepsilon_\alpha +\ell\hbar\omega_{\rm d}$ is also a solution. Indeed, all solutions of Eq. (\[eq:QEproblem\]) can be obtained from the solutions of a single energy interval of $\hbar\omega_{\rm d}$. These energy intervals are called Brillouin zones, in analogy with the terminology used in solid-state physics for periodic potentials. The master equation for the density operator in the quasienergy basis can be written as [@Blumel1991; @Grifoni1998] $$\label{eq:FBM} \begin{split} \dot{\rho}_{\alpha\alpha}(t) &= \sum_{\nu} \left[\Gamma_{\nu\alpha}\rho_{\nu\nu}(t)-\Gamma_{\alpha\nu}\rho_{\alpha\alpha}(t)\right],\\ \dot{\rho}_{\alpha\beta}(t) &= -\frac12 \sum_{\nu}\left[\Gamma_{\alpha\nu}+\Gamma_{\beta\nu}\right]\rho_{\alpha\beta}(t), \ \ \alpha\neq \beta, \end{split}$$ where $$\begin{split} \Gamma_{\alpha\beta}&=\sum_{\ell=-\infty}^{\infty} \left[\gamma_{\alpha\beta \ell}+n_{\rm th}(|\Delta_{\alpha\beta \ell}|)\left(\gamma_{\alpha\beta \ell}+\gamma_{\beta \alpha -\ell}\right)\right],\\ \gamma_{\alpha\beta \ell} &= \frac{\pi}{2} \kappa \theta(\Delta_{\alpha\beta\ell})\frac{\Delta_{\alpha\beta\ell}}{\omega_{\rm r}}|X_{\alpha\beta\ell}|^2. \end{split}$$ Above, $\theta(\omega)$ is the Heaviside step-function and $\hbar\Delta_{\alpha \beta \ell} = \varepsilon_{\alpha} - \varepsilon_{\beta} + \ell\hbar\omega_{\rm d}$ is the energy difference between the states $\alpha$ and $\beta$ in Brillouin zones separated by $\ell$. 
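The diagonal part of Eq. (\[eq:FBM\]) is a classical rate equation, whose steady state is the null vector of the corresponding rate matrix, normalized to unit total probability. A minimal sketch with illustrative placeholder rates $\Gamma_{\alpha\nu}$:

```python
import numpy as np

# Steady state of the diagonal part of Eq. (FBM): populations p solve
# L p = 0 with sum(p) = 1, where Gamma[a, b] is the rate from quasienergy
# state a to state b. The 3x3 rates below are illustrative placeholders.

def steady_state(Gamma):
    n = Gamma.shape[0]
    L = Gamma.T - np.diag(Gamma.sum(axis=1))   # dp/dt = L @ p
    # replace one (redundant) balance equation by the normalization sum(p) = 1
    M = np.vstack([L[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(M, b)

Gamma = np.array([[0.0, 1.0, 0.2],
                  [3.0, 0.0, 0.1],
                  [0.5, 0.4, 0.0]])
p = steady_state(Gamma)
```

The columns of $L$ sum to zero by construction, so one balance equation is redundant and can be traded for the normalization condition.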
Also, $$X_{\alpha\beta \ell} = \frac{1}{\tau}\int_{t_0}^{t_0 +\tau} {\,\text{d}t\,} e^{-i\ell\omega_d t} \langle \Phi_\alpha(t)|(\hat{a}^\dagger+\hat{a})|\Phi_\beta(t)\rangle,$$ where $t_0$ is some initial time after the system has reached a steady state. From Eq. (\[eq:FBM\]), we obtain the occupation probabilities $p_\alpha=\rho_{\alpha\alpha}(t\rightarrow \infty)$ in the steady state as $$p_{\alpha} = \frac{\sum_{\nu\neq \alpha} \Gamma_{\nu\alpha}p_{\nu}}{\sum_{\nu\neq \alpha}\Gamma_{\alpha\nu}},$$ and the photon number $$\label{eq:FBMNr} N_{\rm r} = \sum_\alpha p_\alpha\langle \hat{a}^\dagger\hat{a}\rangle_\alpha,$$ where $$\langle \hat{a}^\dagger\hat{a}\rangle_\alpha= \frac{1}{\tau}\int_{t_0}^{t_0 +\tau} {\,\text{d}t\,} \langle \Phi_\alpha(t)|\hat{a}^\dagger\hat{a}|\Phi_\alpha(t)\rangle,$$ is the photon number in a single quasienergy state. The occupation probability for the transmon state $\vert k\rangle$ is given by $$\label{eq:FBMPk} P_k= \frac{1}{\tau}\sum_\alpha p_\alpha\int_{t_0}^{t_0 +\tau} {\,\text{d}t\,} \langle \Phi_\alpha(t)|k\rangle\langle k|\Phi_\alpha(t)\rangle \,.$$ We emphasize that this method assumes weak coupling to the bath but no such restrictions are made for the drive and pendulum-resonator coupling strengths. As a consequence, the dissipators induce transitions between the quasienergy states of the driven coupled system. Parameters ---------- The parameter space is spanned by seven independent parameters which are shown in Table \[tab:params1\]. Symbol Parameter Value ------------------ ------------------------------------ ------- $\omega_{\rm q}$ qubit frequency 1.0 $\omega_{\rm d}$ drive frequency 0.98 $\omega_{\rm p}$ plasma oscillation frequency 1.08 $g$ coupling frequency 0.04 $\kappa$ resonator dissipation rate 0.002 $k_{\rm B}T$ thermal energy 0.13 $E_{\rm C}$ charging energy 0.07 $\eta$ energy ratio $E_{\rm J}/E_{\rm C}$ 30 : Parameters of the driven and dissipative oscillator-pendulum system. 
The numerical values of the angular frequencies and energies used in the numerical simulations are given in units of $\omega_{\rm r}$ and $\hbar\omega_{\rm r}$, respectively. We note that $\omega_{\rm q}$ is determined by $E_{\rm C}$ and $\eta$, see the text.[]{data-label="tab:params1"} We fix the values of the energy ratio $\eta=E_{\rm J}/E_{\rm C}$ and the coupling strengths $g$ and $\kappa$. The ratio $\eta$ sets the number $K_{\rm b}$ of bound states in the pendulum, see Appendix \[app:eigenvalue\], but does not qualitatively affect the response. We have used a moderate value of $\eta$ in the transmon regime, in order to keep $K_{\rm b}$ low, allowing a more elaborate discussion of the transient effects between the low-energy oscillator and rotor limits. We use the Born, Markov, and secular approximations in the description of dissipation, which means that the value of $\kappa$ has to be smaller than the system frequencies. In addition, we work in the experimentally relevant strong coupling regime where the oscillator-pendulum coupling $g\gg \kappa$. The choice of parameters is similar to that of the recently realized circuit with the same geometry [@pietikainen2017]. The transition energies of the transmon are determined by the Josephson energy $E_{\rm J}$ and by the charging energy $E_{\rm C}$, which can be adjusted by the design of the shunting capacitor $C_{\rm B}$, see Fig. \[fig:oscpendevals\]. The transition energy between the lowest two energy eigenstates is given by $\hbar\omega_{\rm q} \approx \sqrt{8E_{\rm J}E_{\rm C}}-E_{\rm C} = E_{\rm C} (\sqrt{8\eta}-1)$. We will study the onset of the nonlinearities for different drive detunings $\delta_{\rm d}=\omega_{\rm d}-\omega_{\rm r}$ as a function of the drive amplitude $A$. We are especially interested in the resonant case $\delta_{\rm q}=\omega_{\rm q}-\omega_{\rm r}=0$. The detuned case has been previously studied in more detail in Refs. [@pietikainen2017; @pietikainen2018; @lescanne2018; @verney2018].
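As a quick consistency check of Table \[tab:params1\], the transition and plasma frequencies follow from $E_{\rm C}$ and $\eta$ alone; the sketch below reproduces the tabulated values $\omega_{\rm p}\approx 1.08$ and $\omega_{\rm q}\approx 1.0$ (the latter comes out as $\approx 1.014$, consistent with the rounded table entry).

```python
import numpy as np

# Transmon frequencies from E_C and eta = E_J/E_C (units of w_r):
#   wp = sqrt(8*EJ*EC) = EC*sqrt(8*eta),   wq ~ wp - EC.
EC, eta = 0.07, 30.0
EJ = eta * EC
wp = np.sqrt(8.0 * EJ * EC)   # plasma oscillation frequency
wq = wp - EC                  # lowest transition frequency
```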
We have used a temperature value of $k_{\rm B}T/(\hbar \omega_{\rm r})=0.13$, which corresponds to $T\approx 30$ mK for a transmon with $\omega_{\rm q}/(2\pi) = 5$ GHz. Numerical results {#sec:IV} ================= Classical system ---------------- Classically, we can understand the behaviour of our system as follows: the pendulum-resonator forms a coupled system, whose normal modes can be obtained. However, because the pendulum is nonlinear, the normal-mode frequencies of the coupled system depend on the oscillation amplitude of the pendulum. The resonator also acts as a filter for the drive, which is thus applied to the pendulum. As the oscillation amplitude of the pendulum increases, the normal-mode frequency shifts, an effect which is responsible for the photon blockade. Eventually the pendulum reaches the free rotor regime, where the Josephson energy becomes negligible. As a consequence, the nonlinearity no longer plays any role, and the resulting eigenmode of the system is that of the bare resonator. We first solve the classical equation of motion (see Appendix \[app:classeom\]) for the driven and damped resonator-transmon system. We study the steady-state occupation $N_{\rm r}$ of the resonator as a function of the drive amplitude. Classically, one expects that the coupling to the transmon causes deviations from the bare resonator occupation $$\label{eq:anocc} N_{\rm bare} = \frac14\frac{A^2}{\delta_{\rm d}^2 + \kappa^2/4}.$$ We emphasize that $N_{\rm bare} \leq A^2/\kappa^2$, with equality obtained when the drive is in resonance, i.e. when $\delta_{\rm d}=0$. The numerical data for $\delta_{\rm d}/\omega_{\rm r} =-0.02$ is shown in Fig. \[fig:classsteps\]. We compare the numerical data against the bare-resonator photon number in Eq.
(\[eq:anocc\]), and against the photon number of the linearized system, see Appendix \[app:classeom\], $$\label{eq:linearNr} N_{\rm lin} = \frac{A^2}{4}\frac{1}{\left(\delta_{\rm d}-g_{\rm eff}^2\frac{\delta_{\rm p}}{\delta_{\rm p}^2+\gamma^2/4}\right)^2+\left(\frac{\kappa}{2}+g_{\rm eff}^2\frac{\gamma/2}{\delta_{\rm p}^2+\gamma^2/4}\right)^2},$$ where $\delta_{\rm p} = \omega_{\rm d}-\omega_{\rm p}$, $\hbar \omega_{\rm p} = \sqrt{8E_{\rm J}E_{\rm C}}$, $g_{\rm eff} = g\sqrt[4]{\eta/32}$, and $\gamma$ is the dissipation rate of the pendulum. The above result is obtained by linearizing the pendulum potential, which results in a system equivalent to two coupled harmonic oscillators. We find in Fig. \[fig:classsteps\] that for the small drive amplitude $A/\kappa = 0.005$, the steady state of the resonator photon number is given by that of the linearized system. As a consequence, both degrees of freedom oscillate at the drive frequency and the system is classically stable. The small deviation between the numerical and analytic steady-state values is caused by the rotating-wave approximations that were made for the coupling and the drive in the derivation of Eq. (\[eq:linearNr\]). If the drive amplitude is increased to $A/\kappa =7$, the nonlinearities caused by the cosinusoidal Josephson potential generate chaotic behavior in the pendulum. As a consequence, the photon number does not reach a steady state but, instead, displays aperiodic chaotic oscillations around some fixed value between those of the bare resonator and the linearized system. This value can be found by studying the long-time average in the steady state. For a very high drive amplitude $A/\kappa = 500$, the photon number in the classical system is given by that of the bare resonator in Eq. (\[eq:anocc\]). Physically this means that for strong driving, the pendulum experiences rapid free rotations and, as a consequence, its contribution to the photon dynamics is zero on average. In Fig.
\[fig:Occupation7\](a), we study in more detail how the classical steady-state photon number of the resonator changes as the coupled system goes through the transition between the linearized oscillations in the weak driving regime and the bare-resonator oscillations for strong driving. In the absence of driving the steady-state photon number is zero in accordance to Eq. (\[eq:linearNr\]). For low drive amplitudes, the resonator-transmon system can be approximated as a driven and damped Duffing oscillator. We show in Appendix \[app:classeom\] that the system has one stable solution for drive amplitudes $A<A_{\rm min}$ and $A>A_{\rm crit}$, and two stable solutions for $A_{\rm min}<A<A_{\rm crit}$ where $$\begin{aligned} A_{\rm min} &=& \tilde\gamma\sqrt{2(\tilde\omega_{\rm p}^2-\omega_{\rm d}^2)}\frac{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}{g\omega_{\rm r}\omega_{\rm p}},\label{eq:duffminimal}\\ A_{\rm crit} &=& \sqrt{\frac{8}{27}}\sqrt{(\tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2)^3}\frac{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}{g\omega_{\rm r}\omega_{\rm d}\omega_{\rm p}},\label{eq:duffan}\end{aligned}$$ where we have defined the renormalized oscillator frequency and transmon dissipation rate as $\tilde{\omega}_{\rm r}^2 = \omega_{\rm r}^2 - g^2\hbar \omega_{\rm r}/(4E_{\rm C})$ and $\tilde{\gamma} = \gamma+gg_1\kappa\omega_{\rm d}^2/[(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2]$, respectively, the classical oscillation frequency of the linearized transmon as $\hbar\omega_{\rm p}=\sqrt{8E_{\rm J}E_{\rm C}}$, and the renormalized linearized transmon frequency as $\tilde{\omega}_{\rm p}^2 = \omega_{\rm p}^2-g^2 \omega_{\rm d}^2/(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)[\hbar \omega_{\rm r}/(4E_{\rm C})]$. 
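For reference, Eqs. (\[eq:anocc\]) and (\[eq:linearNr\]) can be evaluated directly for the parameters of Table \[tab:params1\], with the pendulum dissipation rate $\gamma/\omega_{\rm r}=2\times10^{-4}$ quoted in the caption of Fig. \[fig:classsteps\]; a short sketch:

```python
import numpy as np

# Evaluation of Eqs. (anocc) and (linearNr) for the parameters of
# Table [tab:params1] with gamma/w_r = 2e-4 (Fig. [fig:classsteps]).
EC, eta, g = 0.07, 30.0, 0.04
kappa, gamma = 0.002, 2e-4
wr, wd = 1.0, 0.98
wp = EC * np.sqrt(8.0 * eta)          # linearized transmon (plasma) frequency
dd, dp = wd - wr, wd - wp             # drive detunings delta_d, delta_p
g_eff = g * (eta / 32.0) ** 0.25

def N_bare(A):
    """Bare-resonator occupation, Eq. (anocc)."""
    return 0.25 * A**2 / (dd**2 + kappa**2 / 4.0)

def N_lin(A):
    """Linearized two-oscillator occupation, Eq. (linearNr)."""
    D = dp**2 + gamma**2 / 4.0
    re = dd - g_eff**2 * dp / D                       # shifted detuning
    im = kappa / 2.0 + g_eff**2 * (gamma / 2.0) / D   # broadened linewidth
    return 0.25 * A**2 / (re**2 + im**2)
```

For these parameters the coupling shifts the effective resonance closer to the drive, so $N_{\rm lin}$ exceeds $N_{\rm bare}$ at any given amplitude, in line with Fig. \[fig:classsteps\].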
For amplitudes $A<A_{\rm min}$, the classical system behaves as a two-oscillator system and the photon number has the typical quadratic dependence on the drive amplitude in Eq. (\[eq:linearNr\]). As the drive amplitude becomes larger than $A_{\rm min}$, deviations from the linearized model emerge. In addition, the system becomes bistable. If $A \approx A_{\rm crit}$, the number of stable solutions for the Duffing oscillator is reduced from two to one. This is displayed by the abrupt step in the photon number of the classical solution in Fig. \[fig:Occupation7\] around $A_{\rm crit}/\kappa =1.2$. The remaining high-amplitude stable solution appears as a plateau which reaches up to the drive amplitude $A/\kappa \approx 5.6$. If the drive amplitude is further increased, the higher order terms beyond the Duffing approximation render the motion of the classical system chaotic, as described already in Fig. \[fig:classsteps\]. For large drives, the classical photon number approaches asymptotically the photon number of the bare resonator. ![Classical dynamics of the resonator occupation $N_{\rm r}$ of the driven and dissipative resonator-transmon system. We show data for the linear ($A/\kappa = 0.005$), chaotic ($A/\kappa = 7$), and bare-resonator ($A/\kappa = 500$) regimes. The bare-oscillator occupation in the steady state is given by Eq. (\[eq:anocc\]) and indicated with dashed lines. We also show with dot-dashed lines the steady-state photon numbers for the linearized system, as given in Eq. (\[eq:linearNr\]). We have used the pendulum dissipation rate $\gamma/\omega_{\rm r} =2\times 10^{-4}$. The other parameters are listed in Table \[tab:params1\].[]{data-label="fig:classsteps"}](fig2){width="1.0\linewidth"} Quantum description {#sec:quantdesc} ------------------- The transition between the motion of linearized and bare-resonator oscillations is characteristic of oscillator-pendulum systems.
However, we show here that in the quantum mechanical context, the onset of the nonlinear dynamical behaviour turns out to be quantitatively different from that provided by the above classical model. This was also observed in a recent experimental realization with superconducting circuits [@pietikainen2017]. In the quantum-mechanical treatment, we calculate the steady-state photon number in the resonator as a function of the drive amplitude using the Floquet–Born–Markov master equation presented in Sec. \[sec:FBM\]. We have confirmed that for the values of the drive amplitude used, the simulation has converged with a truncation of seven transmon states and 60 resonator states. We compare the quantum results against those given by the classical equation of motion and also study deviations from the results obtained with the two-state truncation of the transmon. In Fig. \[fig:Occupation7\], we present the results corresponding to the gate charge $n_{\rm g}=0$, where the resonator, the transmon, and the drive are nearly resonant at low drive amplitudes. The parameters used are the same as in Fig. \[fig:classsteps\] and listed in Table \[tab:params1\]. ![Onset of the nonlinearities in the driven system. (a) The steady-state photon number $N_{\rm r}$ as a function of the drive amplitude. We compare the Floquet–Born–Markov (FBM) simulation with seven transmon states against the corresponding solutions for the Rabi Hamiltonian and the classical system. The classical region of bistability occurs between $A_{\rm min}/\kappa = 0.97$ and $A_{\rm crit}/\kappa=1.2$, given by Eqs. (\[eq:duffminimal\]) and (\[eq:duffan\]), respectively. The classical simulation demonstrates switching between the two stable solutions at $A\approx A_{\rm crit}$. We also show the photon numbers of the linearized system and the bare resonator, as given by Eqs. (\[eq:linearNr\]) and (\[eq:anocc\]), respectively. Note that both axes are shown in logarithmic scale.
(b) Occupation probabilities $P_k$ of the transmon eigenstates calculated using FBM. We indicate the regime of classical response with the shaded region in both figures. (c) Order parameter $\Xi$ defined in Eq. (\[eq:orderparameter\]). We have used $n_{\rm g}=0$ and the drive detuning $\delta_{\rm d}/\omega_{\rm r} = -0.02$. Other parameters are listed in Table \[tab:params1\].[]{data-label="fig:Occupation7"}](fig3){width="1.0\linewidth"} First, we notice in Fig. \[fig:Occupation7\](a) that even in the absence of driving there always exists a finite photon occupation of $N_{\rm r} \approx 10^{-3}$ in the ground state, contrary to the classical solution, which approaches zero. At zero temperature, the existence of these ground-state photons [@lolli2015] originates from the terms in the interaction Hamiltonian that do not conserve the number of excitations and are neglected in the rotating-wave approximation resulting in Eq. (\[eq:ManyStatesHam\_simple\]). For the two-state truncation of the transmon, one can derive a simple analytic result for the ground-state photon number by treating these terms as a small perturbation. To second order in the perturbation parameter $g/\omega_{\rm r}$, one obtains that the number of ground-state photons is given by $N_{\rm r} \approx (g/2\omega_{\rm r})^2$. We have confirmed that our simulated photon number at zero driving is in accordance with this analytic result if $T=0$ and $g/\omega_{\rm r}\ll 1$. The photon number at zero driving obtained in Fig. \[fig:Occupation7\](a) is slightly higher due to additional thermal excitation: in the simulations we use a finite temperature, see Table \[tab:params1\]. As was discussed in the previous section, the resonator photon number of a classical system increases quadratically with the drive amplitude. For amplitudes $A<A_{\rm crit}$, the classical system can be approximated with a linearized model formed by two coupled harmonic oscillators.
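The perturbative ground-state photon number quoted above can be checked against an exact diagonalization of the two-level (Rabi) truncation at $T=0$; a minimal sketch with the parameters of Table \[tab:params1\]:

```python
import numpy as np

# Exact diagonalization of the Rabi (two-state) truncation at T = 0,
# checking the perturbative ground-state photon number N_r ~ (g/2wr)^2.
wr, wq, g = 1.0, 1.0, 0.04
n_max = 12                                     # Fock-space truncation

a = np.diag(np.sqrt(np.arange(1, n_max)), k=1) # annihilation operator
num = a.T @ a                                  # photon-number operator
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2, Ir = np.eye(2), np.eye(n_max)

# H = wr a'a + (wq/2) sz + g (a' + a) sx, in the photon (x) qubit basis
H = wr * np.kron(num, I2) + 0.5 * wq * np.kron(Ir, sz) + g * np.kron(a + a.T, sx)
vals, vecs = np.linalg.eigh(H)
gs = vecs[:, 0]                                # ground state
N_gs = gs @ np.kron(num, I2) @ gs              # exact <a'a> in the ground state
N_pert = (g / (2.0 * wr)) ** 2                 # perturbative estimate
```

For $g/\omega_{\rm r}=0.04$ the exact and perturbative values agree closely, as expected in the weak-coupling regime.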
However, in the quantum case the energy levels are discrete and, thus, the system responds only to a drive which is close to resonance with one of the transitions. In addition, the energy levels have non-equidistant separations, which leads to a reduction of the photon number compared to the corresponding classical case, referred to as the photon blockade. This is also apparent in Fig. \[fig:Occupation7\](a). We emphasize that the photon blockade is quantitatively strongly dependent on the transmon truncation. This can be seen as the deviation between the two- and seven-state truncation results for $A/\kappa>1$ in Fig. \[fig:Occupation7\](a). We further demonstrate this by showing the transmon occupations $P_{\rm k}$ in Fig. \[fig:Occupation7\](b). For weak drive amplitudes, the transmon stays in its ground state. The excitation of the two-level system is accompanied by excitations of the transmon to several of its bound states. If $A/\kappa \geq 30$, the transmon escapes its potential well and also the free rotor states start to gain a finite occupation. This can be interpreted as a transition between the Duffing oscillator and free rotor limits of the transmon, see Appendix \[app:eigenvalue\]. As a consequence, the response of the quantum system resembles its classical counterpart. We will study the photon blockade in more detail in the following section. ### Order parameter {#order-parameter .unnumbered} In order to characterize the transition between the quantum and classical regimes, we can also study the behaviour of the order parameter $\Xi$, defined as the expectation value of the coupling part of the Hamiltonian in Eq. (\[eq:ManyStatesHam\]), normalized by $\hbar g$, as $$\label{eq:orderparameter} \Xi = \left|\left\langle (\hat a^{\dag} + \hat a)\sum_{k,\ell}\hat \Pi_{k,\ell}\right\rangle\right|,$$ previously introduced and used for the off-resonant case in Ref. [@pietikainen2017].
To get an understanding of its behavior, let us evaluate it for the resonant Jaynes–Cummings model, $$\label{eq:orderparameterJC} \Xi_{\rm JC} = \left|\left\langle n_{r}, \pm| (\hat a^{\dag} + \hat a)\sigma_{x} |n_{r},\pm \right\rangle \right| = \sqrt{n_{r}},$$ therefore it correctly estimates the magnitude of the cavity field. At the same time, when applied to the full Rabi model, it includes the effect of the terms that do not conserve the excitation number. In Fig. \[fig:Occupation7\](c), we present $\Xi$ as a function of the drive amplitude $A$. Much like in the off-resonant case, this order parameter displays a marked increase by one order of magnitude across the transition region. Photon blockade: dependence on the drive frequency -------------------------------------------------- ![image](fig4){width="1.0\linewidth"} Here, we discuss in more detail the phenomenon of photon blockade in the pendulum-resonator system as a function of the drive detuning $\delta_{\rm d} = \omega_{\rm d}-\omega_{\rm r}$. First, we consider the transition between the ground state and the state $|n_{r},\pm\rangle$ \[Eq. (\[eq:JCstates\])\] of the resonant Jaynes–Cummings system ($\omega_{\rm q}=\omega_{\rm r}$). We recall that the selection rules Eqs. (\[eq:selection1\]) and (\[eq:selection2\]) allow only direct transitions that change the excitation number by one. However, at higher amplitudes the probability of higher order processes is no longer negligible and excited states can be populated by virtual non-resonant single-photon transitions. As a consequence, one obtains the resonance condition for multi-photon transitions as $n_{r}\omega_{\rm d} =n_{r}\omega_{\rm r} \pm \sqrt{n_{r}}g$. Because the energy-level structure is non-equidistant, the drive couples only weakly to other transitions in the system.
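Equation (\[eq:orderparameterJC\]) is straightforward to verify numerically by building the standard resonant Jaynes–Cummings eigenstates $|n_{r},\pm\rangle=(|n_{r}-1,e\rangle\pm|n_{r},g\rangle)/\sqrt{2}$ in a truncated Fock space:

```python
import numpy as np

# Check of Eq. (orderparameterJC): for the resonant JC eigenstates
# |n,±> = (|n-1,e> ± |n,g>)/sqrt(2) one gets |<(a' + a) sigma_x>| = sqrt(n).
n_max = 8
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.kron(a + a.T, sx)                 # (a' + a) sigma_x, photon (x) qubit

def jc_state(n, sign):
    """Resonant JC eigenstate |n,±> in the photon (x) qubit basis (|e>, |g>)."""
    e, g_ = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    fock = lambda k: np.eye(n_max)[k]
    return (np.kron(fock(n - 1), e) + sign * np.kron(fock(n), g_)) / np.sqrt(2.0)

psi = jc_state(3, +1)
xi = abs(psi @ X @ psi)                  # equals sqrt(3) for n = 3
```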
In the absence of dissipation, the dynamics of the Jaynes–Cummings system can, thus, be approximated in a subspace spanned by the states $\{|0,0\rangle,|n_{r},\pm\rangle\}$. Thus, one expects that, due to the driving, the system goes through $n_{r}$-photon Rabi oscillations between the basis states of the subspace. The Rabi frequency $\Omega_{n_{r},\pm}$ of such a process is proportional to the corresponding matrix element of the driving Hamiltonian in Eq. (\[eq:Hd\]) and the drive amplitude $A$. Consequently, the time-averaged photon number in the system is $N_{\rm r} = (n_{r}-\frac{1}{2})/2$. The driving does not, however, lead to a further increase of the photon number, either because the drive is not resonant with transitions from the state $|n_{r},\pm\rangle$ to higher excited states or because the matrix elements of the resonant transitions are negligibly small. We refer to this phenomenon as the $n_{r}$-photon blockade. Dissipation modifies this picture somewhat, as it causes transitions outside the resonantly driven subspace. As a consequence, the average photon number decays with a rate which is proportional to $\kappa$. Thus, the steady state of such a system is determined by the competition between the excitation and relaxation processes caused by the drive and the dissipation, respectively. At low temperatures, the occupation in the ground state becomes more pronounced as the dissipation causes mostly downward transitions. Thus, the steady-state photon number is reduced compared to the time-averaged result for the Rabi-driven non-dissipative transition. This was visible already in Fig. \[fig:Occupation7\], in which the data was obtained with the two-state truncation and corresponds to the 4-photon blockade of the Jaynes–Cummings system. The diagram in Fig. \[fig:DriveDetuning\](a) represents the eigenenergies of the Hamiltonian in Eq. (\[eq:ManyStatesHam\]) in the two-state truncation for the transmon.
The states are classified according to the excitation number $N$ from Eq. (\[eq:exitationN\_2\]). We note that, here, we do not make a rotating-wave approximation and, strictly speaking, $N$ is therefore not a good quantum number. However, it still provides a useful classification of the states since the coupling frequency is relatively small, i.e. $g/\omega_{\rm r}=0.04$. In Fig. \[fig:DriveDetuning\](c), we show the photon blockade spectrum of the resonator-transmon system as a function of the drive detuning $\delta_{\rm d}$, obtained numerically with the Floquet–Born–Markov master equation. Here, one can clearly identify the one-photon blockade at the locations where the drive frequency is in resonance with the single-photon transition frequency of the resonator-transmon system [@bishop2009], i.e. when $\delta_{\rm d}= \pm g$. Two-, three-, and higher-order blockades occur at smaller detunings and higher drive amplitudes, similar to Ref. [@carmichael2015]. Transitions involving up to five drive photons are denoted in the diagram in Fig. \[fig:DriveDetuning\](a) and are vertically aligned with the corresponding blockades in Fig. \[fig:DriveDetuning\](c). At zero detuning, there is no excitation, as the coupling to the transmon shifts the energy levels of the resonator so that there is no transition corresponding to the energy $\hbar\omega_{\rm r}$. We also note that the photon-number spectrum is symmetric with respect to the drive detuning $\delta_{\rm d}$. We see the same symmetry also in Eq. (\[eq:linearNr\]) for the linearized classical system when the classical linearized frequency of the transmon is in resonance with the resonator frequency, i.e. when $\omega_{\rm p}=\omega_{\rm r}$. However, in experimentally relevant realisations of such systems the higher excited states have a considerable quantitative influence on the photon-number spectrum. We demonstrate this by showing data for the seven-state transmon truncation in Figs.
\[fig:DriveDetuning\](b) and (d). The eigenenergies shown in Fig. \[fig:DriveDetuning\](b) are those obtained in Fig. \[fig:oscpendevals\] at resonance ($\omega_{\rm r}=\omega_{\rm q}$). We have again confirmed that for our choice of drive amplitudes and other parameters, this truncation is sufficient to obtain converged results with the Floquet–Born–Markov master equation. We observe that the inclusion of the higher excited states considerably changes the observed photon-number spectrum. However, the states can again be labeled by the excitation number $N$, which we have confirmed by numerically calculating $N=\langle \hat N\rangle$ for all states shown in Fig. \[fig:DriveDetuning\](b). The relative difference from whole integers is less than one percent for each state shown. Corresponding to each $N$, the energy diagram forms blocks containing $N+1$ eigenstates with (nearly) the same excitation number, similar to the doublet structure of the Jaynes–Cummings model. Contrary to the two-state case, these blocks start to overlap if $N>4$ for our set of parameters, as can be seen in Fig. \[fig:DriveDetuning\](b). The number of transitions that are visible for our range of drive frequencies and amplitudes in Fig. \[fig:DriveDetuning\](d) is, thus, increased from the ten observed in the Jaynes–Cummings case to 15 in the seven-state system. However, some of these transitions are not visible for our range of amplitudes due to the fact that the corresponding virtual one-photon transitions are not resonant and/or have small transition matrix elements. In addition, the spectrum is asymmetric with respect to the detuning, as the multi-photon resonances are shifted towards larger values of $\delta_{\rm d}$. As a consequence, the break-down of the photon blockade at $\delta_{\rm d}=0$ occurs at much lower amplitudes than is observed in the Jaynes–Cummings system [@carmichael2015].
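The multi-photon resonance condition $n_{r}\omega_{\rm d}=n_{r}\omega_{\rm r}\pm\sqrt{n_{r}}g$ discussed above directly gives the detunings at which the blockades of the resonant Jaynes–Cummings system are expected; a one-line evaluation for $g/\omega_{\rm r}=0.04$:

```python
import numpy as np

# Detunings of the n-photon blockade resonances of the resonant JC system:
#   n*wd = n*wr ± sqrt(n)*g   =>   delta_d = ± g/sqrt(n).
g = 0.04
n = np.arange(1, 6)
delta = g / np.sqrt(n)     # positive branch; the negative branch mirrors it
```

The resonances thus accumulate towards $\delta_{\rm d}=0$ with increasing photon number $n_{r}$.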
Approaching the ultrastrong coupling ------------------------------------ For most applications in quantum information processing a relative coupling strength $g/\omega_{\rm r}$ of a few percent is sufficient. However, recent experiments with superconducting circuits have demonstrated that it is possible to increase this coupling into the ultrastrong regime ($g/\omega_{\rm r} \sim 0.1 - 1$) and even further into the deep strong coupling regime ($g/\omega_{\rm r} \geq 1$) [@FornDiaz2018; @Gu2017; @Kockum2019]. While the highest values have been obtained so far with flux qubits, vacuum-gap transmon devices with a similar electrical circuit as in Fig. \[fig:oscpendevals\](a) can reach $g/\omega_{\rm r} = 0.07$ [@Bosman2017a] and $g/\omega_{\rm r} = 0.19$ [@Bosman2017b]. In Fig. \[fig:ultrastrong\], we show results for the average number of photons in the resonator for couplings $g/\omega_{\rm r} = 0.04, 0.06$, and $0.1$, employing the Floquet–Born–Markov approach to dissipation. At low drive powers the two-level approximation can be used for the transmon, and the Josephson pendulum-resonator system maps into the quantum Rabi model. From Fig. \[fig:ultrastrong\] we see that the average number of photons $N_{\rm r}$ in the resonator is not zero even in the ground state; this number clearly increases as the coupling gets stronger. As noted also before, this is indeed a feature of the quantum Rabi physics: differently from the Jaynes–Cummings model, where the ground state contains zero photons, the terms that do not conserve the excitation number in $\hat{H}_{\rm c}$ lead to a ground state which is a superposition of transmon and resonator states with a non-zero number of excitations. At zero temperature, the perturbative formula $N_{\rm r} \approx (g/2\omega_{\rm r})^2$ approximates the average number of photons very well, while in Fig. \[fig:ultrastrong\] we observe slightly higher values due to the finite temperature.
As the drive increases, we observe that the photon blockade tends to be more effective for large $g$’s. Interestingly, the transition to a classical state also occurs more abruptly as the coupling gets stronger. We have checked that this coincides with many of the upper levels of the transmon being rapidly populated. Due to this effect, the truncation to seven states (which is the maximum that our code can handle in a reasonable amount of time) becomes less reliable and artefacts such as the sharp resonances at some drive amplitudes start to appear. ![Steady-state photon number $N_{\rm r}$ as a function of drive amplitude for different coupling strengths. The simulations are realized using the Floquet–Born–Markov approach with the seven-state truncation for the transmon. The drive detuning is $\delta_{\rm d}/\omega_{\rm r} = -0.02$ and also the other parameters are the same as in Table \[tab:params1\].[]{data-label="fig:ultrastrong"}](fig5){width="1.0\linewidth"} ![image](fig6){width="0.9\linewidth"} Dependence on the gate charge ----------------------------- ![Onset of nonlinearity as a function of the gate charge. (a) The photon number $N_{\rm r}$ as a function of the drive amplitude. We compare the numerical data for $n_{\rm g} = 0$ and $n_{\rm g} = 0.5$ obtained with seven transmon states. (b) Corresponding occupations $P_{k}$ in the transmon eigenstates. The drive detuning is $\delta_{\rm d}/\omega_{\rm r} = -0.02$ and also the other parameters are the same as in Table \[tab:params1\].[]{data-label="fig:Ng"}](fig7){width="\linewidth"} If the transmon is only weakly nonlinear, i.e. $\eta \gg 1$, its lowest bound eigenstates are insensitive to the gate charge, see Appendix \[app:eigenvalue\]. As a consequence, one expects that the value of the gate charge should not affect the photon-number response to a weak drive. However, as the amplitude of the drive is increased, the higher excited states of the transmon become occupied, as discussed in the context of Fig.
\[fig:Occupation7\]. In particular, the transition region between the quantum and classical responses should depend on the gate-charge dispersion of the transmon states. We demonstrate this in Fig. \[fig:Ng\], where we show the simulation data for the gate-charge values $n_{\rm g}=0$ and $n_{\rm g}=0.5$. Clearly, in the weak driving regime, the responses for the two gate-charge values are nearly equal. The deviation in the photon number is of the order of $10^{-3}$, which is explained by our rather modest value of $\eta=30$. The deviations between the photon numbers of the two gate-charge values are notable for $A/\kappa = 10\ldots 20$. In this regime, the transmon escapes the subspace spanned by the two lowest eigenstates and, thus, the solutions obtained with different gate-charge numbers are expected to differ. At very high amplitudes, the free-rotor states with $k\geq 6$ also begin to contribute to the dynamics. These states have a considerable gate-charge dispersion, but the superconducting phase is delocalized. Accordingly, the gate-charge dependence is smeared by the free rotations of the phase degree of freedom. We also note that the photon number response displays two sharp peaks for $n_{\rm g}=0.5$ at $A/\kappa \approx 13$ and $A/\kappa \approx 25$. The locations of the peaks are very sensitive to the value of the gate charge, i.e. to the energy level structure of the transmon. Similar abrupt changes in the transmon occupation were also observed in recent experiments in Ref. [@lescanne2018]. They could be related to quantum chaotic motion of the system recently discussed in Ref. [@mourik2018]. In this parameter regime, also the Jaynes–Cummings model displays bistability [@Vukics2018]. Comparison between different master equations --------------------------------------------- ![Comparison between the Floquet–Born–Markov (FBM) and Lindblad models for dissipation in the two-level approximation for the transmon.
The drive detuning is $\delta_{\rm d}/\omega_{\rm r}= -0.02$ and also the other parameters are the same as in Table \[tab:params1\].[]{data-label="fig:comparison"}](fig8){width="1.0\linewidth"} We have also compared our numerical Floquet–Born–Markov method against the Lindblad master equation which was presented in Sec. \[sec:Lindblad\] and has been conventionally used in the studies of similar strongly driven systems with weak dissipation. We note that in the case of strong coupling to the bath, a possible treatment is the path-integral method developed by Feynman and Vernon, which has already been applied to describe the dynamics of the Rabi model [@Henriet2014]. We recall that in the Lindblad formalism, the environment induces transitions between the non-driven states of the system, whereas in the Floquet–Born–Markov approach the dissipation couples to the drive-dressed states of the system. Thus, one expects deviations from the Floquet–Born–Markov results in the limit of strong driving. In Fig. \[fig:comparison\] we show a comparison between the two models in the two-state truncation approximation for the transmon. We see that the largest differences between the models appear when the transition from the quantum to classical response starts to emerge, see Fig. \[fig:Occupation7\]. Based on our numerical calculations, the differences are the largest at resonance and both models give equivalent results whenever one of the three frequencies, $\omega_{\rm r}$, $\omega_{\rm q}$, or $\omega_{\rm d}$, is detuned from the other two. We emphasize, however, that the Floquet–Born–Markov master equation is computationally two orders of magnitude more efficient than the corresponding Lindblad equation. Moreover, in the case of Fig. \[fig:DriveDetuning\](d) the computing time of the Floquet–Born–Markov equation was roughly a week with an ordinary CPU. 
In such cases, the solution of the Lindblad equation becomes impractical and one should use a parallelized implementation of the Floquet–Born–Markov master equation. Conclusions {#sec:V} =========== We have given a comprehensive treatment of the driven-dissipative quantum-to-classical phase transition for a Josephson pendulum coupled to a resonator, going beyond the truncated Rabi form of the Hamiltonian through the full inclusion of the higher energy levels of the pendulum. We modelled the open quantum system with the Floquet–Born–Markov method, in which the dissipative transitions occur between the drive-dressed states of the system. We also compared our results against those given by the conventional Lindblad formalism where the dissipation couples to the eigenstates of the non-driven system. We found that the quantitative description of the multi-photon blockade phenomenon and of the nonlinearities associated with the phase transition in this system requires a systematic inclusion of the higher energy levels of the transmon and a proper model for dissipation. We also studied approximate classical models for this system, and showed that the discrete energy structure of the quantum system suppresses the classical chaotic motion of the quantum pendulum. Indeed, while the classical solution predicts a sudden change between the low and high amplitude solutions, the quantum solution displays a continuous transition from the normal-mode oscillations to the freely rotating pendulum regime. Finally, we analyzed in detail the two models of dissipation and demonstrated that they produce slightly different predictions for the onset of the photon blockade. Acknowledgments =============== We thank D. Angelakis, S. Laine, and M. Silveri for useful discussions. We would like to acknowledge financial support from the Academy of Finland under its Centre of Excellence program (projects 312296, 312057, 312298, 312300) and the Finnish Cultural Foundation. 
This work uses the facilities of the Low Temperature Laboratory (part of OtaNano) at Aalto University. The eigenvalue problem for the Josephson pendulum {#app:eigenvalue} ================================================= The energy eigenstates of the pendulum can be solved from the Mathieu equation [@baker2005; @cottet2002; @abramovitz1972] which produces a spectrum with bound and free-particle parts. The high-energy unbound states are given by the doubly-degenerate quantum rotor states, which are also the eigenstates of the angular momentum operator. In analogy with the elimination of the vector potential by a gauge transformation, as is usually done for a particle in a magnetic field, one can remove the dependence on $n_{\rm g}$ from the transmon Hamiltonian in Eq. (\[eq:transmonHam\]), i.e. $$\hat H_{\rm t} = 4E_{\rm C}(\hat n-n_{\rm g})^2 - E_{\rm J}\cos \hat \varphi,$$ with the gauge transformation $\hat U \hat H_{\rm t} \hat U^{\dag}$, where $$\hat U = e^{-i n_{\rm g}\hat \varphi}.$$ As a consequence, the eigenstates $|k\rangle$ of the Hamiltonian are modified into $$|k\rangle \ \rightarrow \ e^{-in_{\rm g} \hat \varphi}|k\rangle.$$ The transformed Hamiltonian can be written as $$\hat H_{\rm t} = 4E_{\rm C}\hat n^2-E_{\rm J}\cos\hat \varphi.$$ Here, we represent the (Schrödinger) eigenvalue equation for the transformed Hamiltonian in the eigenbasis of the operator $\hat \varphi$. As a result, the energy levels of the transmon can be obtained from the Mathieu equation [@baker2005; @cottet2002; @abramovitz1972] $$\label{eq:Mathieu} \frac{\partial^2}{\partial z^2}\psi_k(z) - 2 q \cos(2z) \psi_k(z) = -a\psi_k(z),$$ where $z = \varphi/2$, $q=-\eta/2 = -E_{\rm J}/(2 E_{\rm C})$, and $a=E_k/E_{\rm C}$. We have also denoted the transformed eigenstate $|k\rangle$ in the $\varphi$ representation with $\psi_k(\varphi) = \langle \varphi | e^{-in_{\rm g} \hat \varphi}|k\rangle$. 
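Equivalently, the spectrum can be computed by truncating the transmon Hamiltonian in the charge basis, where $\cos\hat\varphi$ has matrix element $1/2$ between neighbouring charge states. The following is a minimal numerical sketch (the energy units, $\eta=30$, and the charge cutoff are illustrative); for the bound states it reproduces the transmon-limit transition energy $\hbar\omega_{\rm q}\approx\sqrt{8E_{\rm J}E_{\rm C}}-E_{\rm C}$ and is essentially independent of $n_{\rm g}$, as discussed above.

```python
import numpy as np

def transmon_spectrum(EJ, EC, ng=0.0, ncut=15):
    """Eigenenergies of H = 4*EC*(n - ng)^2 - EJ*cos(phi) in the
    charge basis |n>, n = -ncut..ncut; cos(phi) couples neighbouring
    charge states with matrix element 1/2."""
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * EC * (n - ng) ** 2).astype(float)
    H += np.diag(-0.5 * EJ * np.ones(2 * ncut), 1)
    H += np.diag(-0.5 * EJ * np.ones(2 * ncut), -1)
    return np.linalg.eigvalsh(H)

# illustrative parameters: eta = EJ/EC = 30, energies in units of EC
E = transmon_spectrum(EJ=30.0, EC=1.0)
wq = E[1] - E[0]   # lowest transition energy
```

For these parameters the charge cutoff of 15 states is already well converged; only a handful of charge states contribute to the lowest bound levels.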
Note that $\Psi_k(\varphi) = e^{in_{\rm g}\varphi} \psi_k(\varphi)$ is the eigenfunction of the original Hamiltonian in Eq. (\[eq:transmonHam\]). Due to the periodic boundary conditions, one has that $\Psi_k(\varphi+2\pi) = \Psi_k(\varphi)$. The solutions to Eq. (\[eq:Mathieu\]) are generally Mathieu functions which have a power series representation, but cannot be written in terms of elementary functions [@abramovitz1972]. However, the corresponding energy-level structure can be studied analytically in the high and low-energy limits. In Fig. \[fig:Mathieuevals\], we present the eigenenergies $E_k$ obtained as solutions of the Mathieu equation (\[eq:Mathieu\]). The eigenstates that lie within the wells formed by the cosine potential are localized in the coordinate $\varphi$, whereas the states far above are (nearly) evenly distributed, see Fig. \[fig:Mathieuevals\](a). As a consequence, the high-energy states are localized in the charge basis. The data shows that if plotted as a function of the gate charge, the states inside the cosine potential are nearly flat, see Fig. \[fig:Mathieuevals\](b). This implies that such levels are immune to gate charge fluctuations, which results in a high coherence of the device. Outside the well, the energy dispersion with respect to the gate charge becomes significant, and leads to the formation of a band structure typical for periodic potentials [@marder2000]. ![Eigenvalues and eigenstates of the transmon obtained with the Mathieu equation (\[eq:Mathieu\]) for $\eta =30$. (a) Eigenenergies as a function of the superconducting phase difference $\varphi$. The cosine potential is indicated with the blue line. Inside the well the eigenenergies are discrete and denoted with dashed black lines. On top of each line, we show the absolute square of the corresponding Mathieu eigenfunction. The energy bands from (b) are indicated with gray. (b) Eigenenergies as a function of the gate charge. 
We compare the numerically exact eigenenergies $E_k$ (solid black) with those of the perturbative Duffing oscillator (dashed red) and the free rotor (dashed blue). The charge dispersion in the (nearly) free rotor states leads to energy bands, which are denoted with gray. We show the Duffing and free rotor solutions only inside and outside the potential, respectively. []{data-label="fig:Mathieuevals"}](fig9){width="\linewidth"} High-energy limit: Free rotor ----------------------------- If the energy in the system is very high due to, e.g., strong driving, the Josephson energy can be neglected and the transmon behaves as a free particle rotating in a planar circular orbit, which can be described solely by its angular momentum $\hat L_{\rm z} = \hat n$. Since the angular momentum is a good quantum number, the eigenenergies and the corresponding eigenfunctions are given by $$\label{eq:rotEn} E_k = 4E_{\rm C}(k-n_{\rm g})^2, \ \ \psi_k(\varphi) = e^{i (k-n_{\rm g}) \varphi},$$ where $k=0, \pm 1, \pm 2, \ldots$. We note that if the gate charge is zero ($n_{\rm g}=0$), the nonzero free rotor energies are doubly degenerate. The level spacing is not constant but increases with increasing $k$ as [@baker2005] $$\Delta E_k = E_{k+1}-E_k = 4E_{\rm C}[ 2(k-n_{\rm g})+1].$$ In Fig. \[fig:Mathieuevals\], we show the eigenenergies calculated with Eq. (\[eq:rotEn\]). Clearly, with large energies outside the potential, the energy spectrum of the particle starts to resemble that of the free rotor. Also, the eigenfunctions of the free rotor are plane waves in the $\varphi$ eigenbasis, yielding a flat probability density as a function of $\varphi$. On the other hand, in the momentum eigenbasis, the free rotor states are fully localized. Low-energy limit: Duffing oscillator ------------------------------------ If the pendulum energy is very low, the superconducting phase of the transmon is localized near $\varphi\approx 0$. 
Thus, the cosine potential can be approximated with the first terms of its Taylor expansion. Consequently, the transmon Hamiltonian reduces to that of a harmonic oscillator with an additional quartic potential $$\hat H_{\rm t} \approx 4E_{\rm C}\hat n^2 + E_{\rm J}\left[-1 + \frac12\hat\varphi^2 - \frac{1}{12}\hat \varphi^4\right].$$ This is the Hamiltonian operator of the quantum Duffing model. The Duffing model has received considerable attention in the recent literature [@Peano2006; @Serban2007; @Verso2010; @Vierheilig2010; @divincenzo2012; @Everitt2005b] especially in the context of superconducting transmon realizations. It is worthwhile to notice that in this regime the potential is no longer periodic and, thus, we can neglect the periodic boundary condition of the wavefunction. As a consequence, the eigenenergies and eigenfunctions are not dependent on the offset charge $n_{\rm g}$. If $\eta = E_{J}/E_{C}\gg 1$, the quartic term is small and one can solve the eigenvalues and the corresponding eigenvectors perturbatively to first order in the quartic term. This regime in which the Josephson energy dominates over the charging energy is referred to as the transmon limit. One, therefore, obtains the eigenenergies $$\label{eq:DuffEn} \frac{E_k}{4E_{\rm C}} = -\frac{\eta}{4}+\sqrt{\eta/2}\left(k+\frac12\right) - \frac{1}{48}(6k^2+6k+3),$$ where $k=0,1,2,\ldots$. Especially, the transition energy between the two lowest Duffing oscillator states can be written as $$\label{eq:DOqubitEn} \hbar \omega_{\rm q} = E_1-E_0 = \sqrt{8E_{\rm J}E_{\rm C}}-E_{\rm C}.$$ This becomes accurate as $\eta\rightarrow \infty$. The anharmonicity of a nonlinear oscillator is typically characterized in terms of the absolute and relative anharmonicity, which are defined, respectively, as $$\mu = E_{12} - E_{01}\approx - E_{\rm C}, \ \ \mu_{\rm r} = \mu/E_{01}\approx -(8\eta)^{-1/2},$$ where $E_{ij} = E_j-E_i$ and the latter approximations are valid in the transmon limit. 
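The perturbative spectrum in Eq. (\[eq:DuffEn\]) can be checked with a few lines of code; the sketch below (with illustrative parameter values) evaluates it and verifies that it reproduces $\hbar\omega_{\rm q}=\sqrt{8E_{\rm J}E_{\rm C}}-E_{\rm C}$ and an anharmonicity of exactly $-E_{\rm C}$.

```python
import numpy as np

def duffing_energy(k, EC, EJ):
    """Perturbative transmon eigenenergies, Eq. (DuffEn), valid in
    the transmon limit eta = EJ/EC >> 1."""
    eta = EJ / EC
    return 4.0 * EC * (-eta / 4.0 + np.sqrt(eta / 2.0) * (k + 0.5)
                       - (6 * k**2 + 6 * k + 3) / 48.0)

EC, EJ = 1.0, 30.0  # illustrative values, eta = 30
E0, E1, E2 = (duffing_energy(k, EC, EJ) for k in range(3))
wq = E1 - E0                 # qubit transition energy
mu = (E2 - E1) - (E1 - E0)   # absolute anharmonicity
```

Within this first-order expression the relations hold identically, not just asymptotically, because the quartic correction is itself quadratic in $k$.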
We emphasize that in the low-excitation limit the transmon oscillates coherently with frequency $\omega_{\rm q} \approx \omega_{\rm p} - E_{\rm C}/\hbar$. Thus, in the quantum pendulum the nonlinearity is present even in the zero-point energy, whereas the small-amplitude oscillations in the classical pendulum occur at the angular (plasma) frequency $\omega_{\rm p} = \sqrt{8E_{\rm J}E_{\rm C}}/\hbar$. In Fig. \[fig:Mathieuevals\], we compare the eigenenergies (\[eq:DuffEn\]) of the Duffing model obtained with the perturbation theory against the exact solutions of the Mathieu equation (\[eq:Mathieu\]). We see that in the low-energy subspace the perturbed-Duffing solution reproduces very well the full Mathieu results. For the higher excited states, the momentum dispersion starts to play a dominant role and deviations arise as expected. This starts to occur close to the boundary of the potential. One can estimate the number $K_{\rm b}$ of bound states by requiring $E_{K_{\rm b}-1}\approx E_{\rm J}$ in Eq. (\[eq:DuffEn\]). This implies that the number of states within the potential scales with $\eta\gg 1$ as $$\label{eq:Nbound} K_{\rm b} \propto \sqrt{\eta}.$$ For the device with parameters listed in Table \[tab:params1\], one has that $\sqrt{\eta} \approx 5$, and the above estimate gives $K_{\rm b} \approx 5$. This coincides with the number of bound states extracted from the numerically exact spectrum of the eigenenergies depicted in Fig. \[fig:Mathieuevals\]. Driven and damped classical system {#app:classeom} ================================== The classical behaviour of the uncoupled pendulum has been extensively studied in the literature [@Dykman1990; @Dykman2012; @baker2005]. If the driving force is not too strong, one can approximate the pendulum with a Duffing oscillator with a quartic non-linearity, as shown in the previous appendix. The main feature of such an oscillator is the bistability of its dynamics. 
Namely, in a certain range of drive amplitudes and frequency detunings between the driving signal and the oscillator, two stable solutions with low and high amplitudes of the oscillations are possible. If one gradually increases the driving, the pendulum suddenly jumps from the low to the high amplitude solution at the critical driving strength, at which the low amplitude solution vanishes. In the bistable region the Duffing oscillator may switch between the two solutions if one includes noise into the model [@Dykman1990]. This complicated dynamics has been observed in a classical Josephson junction [@Siddiqi2004; @Siddiqi2005]. However, in contrast to the works mentioned above, in our setup the pendulum is coupled to a resonator and driven only indirectly. Here we develop the classical theory of the coupled system and show that the basic physics of bistability is present as well. We first linearize the equations of motion and then introduce systematically the corrections due to the nonlinearity. The system Hamiltonian $\hat H_{\rm S} = \hat H_0 + \hat H_{\rm d}$, defined by Eq. (\[eq:H0\]) and Eq. (\[eq:Hd\]), can be written in terms of the circuit variables as $$\begin{aligned} \label{eq:classHam} \hat H_{\rm S} &=& \frac{\hat q^2}{2C_{\rm r}} + \frac{\hat \phi^2}{2L_{\rm r}} + 4E_{\rm C}\hat n^2-E_{\rm J}\cos \hat \varphi + \tilde g \hat n \hat q \nonumber\\ && + \tilde A \cos(\omega_{\rm d}t) \hat q.\end{aligned}$$ Above, we have denoted the capacitance and inductance of the $LC$ resonator with $C_{\rm r}$ and $L_{\rm r}$, respectively, the effective coupling with $\tilde g=\hbar g/q_{\rm zp}$, the effective drive with $\tilde A=\hbar A/q_{\rm zp}$, and the zero-point fluctuations with $\phi_{\rm zp} = \sqrt{\hbar/(2C_{\rm r}\omega_{\rm r})}$ and $q_{\rm zp}=\sqrt{C_{\rm r}\hbar \omega_{\rm r}/2}$. Also, the resonance frequency of the bare resonator is defined as $\omega_{\rm r}=1/\sqrt{L_{\rm r}C_{\rm r}}$. 
The corresponding equations of motion for the expectation values of the dimensionless operators $\hat\phi_{\rm r}=\hat \phi/\phi_{\rm zp}$ and $\hat q_{\rm r}=\hat q/q_{\rm zp}$ can be written as $$\begin{aligned} \dot{\phi}_{\rm r} &=& \omega_{\rm r} q_{\rm r} + 2g n + 2 A\cos(\omega_{\rm d}t)-\frac{\kappa}{2}\phi_{\rm r},\label{eq:classeom1}\\ \dot{q}_{\rm r} &=& -\omega_{\rm r} \phi_{\rm r}-\frac{\kappa}{2}q_{\rm r},\label{eq:classeom2}\\ \dot{\varphi} &=& \frac{8E_{\rm C}}{\hbar}n+gq_{\rm r}-\frac{\gamma}{2}\varphi,\\ \dot{n} &=& -\frac{E_{\rm J}}{\hbar}\sin\varphi-\frac{\gamma}{2}n,\label{eq:classeom4}\end{aligned}$$ where we have denoted the expectation value of operator $\hat x$ as $\langle \hat x\rangle\equiv x$, applied the commutation relations $[\hat \phi_{\rm r},\hat q_{\rm r}] = 2i$ and $[\hat\varphi,\hat{n}]=i$, and defined the phenomenological damping constants $\kappa$ and $\gamma = \gamma_0\omega_{\rm q}$ for the oscillator and the pendulum, respectively. The exact solution to these equations of motion is unavoidably numerical and is given in Figs. \[fig:classsteps\] and \[fig:Occupation7\]. The resonator occupation is calculated as $N_{\rm r} = \frac{1}{4}(q_{\rm r}^2 +\phi_{\rm r}^2)$. Solution of the linearized equation {#app:lineom} ----------------------------------- We study Eqs. (\[eq:classeom1\])-(\[eq:classeom4\]) in the limit of weak driving. In this limit, one can linearize the equations of motion by writing $\sin\varphi \approx \varphi$. 
In addition, by defining $$\begin{aligned} \alpha &=& \frac12(q_{\rm r}-i\phi_{\rm r}),\\ \beta &=& \frac{1}{\sqrt{2}}\left(\sqrt[4]{\frac{\eta}{8}} \varphi + i\sqrt[4]{\frac{8}{\eta}} n\right),\end{aligned}$$ we obtain $$\begin{split} \dot{\alpha}=& -i\omega_{\rm r}\alpha + g_{\rm eff}(\beta^*-\beta)-\frac{iA}{2}\left(e^{i\omega_{\rm d}t}+e^{-i\omega_{\rm d}t}\right) - \frac{\kappa}{2}\alpha,\\ \dot{\beta}=& - i\omega_{\rm p}\beta + g_{\rm eff}(\alpha+\alpha^*) - \frac{\gamma}{2}\beta, \end{split}$$ where we have introduced an effective coupling as $g_{\rm eff}=g\sqrt[4]{\eta/32}$. The above equations describe two driven and dissipative coupled oscillators. We assume that both oscillators are excited at the drive frequency, i.e. $\alpha = \alpha_0 \exp(-i\omega_{\rm d}t)$ and $\beta = \beta_0 \exp(-i\omega_{\rm d}t)$. By making a rotating-wave approximation for the coupling and the drive, we obtain the resonator occupation $N_{\rm lin} = |\alpha_0|^2$ in the steady state $$N_{\rm lin} = \frac{A^2}{4}\frac{1}{\left(\delta_{\rm d}-g_{\rm eff}^2\frac{\delta_{\rm p}}{\delta_{\rm p}^2+\gamma^2/4}\right)^2+\left(\frac{\kappa}{2}+g_{\rm eff}^2\frac{\gamma/2}{\delta_{\rm p}^2+\gamma^2/4}\right)^2},$$ where $\delta_{\rm d}=\omega_{\rm d}-\omega_{\rm r}$ and $\delta_{\rm p} = \omega_{\rm d}-\omega_{\rm p}$, with $\omega_{\rm p}=\sqrt{8E_{\rm J}E_{\rm C}}/\hbar$. This appeared already in Eq. (\[eq:linearNr\]). Correction due to the pendulum nonlinearity {#app:nonlin} ------------------------------------------- Here, we study the nonlinear effects neglected in the above linearized calculation. We eliminate the variables $\phi_{\rm r}$ and $n$ from Eqs. 
(\[eq:classeom1\])-(\[eq:classeom4\]) and obtain $$\begin{aligned} \ddot{q}_{\rm r} + \kappa \dot{q}_{\rm r} + \tilde{\omega}_{\rm r}^2q_{\rm r}+g_1 \dot{\varphi} + 2A\omega_{\rm r}\cos(\omega_{\rm d}t) &=& 0,\label{eq:classqr}\\ \ddot{\varphi} + \gamma\dot{\varphi}+\omega_{\rm p}^2\sin\varphi - g\dot{q}_{\rm r}&=& 0,\label{eq:classvarphi}\end{aligned}$$ where we have denoted $g_1=g \hbar \omega_{\rm r}/(4E_{\rm C})$, and defined the renormalized resonator frequency as $\tilde{\omega}_{\rm r}^2 = \omega_{\rm r}^2 - g^2\hbar \omega_{\rm r}/(4E_{\rm C})$. In Eq. (\[eq:classqr\]), we have included only the term that is proportional to $g^2$ as it provides the major contribution to the frequency renormalization, and neglected the other second order terms in $\kappa$, $\gamma$ and $g$ that lead to similar but considerably smaller effects. We write the solutions formally in terms of a Fourier transform as $$\begin{aligned} q_{\rm r}(t) &=& \int \frac{d\Omega}{2\pi}q_{\rm r}[\Omega]e^{-i\Omega t},\\ \varphi(t) &=& \int \frac{d\Omega}{2\pi}\varphi[\Omega]e^{-i\Omega t},\end{aligned}$$ where $q_{\rm r}[\Omega]$ and $\varphi[\Omega]$ are the (complex valued) Fourier coefficients of $q_{\rm r}(t)$ and $\varphi(t)$, respectively. 
As a consequence, one can write the equations of motion as $$\begin{aligned} \int \frac{d\Omega}{2\pi}\left\{\left(\tilde{\omega}_{\rm r}^2 - \Omega^2 - i\kappa\Omega\right)q_{\rm r}[\Omega] - ig_1\Omega \varphi[\Omega]+2\pi A\omega_{\rm r} \left[\delta(\Omega-\omega_{\rm d})+\delta(\Omega+\omega_{\rm d})\right]\right\}e^{-i\Omega t}&=&0,\\ \int \frac{d\Omega}{2\pi}\left\{\left(- \Omega^2 - i\gamma\Omega\right)\varphi[\Omega] + ig\Omega q_{\rm r}[\Omega]\right\}e^{-i\Omega t} + \omega_{\rm p}^2\sin\varphi &=&0.\label{eq:classvphi}\end{aligned}$$ We solve $q_{\rm r}[\Omega]$ from the first equation and obtain $$\begin{aligned} \label{eq:classqr2} q_{\rm r}[\Omega] &=& \frac{ig_1\Omega\varphi[\Omega]-2\pi A\omega_{\rm r}[\delta(\Omega-\omega_{\rm d})+\delta(\Omega+\omega_{\rm d})]}{\tilde{\omega}_{\rm r}^2-\Omega^2-i\kappa\Omega}.\end{aligned}$$ By replacing this result into Eq. (\[eq:classvphi\]), we obtain $$\int \frac{d\Omega}{2\pi}\left\{\left(- \Omega^2 - i\gamma\Omega- \frac{gg_1\Omega^2}{\tilde{\omega}_{\rm r}^2-\Omega^2-i\kappa\Omega}\right)\varphi[\Omega] \right\}e^{-i\Omega t} + \omega_{\rm p}^2\sin\varphi =\frac{2gA\omega_{\rm r}\omega_{\rm d}}{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}\cos(\omega_{\rm d}t),$$ where we have neglected a constant phase factor. For weak drive amplitudes, $\varphi[\omega_{\rm d}]$ is the only non-zero Fourier component. Thus, one can evaluate the Fourier transform in the above equation at the drive frequency. 
Consequently, the Fourier component of the third term in the equation can be evaluated as $$\begin{split} \frac{gg_1\Omega^2}{\tilde{\omega}_{\rm r}^2-\Omega^2-i\kappa\Omega} &\approx \frac{gg_1\omega_{\rm d}^2}{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}\left[(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)+i\kappa\omega_{\rm d}\right]\\ &\approx \frac{gg_1\omega_{\rm d}^2}{\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2} + i\frac{gg_1\kappa\omega_{\rm d}^3}{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}, \end{split}$$ where in the second term we have assumed that the dissipation is weak, i.e. $\kappa\ll\sqrt{\tilde\omega_{\rm r}^2-\omega_{\rm d}^2}$, and have taken into account the dominant terms for the real and imaginary parts. As a result, we obtain $$\ddot{\varphi}+\tilde{\gamma}\dot{\varphi}+\omega_{\rm p}^2\sin\varphi + (\tilde{\omega}_{\rm p}^2-\omega_{\rm p}^2)\varphi = B\cos(\omega_{\rm d} t). \label{eqn:appendix_nonlin}$$ Here, we have defined the renormalized linear oscillation frequency $\tilde{\omega}_{\rm p}$, dissipation rate $\tilde{\gamma}$, and drive amplitude $B$ as $$\begin{aligned} \tilde{\omega}_{\rm p}^2 &=& \omega_{\rm p}^2-\frac{g g_1 \omega_{\rm d}^2}{\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2},\\ \tilde{\gamma} &=& \gamma+\frac{gg_1\kappa\omega_{\rm d}^2}{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2},\\ B &=& \frac{2gA\omega_{\rm r}\omega_{\rm d}}{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}},\label{eq:effamp}\end{aligned}$$ where the first two equations are valid if $\kappa \ll \sqrt{\tilde\omega_{\rm r}^2-\omega_{\rm d}^2}$. Thus, we have shown that in the limit of low dissipation, the classical resonator-transmon system can be modeled as a driven and damped pendulum. In the case of weak driving, we expand the sinusoidal term up to the third order in $\varphi$. 
We obtain the equation of motion for the driven and damped Duffing oscillator: $$\label{eq:duff} \ddot{\varphi}+\tilde{\gamma}\dot{\varphi}+\tilde{\omega}_{\rm p}^2\left(\varphi-\frac{\omega_{\rm p}^2}{6\tilde{\omega}_{\rm p}^2}\varphi^3\right) = B\cos(\omega_{\rm d} t).$$ This equation can be solved approximately by applying a trial solution $\varphi(t) = \varphi_1 \cos(\omega_{\rm d}t)$ into Eq. (\[eq:duff\]). By applying harmonic balance and neglecting super-harmonic terms, we obtain a relation for the amplitude $\varphi_1$ in terms of the drive amplitude $B$. By squaring this equation and, again, neglecting the super-harmonic terms, we obtain $$\left[\left(\tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2-\frac{\omega_{\rm p}^2}{8}\varphi_1^2\right)^2+\tilde\gamma^2\omega_{\rm d}^2\right]\varphi_1^2 = B^2.$$ The above equation is cubic in $\varphi_1^2$. It has one real solution if the discriminant $D$ of the equation is negative, i.e. $D<0$. If $D>0$, the equation has three real solutions, two stable and one unstable. The stable solutions can appear only if $\omega_{\rm d}<\tilde{\omega}_{\rm p}$, which is typical for Duffing oscillators with a soft spring (negative nonlinearity). The bistability can, thus, occur for amplitudes $B_{\rm min}<B<B_{\rm crit}$ where the minimal and critical amplitudes $B_{\rm min}$ and $B_{\rm crit}$, respectively, determine the region of bistability and are obtained from the equation $D=0$. 
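The cubic equation for $\varphi_1^2$ is easy to study numerically: counting its real positive roots distinguishes the bistable window, where three solutions exist, from the single-solution regime. A minimal sketch (all parameter values are illustrative, not those of the device):

```python
import numpy as np

def amplitude_roots(B, w_p, w_pt, w_d, gamma_t):
    """Real positive roots x = phi_1^2 of the harmonic-balance cubic
    [(w_pt^2 - w_d^2 - (w_p^2/8) x)^2 + gamma_t^2 w_d^2] x = B^2."""
    a = w_pt**2 - w_d**2
    c = w_p**2 / 8.0
    coeffs = [c**2, -2.0 * a * c, a**2 + gamma_t**2 * w_d**2, -B**2]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-9 and r.real > 0)

# soft spring driven below resonance (illustrative values)
pars = dict(w_p=1.0, w_pt=1.0, w_d=0.9, gamma_t=1e-3)
n_inside = len(amplitude_roots(0.05, **pars))  # inside the bistable window
n_above = len(amplitude_roots(0.2, **pars))    # above B_crit
```

For these parameters the window $B_{\rm min}<B<B_{\rm crit}$ spans roughly three orders of magnitude in $B$ because of the small damping $\tilde\gamma$.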
By expanding the resulting $B_{\rm min}$ and $B_{\rm crit}$ in terms of $\tilde\gamma$ and by taking into account the dominant terms, we find that $$\begin{aligned} B_{\rm min} &=& \tilde\gamma\frac{\omega_{\rm d}}{\omega_{\rm p}}\sqrt{8(\tilde\omega_{\rm p}^2-\omega_{\rm d}^2)} = \tilde\gamma\frac{\sqrt{27}\omega_{\rm d}}{2(\tilde\omega_{\rm p}^2-\omega_{\rm d}^2)}B_{\rm crit},\label{eq:bistabmin}\\ B_{\rm crit} &=& \sqrt{\frac{32}{27}}\frac{(\tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2)^{3/2}}{\omega_{\rm p}}\approx \frac{16}{3\sqrt{3}}\sqrt{\omega_{\rm p}\delta_{\rm p}^3},\label{eq:anjump}\end{aligned}$$ where the last equality holds if $\delta_{\rm p} = \tilde{\omega}_{\rm p}-\omega_{\rm d}\ll \omega_{\rm p}$. The iterative numerical solution of Eq. (\[eq:duff\]) indicates that the initial state affects the switching location between the two stable solutions. We note that this approximation neglects all higher harmonics and, thus, cannot reproduce any traces towards chaotic motion inherent to the strongly driven pendulum. Finally, we are able to write the minimal and critical drive amplitudes of the coupled resonator-transmon system using Eqs. (\[eq:effamp\]), (\[eq:bistabmin\]), and (\[eq:anjump\]). We obtain \[see Eq. (\[eq:duffan\])\] $$\begin{aligned} A_{\rm min} &=& \tilde\gamma\sqrt{2(\tilde\omega_{\rm p}^2-\omega_{\rm d}^2)}\frac{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}{g\omega_{\rm r}\omega_{\rm p}},\\ A_{\rm crit} &=& \sqrt{\frac{8}{27}} \left(\tilde{\omega}_{\rm p}^2-\omega_{\rm d}^2\right)^{3/2}\frac{\sqrt{(\tilde{\omega}_{\rm r}^2-\omega_{\rm d}^2)^2+\kappa^2\omega_{\rm d}^2}}{g\omega_{\rm r}\omega_{\rm d}\omega_{\rm p}}.\end{aligned}$$ Note that these equations are valid for $\kappa \ll \sqrt{\tilde\omega_{\rm r}^2-\omega_{\rm d}^2}$.
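The small-detuning form of Eq. (\[eq:anjump\]) can be verified directly; for $\delta_{\rm p}/\omega_{\rm p}=0.01$ (an illustrative value, taking $\tilde\omega_{\rm p}=\omega_{\rm p}$) the exact and approximate expressions agree to about one percent.

```python
import numpy as np

w_p = 1.0                 # plasma frequency (units arbitrary); w_pt = w_p
delta_p = 0.01 * w_p      # drive detuning, delta_p << w_p
w_d = w_p - delta_p

# exact and small-detuning expressions for the critical amplitude
B_exact = np.sqrt(32.0 / 27.0) * (w_p**2 - w_d**2) ** 1.5 / w_p
B_approx = 16.0 / (3.0 * np.sqrt(3.0)) * np.sqrt(w_p * delta_p**3)
rel_err = abs(B_approx - B_exact) / B_exact
```

The agreement follows from $\tilde\omega_{\rm p}^2-\omega_{\rm d}^2\approx 2\omega_{\rm p}\delta_{\rm p}$, which gives $\sqrt{32/27}\,(2\omega_{\rm p}\delta_{\rm p})^{3/2}/\omega_{\rm p}=\frac{16}{3\sqrt{3}}\sqrt{\omega_{\rm p}\delta_{\rm p}^3}$.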
--- abstract: 'High-mass stars form within star clusters from dense, molecular regions, but is the process of cluster formation slow and hydrostatic or quick and dynamic? We link the physical properties of high-mass star-forming regions with their evolutionary stage in a systematic way, using Herschel and Spitzer data. In order to produce a robust estimate of the relative lifetimes of these regions, we compare the fraction of dense, molecular regions above a column density associated with high-mass star formation, N(H$_2$) $>$ 0.4-2.5 $\times$ 10$^{22}$ cm$^{-2}$, in the ‘starless’ (no signature of stars $\gtrsim$ 10 M$_{\odot}$ forming) and star-forming phases in a 2$^{\circ}\times$2$^{\circ}$ region of the Galactic Plane centered at $\ell$=30$^{\circ}$. Of regions capable of forming high-mass stars on $\sim$1 pc scales, the starless (or embedded beyond detection) phase occupies about 60-70% of the dense molecular region lifetime and the star-forming phase occupies about 30-40%. These relative lifetimes are robust over a wide range of thresholds. We outline a method by which relative lifetimes can be anchored to absolute lifetimes from large-scale surveys of methanol masers and UCHII regions. A simplistic application of this method estimates the absolute lifetime of the starless phase to be 0.2-1.7 Myr (about 0.6-4.1 fiducial cloud free-fall times) and the star-forming phase to be 0.1-0.7 Myr (about 0.4-2.4 free-fall times), but these are highly uncertain. This work uniquely investigates the star-forming nature of high-column density gas pixel-by-pixel and our results demonstrate that the majority of high-column density gas is in a starless or embedded phase.' 
author: - 'Cara Battersby, John Bally, & Brian Svoboda' bibliography: - 'references1.bib' title: 'The Lifetimes of Phases in High-Mass Star-Forming Regions' --- 2hp[N$_{2}$H$^{+}$]{} 3CO[$^{13}$CO]{} 3[NH$_{3}$]{} Introduction ============ Whether star clusters and high-mass stars form as the result of slow, equilibrium collapse of clumps [e.g., @tan06] over several free-fall times or if they collapse quickly on the order of a free-fall time [e.g., @elm07; @har07], perhaps mediated by large scale accretion along filaments [@mye09], remains an open question. The stars that form in these regions may disrupt and re-distribute the molecular material from which they formed without dissociating it, allowing future generations of star formation in the cloud with overall long GMC lifetimes [20-40 Myr, e.g.; @kaw09]. The scenario of quick, dynamic star formation sustained over a long time by continued inflow of material is motivated by a variety of observations [discussed in detail in @elm07; @har07], and more recently by the lack of starless massive protoclusters [@gin12; @urq14; @cse14] observed through blind surveys of cold dust continuum emission in the Galaxy. Additionally, observations of infall of molecular material on large scales [@sch10; @per13] suggest that GMCs are dynamic and evolve quickly, but that material may be continually supplied into the region. To study the formation, early evolution, and lifetimes of high-mass star-forming regions, we investigate their earliest evolutionary phase in dense, molecular regions (DMRs). The gas in regions that form high-mass stars, DMRs, has high densities [10$^{4-7}$ cm$^{-3}$; @lad03] and cold temperatures [10-20 K; @rat10; @bat11] and is typically detected by submm observations where the cold, dust continuum emission peaks. Given the appropriate viewing angle, these regions can also be seen in silhouette as Infrared Dark Clouds (IRDCs) absorbing the diffuse, mid-IR, Galactic background light. 
@bat11 showed that by using a combination of data [including measurements of their temperatures and column densities from Hi-GAL; @mol10], we can sample DMRs and classify them as starless or star-forming in a more systematically robust way than just using one wavelength (e.g. an IRDC would not be ‘dark’ on the far side of the Galaxy, so a mid-IR-only selection would exclude those DMRs). Most previous studies of high-mass star-forming region lifetimes have focused on discrete ‘clumps’ of gas, usually identified in the dust continuum [e.g. @hey16; @cse14; @dun11a]. However, oftentimes these ‘clumps’ contain sub-regions of quiescence and active star formation and cannot simply be classified as either. To lump these regions together and assign the entire ‘clump’ as star-forming or quiescent can cause information to be lost on smaller scales within the clump. For example, the filamentary cloud highlighted in a black box centered at \[30.21, -0.18\] clearly contains an actively star-forming and a quiescent region, but is identified as a single clump in the Bolocam Galactic Plane Survey [@gin13; @ros10; @agu11], as 3 clumps in the first ATLASGAL catalog [@contreras13; @urq14b], and as 5 clumps in the second ATLASGAL catalog [@cse14]. Resolution and sensitivity are not the primary drivers for the number of clumps identified; rather, it is algorithmic differences, such as bolocat vs. clumpfind or gaussclumps. In this paper, we present an alternate approach that circumvents the issues surrounding clump identification, by retaining the available information in a pixel-by-pixel analysis of the maps. All together, the pixels give us a statistical overview of the different evolutionary stages. Pixels that satisfy the criteria for high-mass star formation, explicated in the paper, are referred to as dense molecular regions (DMRs). We compare the fractions of the statistical ensembles of starless and star-forming DMRs to estimate their relative lifetimes. 
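The bookkeeping behind this estimate is simple: in steady state the time spent in each phase is proportional to the fraction of DMR pixels observed in it, and an external clock (e.g. a maser or UCHII region lifetime) converts the ratio into absolute ages. The sketch below uses made-up pixel counts and an assumed anchor lifetime, purely for illustration:

```python
def phase_lifetimes(n_starless, n_starforming, t_starforming=None):
    """Relative phase lifetimes from pixel counts, assuming each
    phase's lifetime is proportional to its observed pixel fraction.
    If t_starforming (an absolute anchor, e.g. from maser/UCHII
    survey statistics) is given, also return the absolute starless
    lifetime."""
    total = n_starless + n_starforming
    out = {"f_starless": n_starless / total,
           "f_starforming": n_starforming / total}
    if t_starforming is not None:
        out["t_starless"] = t_starforming * n_starless / n_starforming
    return out

# hypothetical counts (65% starless) anchored to an assumed
# 0.3 Myr star-forming phase
res = phase_lifetimes(6500, 3500, t_starforming=0.3)
```

The pixel counts, the 65% fraction, and the 0.3 Myr anchor here are all invented for the example; the measured fractions and anchors are derived later in the paper.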
Previous lifetime estimates, based primarily on mid-IR emission signatures toward samples of IRDCs or dust continuum clumps, found relative starless fractions of about 30-80% and extrapolated these to absolute starless lifetimes ranging anywhere from 10$^{3}$ to 10$^{6}$ years. Notably, the recent comprehensive studies of clump lifetimes from @svo16 and @hey16 find starless fractions of 47% and 69%, respectively. Our approach differs from previous methods by 1) not introducing clump boundaries, thereby using the highest resolution information available, 2) defining regions capable of forming high-mass stars based strictly on their column density, and 3) using the dust temperature and the mid-IR emission signature to classify a region as starless or star-forming. Methods {#sec:method} ======= The Galactic Plane centered at Galactic longitude $\ell =$ 30 contains one of the largest concentrations of dense gas and dust in the Milky Way. Located near the end of the Galactic bar and the start of the Scutum-Centaurus spiral arm, this region contains the massive W43 star forming complex at a distance of 5.5 kpc [@ell15] and hundreds of massive clumps and molecular clouds with more than 80% of the emission having  = 80 to 120 , implying a kinematic distance between 5 and 9 kpc [@Carlhoff13; @ell13; @ell15]. We investigate the properties and star-forming stages of DMRs in a 2$\times$ 2 field centered at \[$\ell$,b\] = \[30, 0\] using survey data from Hi-GAL [@mol10], GLIMPSE [@ben03], 6.7 GHz CH$_{3}$OH masers [@pes05], and UCHII regions [@woo89]. In a previous work, we measured T$_{dust}$ and N(H$_{2}$) from modified blackbody fits to background-subtracted Hi-GAL data from 160 to 500  using methods described in @bat11 and identified each pixel as mid-IR-bright, mid-IR-neutral, or mid-IR-dark, based on its contrast at 8 . The column density and temperature maps are produced using data which have had the diffuse Galactic component removed, as described in @bat11.
We use the 25 version of the maps, convolving and regridding all the data to this resolution, which corresponds to beam sizes of 0.6 and 1.1 pc and pixel sizes of 0.13 and 0.24 pc at typical distances of 5 and 9 kpc. In this work, we identify pixels above the column density threshold for high-mass star formation (§\[sec:nh2\_thresh\]), then, on a pixel-by-pixel basis, identify them as starless or star-forming (§\[sec:starry\]), and from their fractions infer their relative lifetimes. Using absolute lifetimes estimated from survey statistics of 6.7 GHz CH$_{3}$OH masers and UCHII regions, we anchor our relative lifetimes to estimate the absolute lifetimes of the DMRs (§\[sec:maser\] and \[sec:uchii\]). Previous works (see §\[sec:comp\]) have estimated star-forming lifetimes over contiguous “clumps,” typically $\sim$1 pc in size. However, sub-mm identified clumps often contain distinct regions in different stages of star formation. It was this realization that led us to the pixel-by-pixel approach. In the clump approach, actively star-forming and quiescent gas are lumped together; a single signature of star formation in a clump will qualify all of the gas within it as star-forming. This association of a large amount of non-star-forming gas with a star-forming clump could lead to erroneous lifetime estimates for the phases of high-mass star formation, though we note that this will be less problematic in higher-resolution analyses. Therefore, we use a pixel-by-pixel approach and consider any pixel with sufficient column density (see next section) to be a dense molecular region, DMR. ![image](f1.pdf) Column Density Thresholds for High-Mass Star Formation {#sec:nh2_thresh} ------------------------------------------------------ High-mass stars form in regions with surface densities $\Sigma$ $\sim$ 1 g cm$^{-2}$ [@kru08], corresponding to N(H$_{2}$) $\sim$ 2.1$\times$10$^{23}$ cm$^{-2}$. At distances of several kpc or more, most cores are highly beam-diluted in our 25 beam.
To derive a realistic high-mass star-forming column density threshold for cores beam-diluted by a 25 beam, consider a spherical core with a constant density in the central core (defined as $r < r_{f}$, where r$_f$ is the radius of the flat inner portion of the density profile, determined from fits to the data), with a column density of $\Sigma$ = 1 g cm$^{-2}$ through the center, and a power-law drop off for $r > r_{f}$ $$n(r) = n_{f} (r / r_{f}) ^{-p}$$ where p is the density power-law exponent. The @mue02 study of 51 high-mass star-forming cores found a best-fit central core radius, r$_{f} ~\approx$ 1000 AU, and density power-law index, $p$ = 1.8. This model implies an H$_2$ central density of $n_{f}$ = 6.2 $\times$ 10$^{7}$ cm$^{-3}$, which, integrated over $r_{f}$ = 1000 AU, corresponds to the theoretical surface density threshold for forming high-mass stars of $\Sigma$ = 1 g cm$^{-2}$. Integration of this model core along the line of sight and convolution with a 25 beam results in a beam-diluted column density threshold, at typical distances of 5 and 9 kpc toward the $\ell$ = 30 field [@ell13; @ell15], of N(H$_{2}$) = 0.8 and 0.4 $\times$ 10$^{22}$ cm$^{-2}$, respectively. Pixels above this column density threshold are referred to as dense molecular regions (DMRs). We note that the column density maps used have had the diffuse Galactic background removed, as described in @bat11, so we can attribute all of the column density to the DMRs themselves. As discussed in §\[sec:colfig\], the relative lifetimes are mostly insensitive to variations in the threshold column density from $\sim$0.3 to 1.3 $\times$ 10$^{22}$ cm$^{-2}$ (corresponding to distances of 11 and 3 kpc for the model core). We make two estimates of lifetimes throughout the text using the extreme ends of reasonable parameter space; for this section, the cutoffs are N(H$_{2}$) = 0.4 $\times$ 10$^{22}$ cm$^{-2}$ for the ‘generous’ and N(H$_{2}$) = 0.8 $\times$ 10$^{22}$ cm$^{-2}$ for the ‘conservative’ estimates.
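The surface-density-to-column-density conversion underlying these thresholds is simply N(H$_2$) = $\Sigma / (\mu\, m_{\rm H})$. A minimal check, assuming the standard mean molecular weight per H$_2$ of $\mu$ = 2.8 (our assumption here, not stated explicitly in the text):

```python
# Convert a mass surface density [g cm^-2] to an H2 column density [cm^-2].
M_H = 1.6726e-24   # g, mass of a hydrogen atom
MU_H2 = 2.8        # mean molecular weight per H2 molecule (assumed value)

def surface_density_to_column(sigma_g_cm2):
    """N(H2) [cm^-2] implied by a surface density [g cm^-2]."""
    return sigma_g_cm2 / (MU_H2 * M_H)

# Sigma = 1 g cm^-2 (the Krumholz & McKee threshold quoted above)
print(f"N(H2) = {surface_density_to_column(1.0):.2e} cm^-2")
# recovers ~2.1e23 cm^-2, as quoted in the text
```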
Alternatively, we apply the @kau10 criteria (hereafter KP10) for high-mass star formation. @kau10 observationally find that regions with high-mass star formation tend to have a mass-radius relationship of m(r) $>$ 870 M$_{\odot}$ (r/pc)$^{1.33}$. At our beam-sizes of 0.6 and 1.1 pc, this corresponds to column densities of about 2.5 and 1.6 $\times$ 10$^{22}$ cm$^{-2}$, respectively. The results of this column density threshold are discussed in more detail in each results section (§\[sec:colfig\], \[sec:maser\], and \[sec:uchii\]). The result that our relative lifetimes are mostly insensitive to variations in the threshold column density holds true with these higher column density thresholds. ![image](f2.pdf){width="100.00000%"} Starless vs. Star-Forming {#sec:starry} ------------------------- In this paper, dust temperature distributions are combined with mid-IR star formation signatures to determine whether each DMR is ‘starless’ or ‘star-forming’. The mid-IR signature is determined from the contrast at 8 , i.e. how ‘bright’ or ‘dark’ the pixel is relative to the background (the 8  image smoothed with a median filter of 25 resolution) [see @bat11 for details]. We use the mid-IR signature at 8  (mid-IR dark or bright) as the main discriminator and the dust temperature as a secondary discriminator, particularly to help identify regions that are cold and starless but do not show absorption as an IRDC because they may be on the far side of the Galaxy.
We use the range of dust temperatures found to be associated with mid-IR-dark and bright regions (based on Gaussian fits to those temperature distributions) to help discriminate whether DMRs are ‘starless’ or ‘star-forming.’ If a DMR is mid-IR-dark and its temperature is within the normal cold dust range (2 or 3-$\sigma$ for the ‘conservative’ and ‘generous’ thresholds respectively), then it is classified as ‘starless.’ If a DMR is mid-IR-bright and its temperature is within the normal warm dust range (2 or 3-$\sigma$ for the ‘conservative’ and ‘generous’ thresholds respectively), then it is classified as ‘star-forming.’ Slight changes in the temperature distributions (e.g., including all DMRs down to 0 K as starless and up to 100 K as star-forming) have a negligible effect. Pixels that are mid-IR-bright and cold or mid-IR-dark and warm are extremely rare and are left out of the remaining analysis. When a DMR is mid-IR-neutral, its temperature is used to classify it as ‘starless’ or ‘star-forming.’ A flow chart depicting this decision tree is shown in . This study is only sensitive to the star-forming signatures of high-mass stars. Therefore, the term ‘starless’ refers only to the absence of high-mass stars forming; the region may support active low- or intermediate-mass star formation. Using @rob06 models of dust emission toward YSOs, scaled to 5 and 9 kpc distances and apertures, we find that our typical starless flux limit of 130 MJy/sr at 8  (technically we use a contrast, not a specific flux, but this is the approximate level for most of the region at our contrast cut) is sensitive enough to detect the vast majority (85%) of possible model YSOs with a mass above 10 . Some YSO models will always be undetected (due to unfortunate inclinations, etc.) no matter the flux limit, so we estimate that we are sensitive to forming massive stars above $\sim$10 .
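The classification decision tree described above can be sketched as follows; the function and the illustrative temperature ranges are ours, standing in for the Gaussian-fit ranges actually used:

```python
def classify_dmr(mid_ir, t_dust, cold_range, warm_range):
    """Classify a DMR pixel as 'starless', 'star-forming', or None.

    mid_ir: 8-um signature, one of 'dark', 'neutral', 'bright'.
    cold_range / warm_range: (lo, hi) dust-temperature intervals from
    Gaussian fits to the mid-IR-dark and mid-IR-bright distributions
    (2- or 3-sigma for the 'conservative' / 'generous' thresholds).
    """
    cold = cold_range[0] <= t_dust <= cold_range[1]
    warm = warm_range[0] <= t_dust <= warm_range[1]
    if mid_ir == "dark":
        return "starless" if cold else None      # dark + warm: rare, dropped
    if mid_ir == "bright":
        return "star-forming" if warm else None  # bright + cold: rare, dropped
    # mid-IR-neutral: temperature alone decides
    if cold:
        return "starless"
    if warm:
        return "star-forming"
    return None  # outside both ranges: unclassified

# Illustrative temperature ranges in K (placeholders for the fitted ranges):
print(classify_dmr("dark", 14.0, (8, 20), (22, 40)))     # starless
print(classify_dmr("neutral", 30.0, (8, 20), (22, 40)))  # star-forming
```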
While contrast at 8  is an imperfect measure of the presence or absence of high-mass star formation, it was found to be a powerful discriminator in @bat11, who compared it with dust temperature, 24  emission, maser emission, and Extended Green Objects. In particular, since we employ the contrast at 8  smoothed over 25 using a median filter, rather than the simple presence or absence of emission, we are unlikely to be affected by field stars, which are small and very bright, and thus will be removed by the median filter smoothing. PAH emission at 8  toward Photon-Dominated Regions (PDRs) is not likely to be an issue since the low column density of dust towards PDRs will preclude their inclusion as DMRs in the first place. Lifetimes {#sec:lifetimes} ========= Assumptions in Deriving Relative Lifetimes {#sec:caveats} ------------------------------------------ Several necessary assumptions were made in inferring the relative lifetimes of the starless and star-forming stages for high-mass star-forming regions.

i) The sample is complete and unbiased in time and space.

ii) The sum of DMRs represents with equal probability all the phases in the formation of a massive star. This can be achieved either with a constant star formation rate (SFR) or a large enough sample.

iii) DMRs are forming or will form high-mass stars.

iv) The lifetimes don’t depend on mass.

v) Starless and star-forming regions occupy similar areas on the sky (i.e. their area is proportional to the mass of gas).

vi) The signatures at 8  (mid-IR bright or dark) and their associated temperature distributions are good indicators of the presence or absence of a high-mass star.

vii) Pixels above the threshold column density contain beam-diluted dense cores rather than purely beam-filling low-surface density gas.

We discuss these assumptions below. Assumptions (i) and (ii) are reasonable given the size of our region; we are sampling many populations in different evolutionary stages.
The relative lifetimes we derive here may apply elsewhere to special regions in the Galaxy with similar SFRs and properties. However, since this region contains W43, a ‘mini-starburst’ [e.g. @bal10; @lou14], it is likely in a higher SFR phase and may not be applicable generally throughout the Galaxy. One possible complication to assumption (ii) is the possibility that some high-mass stars can be ejected from their birthplaces with velocities sufficient to be physically separated from their natal clumps within about 1 Myr. However, this should not result in an underestimate of the star-forming fraction of DMRs, since high-mass star formation is highly clustered, and if the stellar density is sufficient for ejection of high-mass stars, then the remaining stars should still classify the region as star-forming. On the other hand, the ejected stars could in principle classify a dense, starless region they encounter as ‘star-forming’ by proxy. We expect that this would be rare, but suggest it as an uncertainty worthy of further investigation. We argue in favor of assumption (iii) in §\[sec:nh2\_thresh\]. While we expect that assumption (iv) is not valid over clumps of all sizes, this assumption is reasonable (and necessary) for our sample, as the column density variation over our DMRs is very small (from the threshold density to double that value). This lack of column density variation in our DMRs also argues in favor of assumption (v). Various studies [e.g., @bat10; @cha09] argue in favor of assumption (vi), statistically, but more sensitive and higher resolution studies will continue to shed light on the validity of this assumption. Assumption (vii) is supported by the fact that interferometric observations of DMRs [e.g. @bat14b] and density measurements [@gin15] demonstrate that most (if not all) DMRs contain dense substructure rather than purely beam-filling low surface-density gas.
While the high column density of some DMRs may indicate not a high volume density but rather long filaments seen ‘pole-on,’ we expect this to be quite rare. If this special geometry were common, it would lead to an over-estimate of the starless fraction. ![image](f3-eps-converted-to.pdf) Observed Relative Lifetimes {#sec:colfig} --------------------------- For both the ‘conservative’ and ‘generous’ estimates, the relative percentage of DMRs in the starless / star-forming phase is about 70%/30%. The higher column density thresholds from KP10 give relative percentages of 63%/37%. These lifetimes are shown in . The dashed lines in show the percentage of pixels in the starless vs. starry categories for a range of column density thresholds shown on the x-axis. Below a column density threshold of about 0.3 $\times$ 10$^{22}$ cm$^{-2}$, starless and starry pixels are equally distributed (50%). At any column density threshold in the range of 0.3 - 1.3 $\times$ 10$^{22}$ cm$^{-2}$, about 70% of the pixels are categorized as starless and 30% as starry (i.e. star-forming). At the higher column density thresholds from KP10, 1.7 - 2.5 $\times$ 10$^{22}$ cm$^{-2}$, about 63% are starless and 37% star-forming. At each column density, the number of pixels in each category (solid lines in , N$_{starless}$ and N$_{starry}$) divided by the total number of pixels at that column density (N$_{total}$), multiplied by 100, gives the percentage of pixels in each category (dashed lines in , Starless % and Starry %). $$\rm{Starless~\%} = [ N_{starless} / N_{total} ] \times 100$$ $$\rm{Starry~\%} = [ N_{starry} / N_{total} ] \times 100$$ The histograms in solid lines in show the number of pixels in each category with a given column density, as noted on the x-axis; i.e. a column density probability distribution function. This figure demonstrates that the average column density is higher for starless pixels.
We interpret this to mean that regions categorized as starless have a high capacity for forming future stars (high column density and cold). Our method of comparing these populations above a threshold column density allows us to disentangle them, and derive relative lifetime estimates for regions on the brink of forming stars (starless) vs. actively forming stars (starry). Under the assumptions discussed in §\[sec:method\] and \[sec:caveats\], high-mass star-forming DMRs spend about 70% of their lives in the starless phase and 30% in the actively star-forming phase. If we instead apply the KP10 criteria as a column density threshold (see §\[sec:nh2\_thresh\]), our relative lifetimes are 63% for the starless phase and 37% for the star-forming phase. We therefore conclude that the starless phase occupies approximately 60-70% of the lifetime of DMRs while the star-forming phase occupies about 30-40%, over a wide range of both column density and temperature thresholds, as shown by the ‘conservative’ vs. ‘generous’ criteria. While our relative lifetime estimates are robust over a range of parameters, the connection of our relative lifetimes to absolute lifetimes is extremely uncertain. We present below two methods to connect our relative lifetimes to absolute timescales. The first is to link the methanol masers detected in our region with Galactic-scale maser surveys, which provide an estimate of the maser lifetime. The second approach is to instead assume an absolute lifetime of UCHII regions to anchor our relative DMR lifetimes. Maser association and absolute lifetime estimates {#sec:maser} ------------------------------------------------- We use the association of DMRs with 6.7 GHz Class II methanol masers (thought to be almost exclusively associated with regions of high-mass star formation), and the lifetime of these masers from @van05, to anchor our relative lifetimes to absolute timescales.
These lifetimes are highly uncertain and rest on a number of assumptions, therefore care should be taken in their interpretation. We utilize unbiased Galactic plane searches for methanol masers by @szy02 and @ell96 compiled by @pes05. We define the ‘sizes’ of the methanol masers to be spherical regions with a radius determined by the average size of a cluster forming clump associated with a methanol maser from the BGPS as $R\sim0.5$ pc [@svo16], corresponding to 30 diameter apertures for the average distance in this field. While methanol maser emission comes from very small areas of the sky [e.g., @wal98], they are often clustered, so these methanol maser “sizes" are meant to represent the extent of the star-forming region. The absolute lifetime of DMRs can be anchored to the duration of the 6.7 GHz Class II CH$_{3}$OH masers, which are estimated to have lifetimes of $\sim$35,000 years [by extrapolating the number of masers identified in these same surveys to the total number of masers in the Milky Way and using an IMF and global SFR to estimate their lifetimes; @van05]. We note that the @van05 extrapolated total number of methanol masers in the Galaxy of 1200 is in surprisingly good agreement with the more recent published methanol maser count from the MMB group [they find 582 over half the Galaxy; @cas10; @cas11; @gre12], therefore, though this catalog is outdated, its absolute lifetime estimate remains intact, though we suggest that future works use these new MMB catalogs, which have exquisite positional accuracy. While the fraction of starless vs. star-forming DMRs is insensitive to the column density cuts, the fraction of DMRs associated with methanol masers ($f_{maser}$) increases as a function of column density (see ). In the ‘generous’ and ‘conservative’ cuts, the methanol maser fraction is 2% and 4% ($f_{maser}$), respectively, corresponding to total DMR lifetimes ($\tau_{total}$) of 1.9 Myr and 0.9 Myr. 
Using the alternative column density threshold from @kau10 (KP10), as discussed in §\[sec:nh2\_thresh\], the maser fraction is about 7-12%, corresponding to total DMR lifetimes ($\tau_{total}$) of 0.3 and 0.4 Myr. Given our starless and star-forming fractions ($f_{starless}$=0.6-0.7 and $f_{starry}$=0.3-0.4) and the methanol maser lifetime [$\tau_{maser}$=35,000 years; @van05] we can calculate the total DMR lifetime and relative phase lifetimes using the following equations: $$\tau_{total} = \frac{\tau_{maser}}{f_{maser}}$$ $$\tau_{starless} = f_{starless}~ \tau_{total}$$ $$\tau_{starry} = f_{starry}~ \tau_{total}$$ The ‘starless’ lifetime then is about 0.6-1.4 Myr, while the ‘star-forming’ lifetime is 0.2-0.6 Myr, considering only the ‘conservative’ and ‘generous’ thresholds. The KP10 column density threshold for high-mass star formation, because of its larger methanol maser fraction, yields absolute starless lifetimes of 0.2-0.3 Myr and star-forming lifetimes of 0.1-0.2 Myr. Overall, the range of absolute starless lifetimes is 0.2 - 1.4 Myr and of star-forming lifetimes is 0.1 - 0.6 Myr. See for a summary of the relative and absolute lifetimes for various methods.
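The maser-anchored arithmetic in the equations above can be sketched as follows; the printed values use the rounded ‘conservative’ fractions, so they differ slightly from the quoted lifetimes, which use unrounded maser fractions:

```python
TAU_MASER = 35_000  # yr, 6.7 GHz CH3OH maser lifetime (van der Walt 2005)

def dmr_lifetimes(f_maser, f_starless, f_starry):
    """Total, starless, and star-forming DMR lifetimes [Myr],
    anchored to the methanol maser lifetime."""
    tau_total = TAU_MASER / f_maser / 1e6  # Myr
    return tau_total, f_starless * tau_total, f_starry * tau_total

# 'Conservative' cut: 4% maser fraction, 72%/28% starless/starry split
total, starless, starry = dmr_lifetimes(0.04, 0.72, 0.28)
print(f"total {total:.2f} Myr, starless {starless:.2f}, starry {starry:.2f}")
# ~0.9 Myr total; the phase lifetimes scale by the fractions
```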
| Criteria | N(H$_2$) \[cm$^{-2}$\] | $f_{starless}$ | $f_{starry}$ | Anchor | $\tau_{total}$ \[Myr\] | $\tau_{starless}$ | $\tau_{starry}$ |
|----------|------------------------|----------------|--------------|--------|------------------------|-------------------|-----------------|
| Generous | 0.4$\times$10$^{22}$ | 0.71 | 0.29 | maser | 1.9 | 1.4 | 0.6 |
| Conservative | 0.8$\times$10$^{22}$ | 0.72 | 0.28 | maser | 0.9 | 0.6 | 0.3 |
| Either | 0.4-0.8$\times$10$^{22}$ | 0.7 | 0.3 | UCHII | 1.2-2.4 | 0.8-1.7 | 0.4-0.7 |
| KP10 near | 2.5$\times$10$^{22}$ | 0.63 | 0.37 | maser | 0.3 | 0.2 | 0.1 |
| KP10 far | 1.6$\times$10$^{22}$ | 0.63 | 0.37 | maser | 0.5 | 0.3 | 0.2 |
| Either KP | 1.6-2.5$\times$10$^{22}$ | 0.63 | 0.37 | UCHII | 1.0-1.9 | 0.6-1.2 | 0.4-0.7 |
| Overall | 0.4-2.5$\times$10$^{22}$ | 0.6-0.7 | 0.3-0.4 | both | 0.3-2.4 | 0.2-1.7 | 0.1-0.7 |

| Criteria | Median N(H$_2$) | Anchor | $\tau_{ff}$ \[Myr\] | N$_{ff, total}$ | N$_{ff, starless}$ | N$_{ff, starry}$ |
|----------|-----------------|--------|---------------------|-----------------|--------------------|------------------|
| Generous | 0.6$\times$10$^{22}$ | maser | 0.7 | 2.8 | 2.0 | 0.8 |
| Conservative | 1.2$\times$10$^{22}$ | maser | 0.5 | 1.8 | 1.3 | 0.5 |
| Either | 0.6-1.2$\times$10$^{22}$ | UCHII | 0.5-0.7 | 1.7-4.8 | 1.2-3.4 | 0.5-1.4 |
| KP10 near | 3.5$\times$10$^{22}$ | maser | 0.3 | 1.0 | 0.6 | 0.4 |
| KP10 far | 2.2$\times$10$^{22}$ | maser | 0.4 | 1.1 | 0.7 | 0.4 |
| Either KP | 2.2-3.5$\times$10$^{22}$ | UCHII | 0.3-0.4 | 2.4-6.5 | 1.5-4.1 | 0.9-2.4 |
| Overall | 0.6-3.5$\times$10$^{22}$ | both | 0.3-0.7 | 1.0-6.5 | 0.6-4.1 | 0.4-2.4 |

UCHII Region Association and Lifetimes {#sec:uchii} -------------------------------------- Since the absolute lifetimes of methanol masers are quite uncertain, we tie our relative lifetimes to absolute lifetimes of UCHII regions using a different method to probe the range of parameter space that is likely for DMRs. @woo89b [@woo89] determined that the lifetimes of UCHII regions are longer than anticipated based on the expected expansion rate of D-type ionization fronts as HII regions evolve toward pressure equilibrium.
They estimate that O stars spend about 10-20% of their main-sequence lifetime in molecular clouds as UCHII regions, or about 3.6 $\times$ 10$^{5}$ years[^1], $\tau_{UCHII}$. The remaining link between the absolute and relative lifetimes is the fraction of DMRs associated with UCHII regions, particularly O stars. @woo89 look for only the brightest UCHII regions, dense regions containing massive stars, while the more recent and more sensitive studies of @and11 show HII regions over wider evolutionary stages, after much of the dense gas cocoon has been dispersed. We suggest that future works investigate the use of newer catalogs from the CORNISH survey [e.g., @pur13]. @woo89 searched three regions in our $l$ = 30  field (all classified as “starry" in our study) for UCHII regions and found them toward two. Therefore the @woo89 absolute lifetime of 3.6 $\times$ 10$^{5}$ years ($\tau_{UCHII}$) corresponds, very roughly, to 2/3 of the ‘starry’ DMRs they surveyed in our analyzed field. We make the assumption that approximately 50-100% of our ‘starry’ pixels are associated with UCHII regions ($f_{UCHII}$). This assumption is based on the following lines of evidence: 1) 8  emission is often indicative of UV excitation [@ban10 show that nearly all GLIMPSE bubbles, 8  emission, are associated with UCHII regions], 2) “starry" pixels show warmer dust temperatures, and 3) UCHII regions were found toward 2/3 of the regions surveyed by @woo89 in our field, as shown above. The assumption that 50-100% of ‘starry’ pixels are associated with UCHII regions ($f_{UCHII}$) corresponds to total DMR lifetimes ($\tau_{total}$) of 2.4 or 1.2 Myr (for 50% or 100%, respectively) when we assume a starry fraction of 30% ($f_{starry}$) as shown in the equation below. $$\tau_{total} = \frac{\tau_{UCHII}}{f_{starry}~f_{UCHII}}$$ The absolute lifetime of the starless phase for the ‘Generous’ and ‘Conservative’ thresholds (using Equations 5 and 6) is then 0.8-1.7 Myr, and that of the “starry" phase is 0.4-0.7 Myr.
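The UCHII anchoring equation above reduces to a one-line calculation; this sketch reproduces the 1.2-2.4 Myr range quoted in the text:

```python
TAU_UCHII = 3.6e5  # yr, UCHII lifetime (~15% of an O6 main-sequence lifetime)

def total_lifetime_uchii(f_starry, f_uchii):
    """Total DMR lifetime [Myr], anchored to the UCHII phase duration:
    tau_total = tau_UCHII / (f_starry * f_UCHII)."""
    return TAU_UCHII / (f_starry * f_uchii) / 1e6

for f_uchii in (0.5, 1.0):
    print(f"f_UCHII = {f_uchii}: "
          f"tau_total = {total_lifetime_uchii(0.30, f_uchii):.1f} Myr")
# f_UCHII = 0.5 -> 2.4 Myr; f_UCHII = 1.0 -> 1.2 Myr, as quoted
```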
If we instead assume the KP10 column density threshold for high-mass star formation, $f_{UCHII}$ would not change; only the star-forming and starless fractions would, in this case 37% and 63%, respectively. The absolute lifetimes inferred for these phases based on association with UCHII regions are a $\tau_{total}$ of 1.0-1.9 Myr, a $\tau_{starless}$ of 0.6-1.2 Myr and a $\tau_{starry}$ of 0.4-0.7 Myr. See for a summary of the relative and absolute lifetimes for various methods. Free-fall times {#sec:ff} --------------- The absolute lifetimes derived in §\[sec:maser\] and \[sec:uchii\] are compared with fiducial cloud free-fall times. To calculate ‘fiducial’ cloud free-fall times, we first calculate the median pixel column density for each of the categories (conservative, generous, KP10 near, and KP10 far). These are shown in . We then use Equations 11 and 12 from @svo16 to calculate a fiducial free-fall time from the column densities. The central volume density is calculated from the column density assuming a characteristic length of 1 pc and a spherically symmetric Gaussian density distribution [see @svo16 for details]. We stress that there are many uncertainties in this calculation, as we do not know the true volume densities, but these are simply meant to provide approximate free-fall times based on known cloud parameters, such as the median column density and typical size. Moreover, these free-fall times are, of course, calculated at the ‘present day,’ and we do not know what the ‘initial’ cloud free-fall times were. The fiducial free-fall times for each category are listed in . For each category, we then convert the derived absolute lifetimes into numbers of free-fall times. These are shown in the rightmost three columns of . The number of free-fall times for the total lifetimes ranges from 1-6.5. The starless phase ranges between 0.6-4.1 free-fall times and the starry phase ranges from 0.4-2.4 free-fall times.
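For reference, a fiducial free-fall time follows from the standard expression $t_{ff} = \sqrt{3\pi/(32 G \rho)}$; the sketch below uses an illustrative central density rather than the @svo16 Gaussian-sphere densities, which we do not reproduce here:

```python
import math

G = 6.674e-8        # cm^3 g^-1 s^-2, gravitational constant (cgs)
M_H = 1.6726e-24    # g, mass of a hydrogen atom
MU_H2 = 2.8         # mean molecular weight per H2 (assumed)
SEC_PER_MYR = 3.156e13

def free_fall_time_myr(n_h2):
    """Free-fall time [Myr] for a central H2 number density [cm^-3],
    using t_ff = sqrt(3 pi / (32 G rho))."""
    rho = n_h2 * MU_H2 * M_H  # mass density [g cm^-3]
    return math.sqrt(3 * math.pi / (32 * G * rho)) / SEC_PER_MYR

# An illustrative central density of 10^4 cm^-3 gives t_ff ~ 0.3 Myr,
# comparable to the fiducial values in the table.
print(f"{free_fall_time_myr(1e4):.2f} Myr")
```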
| Clump Identification | SF Identification | $f_{starless}$ | Reference |
|----------------------|-------------------|----------------|-----------|
| IRDC | 24  | 0.65 | @cha09 |
| IRDC | 8  | 0.82 | @cha09 |
| IRDC | 24  | 0.33 | @par09 |
| IRDC | 24  | 0.32-0.80 | @per09 |
| LABOCA | 8/24  | 0.44 | @mie12 |
| ATLASGAL | 22  | 0.25 | @cse14 |
| ATLASGAL | GLIMPSE+MIPSGAL | 0.23 | @tac12 |
| Hi-GAL | 24  | 0.18 | @wil12 |
| Hi-GAL | 8  | 0.33 | @wil12 |
| BGPS | mid-IR catalogs[^2] | 0.80 | @dun11a |
| BGPS | many tracers | 0.47 | @svo16 |
| ATLASGAL | MIPSGAL YSOs | 0.69 | @hey16 |
| N(H$_{2}$) from Hi-GAL | T$_{dust}$ and 8  | 0.60-0.70 | This work |

Comparison with Other Lifetime Estimates for Dense, Molecular Clumps {#sec:comp} -------------------------------------------------------------------- Previous lifetime estimates are summarized in . They are based primarily on mid-IR emission signatures at 8/24  toward IRDCs or dust-identified clumps and find starless fraction percentages between 23-82%. In previous studies, these starless fractions are often extrapolated to absolute starless lifetimes ranging from $\sim$ 10$^{3}$-10$^{6}$ years. Additionally, @tac12 find a lifetime for the starless phase for the most massive clumps of 6 $\times$ 10$^{4}$ years based on an extrapolated total number of starless clumps in the Milky Way and a Galactic SFR, and @ger14 found an IRDC lifetime of 10$^{4}$ years based on chemical models. Previous analyses determined lifetimes of high-mass star-forming regions by calculating relative fractions of ‘clumps’ or ‘cores,’ defined in various ways and with arbitrary sizes. All the regions within each ‘clump’ or ‘core’ are lumped together and collectively denoted as starless or star-forming. Since clumps identified in sub-mm surveys generally contain gas in different stages of star formation (actively star-forming and quiescent), we chose to use the pixel-by-pixel approach. In this way, gas that is star-forming or quiescent is identified as such without being instead included in a different category due to its association with a clump.
Previously, a single 8 or 24  point source would classify an entire clump as star-forming; therefore, we expect that our pixel-by-pixel approach will identify more regions as starless and give a higher starless fraction. Our relative lifetime estimates are in reasonable agreement with previous work on the topic, and yield a somewhat higher starless fraction, as would be expected with the pixel-by-pixel method. Of particular interest for comparison are the recent works of @svo16 and @hey16. @svo16 perform a comprehensive analysis of over 4500 clumps from BGPS across the Galactic Plane, including their distances, physical properties, and star-forming indicators. In this analysis they notice a possible trend in which the clump starless lifetimes decrease with clump mass. Overall, about 47% of their clumps can be classified as ‘starless’ clump candidates, and using a similar method for determining absolute lifetimes, they find lifetimes of 0.37 $\pm$ 0.08 Myr (M/10$^{3}$ )$^{-1}$ for clumps more massive than 10$^3$ . @hey16 similarly perform a comprehensive analysis of the latency of star formation in a large survey of about 3500 clumps identified by ATLASGAL. They carefully identify MIPSGAL YSOs [@gut15] that overlap with these clumps, and accounting for clumps excluded due to saturation, find that about 31% are actively star-forming. They conclude that these dense, molecular clumps have either no star formation, or low-level star formation below their sensitivity threshold, for about 70% of their lifetimes. Our starless lifetime of about 60-70% agrees remarkably well with both of these studies. The @svo16 analysis includes regions of lower column density than our selection and is also sensitive to the early signatures of lower-mass stars, so it would be expected that their starless fraction is a bit lower.
The absolute lifetimes we derive are larger than in most previous studies simply because of how they are anchored: most previous studies assume a star-forming lifetime ($\tau_{starry}$) of 2 $\times$ 10$^{5}$ years [representative YSO accretion timescale, @zin07]. If this star-forming lifetime ($\tau_{starry}$) is used along with our starless fractions of 0.6-0.7, we would derive starless lifetimes ($\tau_{starless}$, using Equations 5 and 6) of 0.3 - 0.5 Myr. Overall, there is quite a wide range in the estimates of the starless lifetimes for DMRs. However, the relatively good agreement between the comprehensive studies of @svo16 and @hey16 on individual clumps and the present work using a variety of star-formation tracers and a pixel-by-pixel analysis over a large field may indicate that these values are converging. Moreover, it is crucial to understand that different techniques will necessarily provide different values, as each is probing a different clump or DMR density and some star-formation tracers will be more sensitive to the signatures of lower-mass stars. The matter is, overall, quite complex, and assigning a single lifetime to regions of different masses and densities is a simplification [@svo16]. Comparison with Global Milky Way SFRs {#sec:compsfr} ------------------------------------- One simple sanity test for our lifetime estimates is to compare them with global SFRs. We use our column density map of the “starry" regions and convert it to a total mass of material in the star-forming phase. For the range of column densities considered, assuming distances between 5-9 kpc, we find the total mass of material engaged in forming high-mass stars to be about 0.5 - 3 $\times$ 10$^5$ M$_{\odot}$ in the 2 $\times$ 2 field centered at \[$\ell$, b\] = \[30, 0\]. We assume a typical star formation efficiency of 30% to derive the mass of stars we expect to form in the region over the “starry" lifetime of 0.1 - 0.7 Myr.
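The implied rate is simply efficiency $\times$ gas mass / lifetime; a quick check of the extreme combinations follows (note these endpoints combine extremes freely, whereas the quoted range pairs particular thresholds with particular distances, so the values differ slightly):

```python
def field_sfr(gas_mass_msun, efficiency, lifetime_yr):
    """Star formation rate [Msun/yr] implied by a gas reservoir that
    converts a fraction `efficiency` of its mass into stars over one
    star-forming lifetime."""
    return efficiency * gas_mass_msun / lifetime_yr

# Quoted ranges: 0.5-3e5 Msun of 'starry' gas, 30% efficiency,
# 0.1-0.7 Myr star-forming lifetime.
lo = field_sfr(0.5e5, 0.30, 0.7e6)   # least gas, longest lifetime
hi = field_sfr(3.0e5, 0.30, 0.1e6)   # most gas, shortest lifetime
print(f"{lo:.2f} - {hi:.2f} Msun/yr in the 2x2 deg field")
```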
Multiplying the gas mass by the efficiency and dividing by this lifetime gives us a total SFR in our 2 $\times$ 2 region of 0.02 - 0.82 /year (given the ranges in lifetimes, thresholds and distances). Extrapolating this to the entire Galaxy within the solar circle [assuming a sensitivity of our measurements from 3 to 12 kpc and a typical scale height of 80 pc as in @rob10] gives a global Milky Way SFR ranging from 0.3 - 20 /year. Typical estimates of Milky Way SFRs range from about 0.7 - 10 /year, and when accounting for different IMFs, converge to about 2 /year [e.g. @rob10; @cho11]. Since our observed field contains W43, often touted as a “mini-starburst" [e.g. @bal10; @lou14], we expect our inferred global SFR to be higher than the true global SFR. Due to the many uncertainties and assumptions, we find that the inferred global SFR has a large range and is between 0.1 - 10 $\times$ the fiducial value of 2 /year. While the level of uncertainties and assumptions precludes any meaningful inference from this comparison, our numbers do pass the simple sanity check. Additionally, the average of inferred global SFRs is higher than the fiducial value, as would be expected for this highly active, “mini-starburst" region of the Galaxy. Conclusion {#sec:conclusion} ========== We estimate the relative lifetimes of the starless and star-forming phases for all regions capable of forming high-mass stars in a 2 $\times$ 2 field centered at \[$\ell$, b\] = \[30, 0\]. We use column densities derived from Hi-GAL to determine which regions are capable of forming high-mass stars, and dust temperature and Spitzer 8  emission to determine if the region is starless (to a limit of about 10 ) or star-forming. Unlike previous analyses, we do not create any artificial ‘clump’ boundaries, but instead use all the spatial information available and perform our analysis on a pixel-by-pixel basis.
We find that regions capable of forming high-mass stars spend about 60-70% of their lives in a starless or embedded phase with star formation below our detection level and 30-40% in an actively star-forming phase. Absolute timescales for the two phases are anchored to the duration of methanol masers determined from @van05 and the UCHII region phase from @woo89. We include a wide range of possible assumptions and methodologies, which gives a range for starless lifetimes of 0.2 to 1.7 Myr (60-70%) and a star-forming lifetime of 0.1 to 0.7 Myr (30-40%) for high-mass star-forming regions identified in the dust continuum above column densities from 0.4 - 2.5 $\times$ 10$^{22}$ cm$^{-2}$. These lifetimes correspond to about 0.6-4.1 free-fall times for the starless phase and 0.4-2.4 free-fall times for the star-forming phase, using fiducial cloud free-fall times. In this work, we are only sensitive to tracing forming stars more massive than about 10 . If lower-mass stars in the same regions form earlier, the starless timescale for those stars would be even shorter than the 0.6-4.1 free-fall times reported here. We find that the relative lifetimes of about 60-70% of time in the starless phase and 30-40% in the star-forming phase are robust over a wide range of thresholds, but that the absolute lifetimes are rather uncertain. These results demonstrate that a large fraction of high-column density gas is in a starless or embedded phase. We outline a methodology for estimating relative and absolute lifetimes on a pixel-by-pixel basis. This pixel-by-pixel method could easily be implemented to derive lifetimes for dense, molecular regions throughout the Milky Way. We thank the anonymous referee for many insightful and important comments that have greatly improved the manuscript. We also thank P. Meyers, Y. Shirley, H. Beuther, J. Tackenberg, A. Ginsburg, and J. Tan for helpful conversations regarding this work. 
Data processing and map production of the Herschel data has been possible thanks to generous support from the Italian Space Agency via contract I/038/080/0. Data presented in this paper were also analyzed using The Herschel interactive processing environment (HIPE), a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS, and SPIRE consortia. This material is based upon work supported by the National Science Foundation under Award No. 1602583 and by NASA through an award issued by JPL/Caltech via NASA Grant \#1350780. [^1]: For an O6 star, the main sequence lifetime is about 2.4 $\times$ 10$^{6}$ years [@mae87], so 15% is 3.6 $\times$ 10$^{5}$ years. Note that a less massive mid-B star would have a lifetime about 5$\times$ longer, changing our absolute lifetime estimates by that factor. This variation gives a sense of the uncertainties involved in deriving absolute lifetimes. [^2]: @rob08, the Red MSX Catalog from @urq08, and the EGO catalog from @cyg08.
--- abstract: 'We study supersymmetric black holes in $AdS_4$ in the framework of four dimensional gauged $\N=2$ supergravity coupled to hypermultiplets. We derive the flow equations for a general electrically gauged theory where the gauge group is Abelian and, restricting them to the fixed points, we obtain the gauged supergravity analogue of the attractor equations for theories coupled to hypermultiplets. The particular models we analyze are consistent truncations of M-theory on certain Sasaki-Einstein seven-manifolds. We study the space of horizon solutions of the form $AdS_2\times \Sig_g$ with both electric and magnetic charges and find a four-dimensional solution space when the theory arises from a reduction on $Q^{111}$. For other $SE_7$ reductions, the solution space is a subspace of this. We construct explicit examples of spherically symmetric black holes numerically.' --- 1.5cm Nick Halmagyi$^*$, Michela Petrini$^*$, Alberto Zaffaroni$^{\dagger}$\ 0.5cm $^{*}$ Laboratoire de Physique Théorique et Hautes Energies,\ Université Pierre et Marie Curie, CNRS UMR 7589,\ F-75252 Paris Cedex 05, France\ 0.5cm $\dagger$ Dipartimento di Fisica, Università di Milano–Bicocca,\ I-20126 Milano, Italy\ and\ INFN, sezione di Milano–Bicocca,\ I-20126 Milano, Italy\ 0.5cm halmagyi@lpthe.jussieu.fr\ petrini@lpthe.jussieu.fr\ alberto.zaffaroni@mib.infn.it Introduction ============ Supersymmetric, asymptotically $AdS_4$ black holes[^1] with regular spherical horizons have recently been discovered in $\N=2$ gauged supergravities with vector multiplets [@Cacciatori:2009iz]. These solutions have been further studied in [@DallAgata2011; @Hristov:2010ri]. The analytic solution for the entire black hole was constructed and shown to be one quarter-BPS. For particular choices of prepotential and for particular values of the gauge couplings, these black holes can be embedded into M-theory and are asymptotic to $AdS_4\times S^7$. 
The goal of this work is to study supersymmetric, asymptotically $AdS_4$ black holes in more general gauged supergravities, with both vector and hypermultiplets. The specific theories we focus on are consistent truncations of string or M-theory. Supersymmetric black holes in these theories involve running hypermultiplet scalars and are substantially different from the examples in [@Cacciatori:2009iz]. The presence of hypers prevents us from finding analytic solutions of the BPS conditions, nevertheless we study analytically the space of supersymmetric horizon solutions $AdS_2\times \Sigma_g$ and show that there is a large variety of them. We will then find explicit spherically symmetric black hole solutions interpolating between $AdS_4$ and $AdS_2\times S^2$ by numerical methods. The black holes we construct have both electric and magnetic charges. Our demand that the supergravity theory is a consistent truncation of M-theory and that the asymptotic $AdS_4$ preserves $\N=2$ supersymmetry limits our search quite severely. Some of the gauged supergravity theories studied in [@Cacciatori:2009iz] correspond to the $\N=2$ truncations [@Cvetic1999b; @Duff:1999gh] of the de-Wit/Nicolai $\N=8$ theory [@deWit:1981eq] where only massless vector multiplets are kept. In this paper we will focus on more general theories obtained as consistent truncations of M-theory on seven-dimensional Sasaki-Einstein manifolds. A consistent truncation of eleven-dimensional supergravity on a Sasaki-Einstein manifold to a universal sector was obtained in [@Gauntlett:2007ma; @Gauntlett:2009zw]. More recently the general reduction of eleven-dimensional supergravity to four dimensions on left-invariant coset manifolds with $SU(3)$-structure has been performed in [@Cassani:2012pj][^2]. Exploiting the coset structure of the internal manifold it is possible to truncate the theory in such a way to also keep massive Kaluza-Klein multiplets. 
These reductions can, by their very construction, be lifted directly to the higher dimensional theory and are guaranteed to solve the higher dimensional equations of motion. The black holes we construct represent the gravitational backreaction of bound states of M2 and M5-branes wrapped on curved manifolds in much the same manner as was detailed by Maldacena and Nunez [@Maldacena:2000mw] for D3-branes in $AdS_5 \times S^5$ and M5-branes in $AdS_7 \times S^4$. To preserve supersymmetry, a certain combination of the gauge connections in the bulk is set equal to the spin connection, having the effect of twisting the worldvolume gauge theory in the manner of [@Witten:1988xj]. For D3-branes, for particular charges, the bulk system will flow to $AdS_3 \times \Sigma_g$ in the IR and the entire solution represents an asymptotically $AdS_5$ black string. The general regular flow preserves just 2 real supercharges and thus in IIB string theory it is $\frac{1}{16}$-BPS. Similarly, for the asymptotically $AdS_7$ black M5-brane solutions, depending on the charges, the IR geometry is $AdS_5\times \Sig_g$ and the dual $CFT_4$ may have $\N=2$ or $\N=1$ supersymmetry. These $\N=2$ SCFT’s and their generalizations have been of much recent interest [@Gaiotto2012h; @Gaiotto2009] and the $\N=1$ case has also been studied [@Benini:2009mz; @Bah:2012dg]. By embedding the $AdS_4$ black holes in M-theory we can see them as M2-branes wrapping a Riemann surface. For particular charges, the bulk system will flow to $AdS_2 \times \Sigma_g$ in the IR and represents a black hole with regular horizon. The original examples found in [@Caldarelli1999] can be reinterpreted in this way; that solution has four equal magnetic charges and can be embedded in $AdS_4 \times S^7$. The explicit analytic solution is known and it involves constant scalars and a hyperbolic horizon. 
A generalization of [@Maldacena:2000mw] to M2-branes wrapping $\Sig_g$ was performed in [@Gauntlett2002] where certain very symmetric twists were considered. Fully regular solutions for M2-branes wrapping a two-sphere with running scalars were finally found in [@Cacciatori:2009iz] in the form of $AdS_4$ black holes. It is note-worthy that of all these scenarios of branes wrapping Riemann surfaces, the complete analytic solution for general charges is known only for M2-branes on $\Sig_g$ with magnetic charges [@Cacciatori:2009iz]. One way to generalize these constructions of branes wrapped on $\Sig_g$ is to have more general transverse spaces. This is the focus of this article. For M5-branes one can orbifold $S^4$, while for D3-branes one can replace $S^5$ by an arbitrary $SE_5$ manifold, and a suitable consistent truncation on $T^{11}$ has indeed been constructed [@Bena:2010pr; @Cassani:2010na]. For M2-branes one can replace $S^7$ by a seven-dimensional Sasaki-Einstein manifold $SE_7$ and, as discussed above, the work of [@Cassani:2012pj] provides us with a rich set of consistent truncations to explore. Interestingly, in our analysis we find that there are no solutions for pure M2-brane backgrounds; there must be additional electric and magnetic charges corresponding to wrapped M2 and M5-branes on internal cycles. Asymptotically $AdS_4$ black holes with more general transverse space can be found in [@Donos:2008ug] and [@Donos2012d] where the solutions were studied directly in M-theory. These include the M-theory lift of the solutions we give in Sections \[sec:Q111Simp\] and \[numericalQ111\]. The BPS black holes we construct in this paper are asymptotically $AdS_4$ and as such they are states in particular (deformed) three-dimensional superconformal field theories on $S^2\times \mathbb{R}$. The solution in [@Cacciatori:2009iz] can be considered as a state in the twisted ABJM theory [@Aharony:2008ug]. 
The solutions we have found in this paper can be seen as states in (twisted and deformed) three dimensional Chern-Simons matter theory dual to the M-theory compactifications of homogeneous Sasaki-Einstein manifolds[^3]. One feature of these theories compared to ABJM is the presence of many baryonic symmetries that couple to the vector multiplets arising from non trivial two-cycles in the Sasaki-Einstein manifold. In terms of the worldvolume theory, the black holes considered in this paper are then electrically charged states of a Chern-Simons matter theory in a monopole background for $U(1)_R$ symmetry and other global symmetries, including the baryonic ones[^4]. Gauged $\N=2$ supergravity with hypermultiplets is the generic low-energy theory arising from a Kaluza-Klein reduction of string/M-theory on a flux background. The hypermultiplet scalars interact with the vector-multiplet scalars through the scalar potential: around a generic $AdS_4$ vacuum the eigenmodes mix the hypers and vectors. In the models we study, we employ a particular simplification on the hypermultiplet scalar manifold and find solutions where only one real hypermultiplet scalar has a non-trivial profile. Given that the simplification is so severe it is quite a triumph that solutions exist within this ansatz. It would be interesting to understand if this represents a general feature of black holes in gauged supergravity.\ The paper is organized as follows. In Section 2 we summarize the ansatz we use and the resulting BPS equations for an arbitrary electrically gauged $\N=2$ supergravity theory. The restriction of the flow equations to the horizon produces gauged supergravity analogues of the attractor equations. In Section 3 we describe the explicit supergravity models we consider. A key step is that we use a symplectic rotation to a frame where the gauging parameters are purely electric so that we can use the supersymmetry variations at our disposal. 
In Section 4 we study horizon geometries of the form $AdS_2\times \Sig_g$ where $g\neq 1$. We find a four parameter solution space for $Q^{111}$ and the solution spaces for all the other models are truncations of this space. In Section 5 we numerically construct black hole solutions for $Q^{111}$ and for $M^{111}$. The former solution is a gauged supergravity reproduction of the solution found in [@Donos2012d] and is distinguished in the space of all solutions by certain simplifications. For this solution, the phase of the four dimensional spinor is constant and in addition the massive vector field vanishes. The solution which we construct in $M^{111}$ turns out to be considerably more involved to compute numerically and has all fields of the theory running. In this sense we believe it to be representative of the full solution space in $Q^{111}$. The Black Hole Ansatz ===================== We want to study static supersymmetric asymptotically $AdS_4$ black holes in four-dimensional $\mathcal{N}=2$ gauged supergravity. The standard conventions and notations for $\mathcal{N}=2$ gauged supergravity [@Andrianopoli:1996vr; @Andrianopoli:1996cm] are briefly reviewed in Appendix \[gsugra\]. Being supersymmetric, these black holes can be found by solving the supersymmetry variations together with the Maxwell equations. In this section we give the ansatz for the metric and the gauge fields, and a simplified form of the SUSY variations we will study in the rest of this paper. The complete SUSY variations are derived and discussed in Appendix \[sec:BPSEqs\]. The Ansatz {#sec:bhansatz} ---------- We will focus on asymptotically $AdS_4$ black holes with spherical ($AdS_2\times S^2$) or hyperbolic ($AdS_2\times \HH^2$) horizons. The modifications required to study $AdS_2\times \Sig_g$ horizons, where $\Sigma_g$ is a Riemann surface of genus $g$, are discussed at the end of Section \[sec:BPSflow\]. 
The ansatz for the metric and gauge fields is \[ansatz\] $$\begin{aligned}
ds^2&=& e^{2U} dt^2- e^{-2U} dr^2- e^{2(V-U)} \big(d\tha^2+F(\tha)^2\, d\phi^2\big) \ \[metAnsatz\]\\
A^\Lam&=& \tq^\Lam(r)\, dt- p^\Lam(r)\, F'(\tha)\, d\phi ,\end{aligned}$$ with $$F(\tha)=\begin{cases} \sin\tha &:\ S^2\ (\kappa=1)\\ \sinh\tha &:\ \HH^2\ (\kappa=-1)\,. \end{cases}$$ The electric and magnetic charges are $$\begin{aligned}
p^\Lam&=& \frac{1}{4\pi}\int_{S^2} F^\Lam \ \[elinv\] ,\\
q_\Lam&\equiv& \frac{1}{4\pi}\int_{S^2} G_\Lam= -e^{2(V-U)}\, \cI_{\Lam\Sig}\, \tq'^{\,\Sig}+\cR_{\Lam\Sig}\, p^\Sig , \ \[maginv\]\end{aligned}$$ where $G_\Lam$ is the symplectic-dual gauge field strength $$G_\Lam =\cR_{\Lam\Sig}\, F^\Sig-\cI_{\Lam\Sig} *F^\Sig ,$$ with $\cR_{\Lam\Sig}$ and $\cI_{\Lam\Sig}$ the real and imaginary parts of the period matrix. In addition, we assume that all scalars in the theory, the fields $z^i$ from the $n_v$ vector multiplets and $q^u$ from the $n_h$ hypermultiplets, are functions of the radial coordinate $r$ only. Moreover, we will restrict our analysis to Abelian gaugings of the hypermultiplet moduli space and assume that the gauging is purely electric. As discussed in [@deWit:2005ub], for Abelian gauge groups one can always find a symplectic frame where this is true. The BPS Flow Equations {#sec:BPSflow} ---------------------- In Appendix \[sec:BPSEqs\], we derive the general form that the SUSY conditions take with our ansatz for the metric and gauge fields and the hypotheses discussed above for the gaugings. We will only consider spherical and hyperbolic horizons. Throughout the text, when looking for explicit black hole solutions we make one simplifying assumption, namely that the Killing prepotentials $P^x_\Lam$ of the hypermultiplet scalar manifold $\cM_h$ satisfy[^5] $$P^1_\Lam=P^2_\Lam=0. \ \[P120\]$$ The flow equations given in this section reduce to the equations in [@DallAgata2011; @Hristov:2010ri] when the hypermultiplets are truncated away and thus $P^3_\Lam$ are constant.\ The preserved supersymmetry is $$\eps_A= e^{U/2} e^{i\psi/2}\, \eps_{0A}$$ where $\eps_{0A}$ is an $SU(2)$-doublet of constant spinors which satisfy the following projections $$\begin{aligned}
\eps_{0A}&=&i\, \eps_{AB}\,\gamma^{0}\, \eps_0^{B},\\
\eps_{0A}&=& (\sigma^3)_A^{\ B}\, \gamma^{01}\, \eps_{0B} .\end{aligned}$$ As a result only $2$ of the 8 supersymmetries are preserved along any given flow. 
Imposing these two projections, the remaining content of the supersymmetry equations reduces to a set of bosonic BPS equations. Some are algebraic p\^P\_\^3&=&1 \[pP1\] ,\ p\^k\_\^u &=& 0 \[pk1\] ,\ \_r\^P\^3\_&=& e\^[2(U-V)]{}e\^[-i]{}\[Alg1\] ,\ \^P\^3\_&=& 2 e\^U \_r\^P\^3\_\[qP\] ,\ \^k\^u\_&=& 2 e\^U \_r\^k\^u\_\[qk\] , and some differential (e\^U)’&=& \_i\^P\^3\_ - e\^[2(U-V)]{}( e\^[-i]{} ) \[UEq\] ,\ V’ &=& 2 e\^[-U]{} \_i\^P\^3\_ \[VEq\] ,\ z’\^i &=& e\^[i]{}e\^[U-2V]{}g\^[i]{}D\_ i e\^[i]{}e\^[-U]{} g\^[i]{} [|f]{}\_\^ P\_\^3 \[tauEq\] ,\ q’\^u&=&2e\^[-U]{} h\^[uv]{} \_v \_i\^ P\^3\_\[qEq\] ,\ ’&=&-A\_re\^[-2U]{}\^P\_\^3 \[psiEq\] ,\ p’\^&=& 0 , where we have absorbed a phase in the definition of the symplectic sections \^=\_r\^+i \_i\^= e\^[-i]{} L\^ . $\cZ$ denotes the central charge &=&p\^M\_- q\_L\^\ &=& L\^\_ (e\^[2(V-U)]{} \^+ ip\^),\ D\_ &=&\^\_ \_ e\^[2(V-U)]{} ’\^ +ip\^ . Once $P^3_\Lam$ are fixed, the $\pm$-sign in the equations above can be absorbed by a redefinition $(p^\Lam,q_\Lam,e^U)\ra -(p^\Lam,q_\Lam,e^U)$.\ Since the gravitino and hypermultiplets are charged, there are standard Dirac quantization conditions which must hold in the vacua of the theory p\^P\^3\_&& ,\ p\^k\^u\_&& . We see from and that the BPS conditions select a particular integer quantization.\ Maxwell’s equation becomes q’\_=2 e\^[-2U]{} e\^[2(V-U)]{}h\_[uv]{} k\^u\_k\^v\_\^\[Max1\] . Notice that for the truncations of M-theory studied in this work, the non-trivial RHS will play a crucial role since massive vector fields do not carry conserved charges.\ Using standard special geometry relations, one can show that the variation for the vector multiplet scalars and the warp factor $U$, and , are equivalent to a pair of constraints for the sections $\cL^\Lam$ \_r e\^[U]{} \_r\^& =& ’\^ , \[delLr2\]\ \_r e\^[-U]{} \_i\^& =& \^ P\_\^3 2 e\^[-3U]{} \^P\^3\_\_r\^ . \[delLi2\] Importantly we can integrate to get \^=2 e\^U \_r\^+ c\^\[qLr\] for some constant $c^\Lam$. 
From and we see that this gauge invariance is constrained to satisfy $$c^\Lam P^3_\Lam=0,\qquad c^\Lam k^u_\Lam=0.$$ We note that due to the constraint on the sections \_ \^\^=-, and give $(2n_v+1)$-equations.\ One can show that the algebraic relation is an integral of motion for the rest of the system. Specifically, differentiating one finds a combination of the BPS equations plus Maxwell equations contracted with $\cL_i^\Lam$. One can solve for $\psi$ and find that it is the phase of a modified “central charge" $\hcZ$: &=& e\^[i]{}||,     =(e\^[2(U-V)]{}i L\^P\^3\_). Our analysis also applies to black holes with $AdS_2\times \Sig_g$ horizons, where $\Sig_g$ is a Riemann surface of genus $g\ge 0$. The case $g>1$ is trivially obtained by taking a quotient of $\HH^2$ by a discrete group, since all Riemann surfaces with $g > 1$ can be obtained in this way. Our system of BPS equations (\[pP1\]) - (\[psiEq\]) also applies to the case of flat or toroidal horizons ($g=1$) $$\begin{aligned}
ds^2&=& e^{2U} dt^2- e^{-2U} dr^2- e^{2(V-U)} (dx^2+ dy^2)\\
A^\Lam&=& \tq^\Lam(r)\, dt- p^\Lam(r)\, x\, dy ,\end{aligned}$$ with q\_&& -e\^[2(V-U)]{} \_ ’\^- \_ p\^ ,\ &=&L\^\_ (e\^[2(V-U)]{} \^- i p\^), provided we substitute the constraint (\[pP1\]) with $p^\Lam P_\Lam^3= 0$. We will not consider explicitly the case of flat horizons in this paper although they have attracted some recent interest [@Donos2012d]. $AdS_2\times S^2$ and $AdS_2\times \HH^2$ Fixed Point Equations {#sec:horizonEqs} --------------------------------------------------------------- At the horizon the scalars $(z^i,q^u)$ are constant, while the functions in the metric and gauge fields take the form $$e^U=\frac{r}{R_1},\qquad e^V=\frac{r\, R_2}{R_1},\qquad \tq^\Lam= r\, q_0^\Lam$$ with $q_0^\Lam$ constant, where $R_1$ and $R_2$ are the radii of $AdS_2$ and of $\Sig_g$ respectively. The BPS equations are of course much simpler, in particular they are all algebraic and there are additional superconformal symmetries. 
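To make the horizon limit explicit: assuming warp factors of the form $e^U = r/R_1$ and $e^V = r R_2/R_1$ (our reading of the fixed-point expressions, with $R_1$, $R_2$ the $AdS_2$ and $\Sig_g$ radii), the metric ansatz factorizes as

```latex
% Near-horizon limit of the metric ansatz:
%   e^{2U} = r^2/R_1^2 ,   e^{2(V-U)} = R_2^2 ,
% so that
\begin{aligned}
ds^2 &= \frac{r^2}{R_1^2}\,dt^2 - \frac{R_1^2}{r^2}\,dr^2
        - R_2^2\left(d\theta^2 + F(\theta)^2\,d\phi^2\right),
\end{aligned}
```

i.e. $AdS_2$ of radius $R_1$ times $S^2$ or $\HH^2$ of radius $R_2$, which is why the remaining BPS conditions at the fixed point are purely algebraic in the scalars and the charges.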
There are the two Dirac quantization conditions p\^P\_\^3 &=&1, \[pP\]\ p\^k\_\^u &=&0 , \[pk\] and give two constraints on the electric component of the gauge field \_0\^P\^x\_&=& 0 , \[tqP\]\ \_0\^k\^u\_&=& 0 \[tqk\] . The radii are given by and &=&2 \_i\^P\^3\_\[R1hor\] ,\ &=&- 2( e\^[-i]{} ) \[R2hor\] . In addition, the algebraic constraint becomes (e\^[-i]{} )=0 and the hyperino variation gives \_i\^k\^u\_=0.\[hyphor\] Finally, combining , and , we can express the charges in terms of the scalar fields p\^&=& - \_i\^R\_2\^2 \^ P\_\^3 \[phor\] ,\ q\_&=& - \_[i]{} R\_2\^2 \_ \^ P\^3\_ , \[qhor\] with $\cM_{i\,\Lam}=\Im (e^{-i\psi} M_\Lam)$. These are the gauged supergravity analogue of the [*attractor equations*]{}.\ It is of interest to solve explicitly for the spectrum of horizon geometries in any given gauged supergravity theory. In particular this should involve inverting and to express the scalar fields in terms of the charges. Even in the ungauged case, this is in general not possible analytically and the equations here are considerably more complicated. Nonetheless one can determine the dimension of the solution space and, for any particular set of charges, one can numerically solve the horizon equations to determine the value of the various scalars. In this way one can check regularity of the solutions. Consistent Truncations of M-theory {#sec:truncations} ================================== Having massaged the BPS equations into a neat set of bosonic equations we now turn to particular gauged supergravity theories in order to analyze the space of black hole solutions. We want to study models which have consistent lifts to M-theory and which have an $\cN=2$ $\ AdS_4$ vacuum somewhere in their field space, this limits our search quite severely. Two examples known to us are $\N=2$ truncations of the de-Wit/Nicolai $\N=8$ theory [@deWit:1981eq] and the truncation of M-theory on $SU(3)$-structure cosets [@Cassani:2012pj]. 
In this paper we will concentrate on some of the models constructed in [@Cassani:2012pj]. The ones of interest for us are listed in Table 1.\

\[tb1\]

  $M_7$                   $n_v:m^2=0$   $n_v:m^2\neq0$   $n_h$
  ----------------------- ------------- ---------------- -------
  $Q^{111}$               2             1                1
  $M^{111}$               1             1                1
  $N^{11}$                1             2                2
  $\frac{Sp(2)}{Sp(1)}$   0             2                2
  $\frac{SU(4)}{SU(3)}$   0             1                1

  : The consistent truncations on $SU(3)$-structure cosets being considered in this work. $M_7$ is the 7-manifold, the second column is the number of massless vector multiplets at the $AdS_4$ vacuum, the third column is the number of massive vector multiplets and the final column is the number of hypermultiplets.

For each of these models there exists a consistent truncation to an $\N=2$ gauged supergravity with $n_v$ vector multiplets and $n_h$ hypermultiplets. We summarize here some of the features of these models, referring to [@Cassani:2012pj] for a more detailed discussion. We denote the vector multiplet scalars $$z^i = b^i + i v^i\,,\qquad i = 1, \ldots, n_v$$ where the number of vector multiplets $n_v$ can vary from 0 to 3. Notice that all models contain some massive vector multiplets. For the hypermultiplets, we use the notation $(z^i, a, \phi, \xi^A, \txi_A)$ where $a, \phi$ belong to the universal hypermultiplet. This is motivated by the structure of the quaternionic moduli spaces in these models, which can be seen as images of the c-map. 
The metric on quaternionic Kähler manifolds of this kind can be written in the form [@Ferrara:1989ik] ds\_[QK]{}\^2=d\^2 +g\_[i]{} dz\^i d\^ +e\^[4]{}da+\^T d \^2 -e\^[2]{}d\^T d, where $\{z^i,\zbar^{\jbar}|i=1,\ldots,n_h-1\}$ are special coordinates on the special Kähler manifold $\cM_c$ and $\{ \xi^A,\txi_A| A=1,\ldots,n_h\}$ form the symplectic vector $\xi^T=(\xi^A,\txi_A)$ and are coordinates on the axionic fibers.\ All these models, and more generally of $\mathcal{N}=2$ actions obtained from compactifications, have a cubic prepotential for the vector multiplet scalars and both magnetic and electric gaugings of abelian isometries of the hypermultiplet scalar manifold. In ungauged supergravity the vector multiplet sector is invariant under $Sp(2n_v+2,\RR)$. The gauging typically breaks this invariance, and we can use such an action to find a symplectic frame where the gauging is purely electric[^6]. Since $Sp(2n_v+2,\RR)$ acts non trivially on the prepotential $\mathcal{F}$, the rotated models we study will have a different prepotential than the original ones in [@Cassani:2012pj] . The Gaugings {#sec:gaugings} ------------ In the models we consider, the symmetries of the hypermultiplet moduli space that are gauged are non compact shifts of the axionic fibers $\xi_A$ and $U(1)$ rotations of the special Kähler basis $z^i$. The corresponding Killing vectors are the Heisenberg vector fields: h\^A&=& \_[\_A]{} + \^A \_a,    h\_A= \_[\^A]{} - \_A \_a,     h=\_a which satisfy $[h_A,h^B]=\delta_A^B h$, as well as f\^A&=&\_A\_[\^A]{}- \^A\_[\_A]{},     ([indices not summed]{})\ g&=&\_[z]{}+ z \_ . For some purposes it is convenient to work in homogeneous coordinates on $\cM_c$ = \^A\ \_A Z = Z\^A\ Z\_A with $z^i = Z^i/Z^0$ and to define k\_=(Z)\^A +()\^A +()\^A +()\_A , where $\UU$ is a $2n_h\times 2n_h$ matrix of gauging parameters. In special coordinates $k_\UU$ is a sum of the Killing vectors $f^A$ and $g$. 
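The Heisenberg algebra quoted here, $[h_A,h^B]=\delta_A^B\, h$, can be checked directly. The sketch below assumes the normalization $h^A=\partial_{\txi_A}+\tfrac12\xi^A\partial_a$, $h_A=\partial_{\xi^A}-\tfrac12\txi_A\partial_a$, $h=\partial_a$ (the factors of $\tfrac12$ are our assumption; they are what make the commutator close on $h$ rather than $2h$) and computes the Lie bracket numerically for $n_h=1$:

```python
# Numerical Lie-bracket check, coordinates x = (xi, txi, a), n_h = 1.
# Vector fields (the 1/2 normalization is an assumption, see lead-in):
#   h^1 = d/d(txi) + (1/2) xi d/da ,   h_1 = d/d(xi) - (1/2) txi d/da

def h_up(x):            # components of h^1 in the (xi, txi, a) basis
    xi, txi, a = x
    return [0.0, 1.0, 0.5 * xi]

def h_dn(x):            # components of h_1
    xi, txi, a = x
    return [1.0, 0.0, -0.5 * txi]

def bracket(X, Y, x, h=1e-6):
    # [X, Y]^mu = X^nu d_nu Y^mu - Y^nu d_nu X^mu, by central differences
    out = []
    for mu in range(len(x)):
        s = 0.0
        for nu in range(len(x)):
            xp, xm = list(x), list(x)
            xp[nu] += h
            xm[nu] -= h
            s += X(x)[nu] * (Y(xp)[mu] - Y(xm)[mu]) / (2 * h)
            s -= Y(x)[nu] * (X(xp)[mu] - X(xm)[mu]) / (2 * h)
        out.append(s)
    return out

pt = [0.3, -1.1, 0.7]   # arbitrary point
comm = bracket(h_dn, h_up, pt)
print(all(abs(c - e) < 1e-8 for c, e in zip(comm, [0.0, 0.0, 1.0])))  # True: [h_1, h^1] = h
```

The bracket has vanishing $\xi$ and $\txi$ components and unit $a$ component at any point, i.e. it equals $h=\partial_a$ as stated.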
A general electric Killing vector field of the quaternionic Kähler manifold is given by k\_=k\^u\_=\_[0]{} k\_ +Q\_[A]{} h\^A+Q\_\^[ A]{} h\_A -e\_ h , where $Q_{\Lam A}$ and $Q_\Lam^A$ are also matrices of gauge parameters, while the magnetic gaugings are parameterized by [@Cassani:2012pj] \^=-m\^h. For these models, the resulting Killing prepotentials can be worked out using the property \[Pkw\] P\^x\_=k\^u\_\^x\_u \^[x ]{} = \^[u ]{} \^x\_u , where $\om^x_u$ is the spin connection on the quaternionic Kähler manifold [@Ferrara:1989ik] \^1+i \^2&=& e\^[+ K\_[c]{}/2]{} Z\^T d,\ \^3 &=& da + \^T d- 2 e\^[K\_c]{} Z\^A\_[AB]{} d\^B . The Killing vector $k_\UU$ may contribute a constant shift to $P^3_0$, and this is indeed the case for the examples below. As already mentioned, we will work in a rotated frame where all gaugings are electric. The form of the Killing vectors and prepotentials is the same, with the only difference that now $\tk^{\Lam}=-m^\Lam h$ and $ \tilde{P}^{x \, \Lam}$ will add an extra contribution to the electric ones. The Models ---------- The models which we will study are summarized in Table 1. They all contain an $AdS_4$ vacuum with $\mathcal{N}=2$ supersymmetry. The vacuum corresponds to the ansatz (\[ansatz\]) with warp factors e\^U=,    e\^[V]{}=, and no electric and magnetic charges p\^= q\_= 0 . The $AdS_4$ radius and the non trivial scalar fields are R=\^[3/4]{},      v\_i= ,     e\^[-2]{} = .\[AdS4Sol\] This is not an exact solution of the flow equations in Section \[sec:BPSflow\] which require a non-zero magnetic charge to satisfy . The black holes of this paper will asymptotically approach $AdS_4$ in the UV but will differ by non-normalizable terms corresponding to the magnetic charge. The corresponding asymptotic behavior has been dubbed [*magnetic*]{} $AdS$ in [@Hristov:2011ye]. ### $Q^{111}$ The scalar manifolds for the $Q^{111}$ truncation are \_v=\^3,   \_h= \_[2,1]{} = . 
The metric on $\cM_{2,1}$ is ds\^2\_[2,1]{}=d\^2 +e\^[4]{} da+(\^0 d\_0-\_0 d\^0) \^2 + e\^[2]{}(d\^0)\^2 + d\_0\^2 , and the special Kähler base $\cM_c$ is trivial. Nonetheless we can formally use the prepotential and special coordinates on $\cM_c$ =,    Z\^0=1 to construct the spin connection and Killing prepotentials.\ The natural duality frame which arises upon reduction has a cubic prepotential[^7] $$F=-\frac{X^1 X^2 X^3}{X^0} \ \[FQ111\] ,$$ with sections $X^\Lam = (1,z^i)$ and both electric and magnetic gaugings $$\UU=\begin{pmatrix} 0 & 4\\ -4 & 0\end{pmatrix},\qquad e_0>0,\qquad m^1=m^2=m^3=-2. \ \[gaugeQ111\]$$ Using an element $\cS_0\in Sp(8,\ZZ)$ we rotate to a frame where the gaugings are purely electric. Explicitly we have $$\cS_0=\begin{pmatrix}A & B\\ C& D\end{pmatrix},\quad A=D= \mathrm{diag}(1,0,0,0),\quad B=-C= \mathrm{diag}(0,-1,-1,-1) \ \[S0rotation\]$$ and the new gaugings are $$\UU=\begin{pmatrix} 0 & 4\\ -4 & 0\end{pmatrix},\qquad e_0>0,\qquad e_1=e_2=e_3=-2. \ \[gaugeQ11elec\]$$ The Freund-Rubin parameter $e_0>0$ is unfixed. In this duality frame the special geometry data are $$\begin{aligned}
F&=&2\sqrt{X^0 X^1 X^2 X^3},\\
X^\Lam&=& (1,\,z^2 z^3,\,z^1 z^3,\, z^1 z^2),\\
F_\Lam&=& (z^1 z^2 z^3,\,z^1,\,z^2,\,z^3).\end{aligned}$$ ### $M^{111}$ The consistent truncation on $M^{111}$ has \_v=\^2,   \_h= \_[2,1]{} and is obtained from the $Q^{111}$ reduction by truncating a single massless vector multiplet. This amounts to setting $$v_3=v_1,\quad b_3=b_1,\quad A^3=A^1. \ \[M110trunc\]$$ ### $N^{11}$ The consistent truncation of M-theory on $N^{11}$ has one massless and two massive vector multiplets, along with two hypermultiplets. The scalar manifolds are \_v=\^3,   \_h= \_[4,2]{} =. The metric on $\cM_{2,4}$ is ds\_[4,2]{}\^2&=&d\^2 + +e\^[-2]{} d\^2+e\^[4]{} da+ (\^0 d\_0-\_0 d\^0+\^1 d\_1-\_1 d\^1)\ &&+e\^[2+]{}d\^0+ d\^1 \^2+e\^[2+]{}d\_0- d\_1 \^2\ && +e\^[2-]{} d\^0- d\^1 + (d\_0- d\_1) \^2\ && +e\^[2-]{} d\_0+ d\_1- ( d\^0+ d\^1 ) \^2 , \[SO42met\] and the special coordinate $z$ on the base is given by e\^+i = ,          d\^2 + e\^[-2]{} d\^2= . 
This differs slightly from the special coordinate used in [@Cassani:2012pj], where the metric is taken on the upper half plane instead of the disk. The prepotential and special coordinates on $\cM_c$ are given by =,    $Z^A=(1,z)$. The cubic prepotential on $\cM_v$ obtained from dimensional reduction is the same as for $Q^{111}$, , however the models differ because of additional gaugings $$Q_1^{\ 1}=Q_2^{\ 1}=2,\qquad Q_3^{\ 1}=-4. \ \[QelecN11\]$$ The duality rotation we used for the $Q^{111}$ model to make the gaugings electric would not work here since it would then make $Q_\Lam^{\ 1}$ magnetic. However, using the fact that $m^\Lam$ and $Q_\Lam^{\ 1}$ are orthogonal, $m^\Lam Q_\Lam^{\ 1}=0$, we can find a duality frame where all parameters are electric and $Q_{\Lam}^{\ A}$ is unchanged. Explicitly we use \_1= \^[-1]{} where &=& 1 & 0& 0& 0\ 0 & c\_& s\_&0\ 0& -s\_& c\_&0\ 0 & 0&0 & 1 1 & 0& 0& 0\ 0 & 1& 0& 0\ 0 & 0& c\_& s\_\ 0& 0 & -s\_& c\_,     =/4,  =,\ &=& \^[-1]{} & 0\ 0& ,\ &=& A & B\ C& D ,   A=D= \mathrm{diag}(1,0,1,1), B=-C=\mathrm{diag}(0,-1,0,0). The Killing vectors are then given by and . The prepotential in this frame is rather complicated in terms of the new sections, which are in turn given as a function of the scalar fields $z^i$ by $$\begin{aligned}
X^\Lam&=&(3,\ 2z^1-z^2-z^3+z^{123},\ 2z^2-z^1-z^3+z^{123},\ 2z^3-z^1-z^2+z^{123}),\\
z^{123}&=& z^1 z^2 +z^2 z^3 + z^3 z^1.\end{aligned}$$ ### Squashed $S^7$ $\sim\frac{Sp(2)}{Sp(1)}$ This is obtained from the $N^{11}$ model by eliminating the massless vector multiplet. Explicitly, this is done by setting $$v_2=v_1,\quad b_2=b_1,\quad A^2=A^1.$$ In addition to the $\N=2$, round $S^7$ solution this model contains in its field space the squashed $S^7$ solution, although this vacuum has only $\cN=1$ supersymmetry. Thus flows from this solution lie outside the ansatz employed in this work. ### Universal $\frac{SU(4)}{SU(3)}$ Truncation This model was first considered in [@Gauntlett:2009zw]. 
It contains just one massive vector multiplet and one hypermultiplet, and can be obtained from the $M^{111}$ truncation by setting $$v_2=v_1,\quad b_2=b_1,\quad A^2=A^1.$$ Horizon Geometries {#sec:hyperhorizons} ================== We now apply the horizon equations of Section \[sec:horizonEqs\] to the models of Section \[sec:truncations\]. We find that there is a four-dimensional solution space within the $Q^{111}$ model and that this governs all the other models, even though not all the other models are truncations of $Q^{111}$. The reason is that the extra gaugings present in the $N^{11}$ and squashed $S^{7}$ models can be reinterpreted as simple algebraic constraints on our $Q^{111}$ solution space. In the following, we will use the minus sign in and subsequent equations. We also recall that $\kappa =1$ refers to $AdS_2\times S^2$ and $\kappa =-1$ to $AdS_2\times \HH^2$ horizons. M-theory Interpretation ----------------------- The charges of the four-dimensional supergravity theory have a clear interpretation in the eleven-dimensional theory. This interpretation is different from how the charges lift in the theory used in [@Cacciatori:2009iz], which we now review. In the consistent truncation of M-theory on $S^7$ [@deWit:1984nz; @Nicolai:2011cy] the $SO(8)$ vector fields lift to Kaluza-Klein metric modes in eleven dimensions. In the further truncation of [@Cvetic1999b; @Duff:1999gh] only the four-dimensional Cartan subgroup of $SO(8)$ is retained; the magnetic charges of the four vector fields in [@Cacciatori:2009iz] lift to the Chern numbers of four $U(1)$-bundles over $\Sig_g$. One can interpret the resulting $AdS_4$ black holes as the near horizon limit of a stack of M2-branes wrapping $\Sig_g\subset X_5$, where $X_5$ is a particular non-compact Calabi-Yau five-manifold, constructed as four line bundles over $\Sig_g$. A similar description holds for wrapped D3-branes and wrapped M5-branes in the spirit of [@Maldacena:2000mw]. 
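Returning briefly to the $Q^{111}$ special geometry data of Section \[sec:truncations\], two pieces of the frame rotation can be checked mechanically. The sketch below assumes the electric-frame prepotential $F=2\sqrt{X^0X^1X^2X^3}$ (our reading of the truncation data, consistent with the listed sections and $F_\Lam$) and checks (i) by finite differences that $F_\Lam=\partial F/\partial X^\Lam$ on $X^\Lam=(1,z^2z^3,z^1z^3,z^1z^2)$ gives $(z^1z^2z^3,z^1,z^2,z^3)$, and (ii) that $\cS_0$ with $A=D=\mathrm{diag}(1,0,0,0)$, $B=-C=\mathrm{diag}(0,-1,-1,-1)$ is symplectic, which for index-wise diagonal blocks reduces to each $2\times2$ block having unit determinant:

```python
import math

# (i) Assumed electric-frame prepotential F = 2*sqrt(X0*X1*X2*X3); its gradient
# evaluated on X^Lam = (1, z2*z3, z1*z3, z1*z2) should be (z1*z2*z3, z1, z2, z3).

def F(X):
    return 2.0 * math.sqrt(X[0] * X[1] * X[2] * X[3])

def grad(X, h=1e-7):
    out = []
    for i in range(4):
        Xp, Xm = list(X), list(X)
        Xp[i] += h
        Xm[i] -= h
        out.append((F(Xp) - F(Xm)) / (2.0 * h))
    return out

z1, z2, z3 = 0.7, 1.3, 2.1  # arbitrary positive test point
X = [1.0, z2 * z3, z1 * z3, z1 * z2]
expected = [z1 * z2 * z3, z1, z2, z3]
print(all(abs(g - e) < 1e-5 for g, e in zip(grad(X), expected)))  # True

# (ii) S0 = [[A, B], [C, D]] with A = D = diag(1,0,0,0), B = -C = diag(0,-1,-1,-1).
# With diagonal blocks, S0^T.Omega.S0 = Omega <=> a_i*d_i - b_i*c_i = 1 for each i.
a = d = [1, 0, 0, 0]
b = [0, -1, -1, -1]
c = [0, 1, 1, 1]
print(all(a[i] * d[i] - b[i] * c[i] == 1 for i in range(4)))  # True
```

The index-0 block is the identity while the index-1,2,3 blocks are the standard $\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ rotation exchanging $X^i$ and $F_i$, which is what turns the cubic prepotential into the square-root form.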
The general magnetic charge configurations have been analyzed recently for D3-branes in [@Benini2013a] and M5-branes in [@Bah:2012dg]. Both of these works have computed the field theory central charge and matched it to the gravitational calculation [^8]. This alone provides convincing evidence that the holographic dictionary works for general twists. There has not yet been any such computation performed from the quantum mechanics dual to the solutions of [@Cacciatori:2009iz], but, as long as the charges are subject to appropriate quantization so as to make $X_5$ well defined, one might imagine there exist well-defined quantum mechanical duals of these solutions. Now returning to the case at hand, the eleven-dimensional metric from which the four-dimensional theory is obtained is [@Cassani:2012pj] $$ds^2_{11} = e^{2V}\, \cK^{-1}\, ds^2_4 + e^{-V}\, ds^2_{B_6} + e^{2V} \left(\tha + A^0\right)^2\,,$$ where $B_6$ is a Kähler-Einstein six-manifold, $\tha$ is the Sasaki fiber, $V$ is a certain combination of scalar fields (not to be confused with $V$ in ), $\cK=\coeff{1}{8}e^{-K}$ with $K$ the Kähler potential, and $A^0$ is the four-dimensional graviphoton[^9]. In addition, vector fields of massless vector multiplets come from the three-form potential expanded in terms of cohomologically non-trivial two-forms $\om_i$, $$C^{(3)} \sim A^i \wedge \om_i\,.$$ The truncations discussed above come from reductions with additional, cohomologically trivial two-forms, which give rise to the vector fields of massive vector multiplets. This is an important issue for our black hole solutions since only massless vector fields carry conserved charges.\
The solutions described in this section carry both electric and magnetic charges.
The graviphoton will have magnetic charge $p^0$ given by , which means the eleven-dimensional geometry is really of the form $$AdS_2 \times M_9\,,$$ where $M_9$ is a nine-manifold which can be described as a $U(1)$ fibration. The electric potential $\tq^0$ will vanish, from which we learn that this $U(1)$ is not fibered over $AdS_2$, or in other words the M2-branes that wrap $\Sig_g$ do not have momentum along this $U(1)$. In addition the charges that lift to $G^{(4)}$ correspond to the backreaction of wrapped M2 and M5-branes on $H_2(SE_7,\ZZ)$ and $H_5(SE_7,\ZZ)$. We can check that the Chern number of this $U(1)$ fibration is quantized as follows. First we have $$\tha + A^0 = d\psi + \eta + A^0\,,$$ where $\psi$ has periodicity $2\pi \ell$ for some $\ell\in \RR$ and $\eta$ is a Kähler potential one-form on $B_6$ which satisfies $d\eta=2J$. Such a fibration over a sphere is well defined if n= .\[nZZ\] Recalling and preempting , we see that n=p\^0 =-. For the $SE_7$ admitting spherical horizons used in this paper one has Q\^[111]{},N\^[11]{}:&&=,\ M\^[111]{}:&&= and is satisfied. $Q^{111}$ {#sec:Q111Horizons} --------- To describe the solution space of $AdS_2\times S^2$ or $AdS_2\times \HH^2$ solutions, we will exploit the fact that the gaugings are symmetric in the indices $i=1,2,3$. We can therefore express the solution in terms of polynomials invariant under the diagonal action of the symmetric group $\cS_3$[^10] $$\langle v_{1}^{i_1} v_2^{i_2} v_3^{i_3}\, b_{1}^{i_1} b_2^{i_2} b_3^{i_3} \rangle = \sum_{\sigma\in S_3} v_{\sigma(1)}^{i_1} v_{\sigma(2)}^{i_2} v_{\sigma(3)}^{i_3}\, b_{\sigma(1)}^{i_1} b_{\sigma(2)}^{i_2} b_{\sigma(3)}^{i_3}\,.$$ First we enforce , which gives $$\xi^0=0\,, \qquad \tilde\xi_0=0\,.$$ The Killing prepotentials are then given by P\^3\_=(4-e\^[2]{}e\_0, -e\^[2]{},-e\^[2]{},-e\^[2]{}) and the non-vanishing components of the Killing vectors by k\^a\_=-(e\_0, 2,2,2). Solving and we get two constraints on the magnetic charges p\^0=- ,   p\^1+p\^2+p\^3=- .
\[pLamReps\] We find that the phase of the spinor is fixed = , while and are redundant (v\_1b\_2)=0.\[constr1\] Then from we get (v\_1v\_2)-(b\_1b\_2) =e\_0.\[constr2\] We can of course break the symmetry and solve the equations above for, for instance, $(b_3,v_3)$ v\_3&=& ,\ b\_3&=&- . Using we find the radius of $AdS_2$ to be R\_1\^2&=& . The algebraic constraint is nontrivial and can be used to solve for $q_0$ in terms of $(p^\Lam,q_{i},v_j,b_k)$. Using the value of $p^0$ given in we can solve and and find e\^[2]{}&=& ,\ R\_2\^2&=& R\_1\^2 1- ,\ &&\ q\_0&=& ,\ q\_[0n]{}&=& -(v\_1\^3v\_3 b\_1\^3)+(v\_1v\_3\^3b\_1\^2b\_2) - (v\_1v\_2v\_3)\^2 (b\_1)-b\_1b\_2b\_3(v\_1\^2 b\_2\^2)+(v\_1\^2 b\_2b\_3)\ &&-v\_1v\_2v\_3 (v\_1 b\_1 b\_2\^2) -2 (v\_1 b\_2\^2 b\_3) -2 (v\_1\^2 v\_2 b\_3) ,\ &&\ p\^1&=& ,\ p\^1\_n&=& 2 v\_1\^2v\_2v\_3(v\_2\^2+v\_3\^2+v\_2v\_3) ,\ && +v\_2 v\_3 (v\_2\^2+v\_3\^2) b\_1\^2 -2 v\_1 v\_2 v\_3(v\_2+v\_3) b\_2 b\_3 +2(v\_2\^2+v\_3\^2)b\_1\^2 b\_2 b\_3 +2 v\_1\^2 b\_2\^2 b\_3\^2\ &&-2 v\_1 v\_3\^2(v\_2+v\_3) b\_1 b\_3 +(-v\_1\^2 v\_2+2 v\_1 v\_2v\_3 + (2 v\_1+v\_2) v\_3\^2)v\_3 b\_2\^2 + (23)\ &&+ 2 v\_3\^2 b\_1b\_2\^2 b\_3 + (v\_1\^2+v\_3\^2) b\_2\^3 b\_3 + (23) ,\ &&\ q\_1&=& ,\ q\_[1n]{}&=& -v\_1v\_2v\_3 (v\_1) b\_1 -v\_1\^2b\_2 (v\_1v\_2) +(23)\ &&+2 v\_1\^2 b\_1b\_2b\_3 +v\_2\^2b\_1\^3 + 2 v\_3\^2 b\_1\^2 b\_2 + (v\_1\^2+v\_3\^2) b\_1 b\_2\^2 +(23) , where = v\_1v\_2v\_3 (v\_1) - (v\_1\^2 b\_2\^2)-( v\_1\^2 b\_2b\_2). The charges $(p^2,p^3,q_2,q_3)$ are related to $(p^1,q_1)$ by symmetry of the $i=1,2,3$ indices. The general solution space has been parameterized by $(v_i,b_j)$ subject to the two constraints and , leaving a four-dimensional space. From these formulae, one can easily establish numerically regions where the horizon geometry is regular. A key step omitted here is to invert these formulae and express the scalars $(b_i,v_j)$ in terms of the charges $(p^\Lam,q_\Lam)$.
This would allow one to express the entropy and the effective $AdS_2$ radius in terms of the charges [@wip]. ### A $Q^{111}$ simplification {#sec:Q111Simp} The space of solutions in the $Q^{111}$ model simplifies considerably if one enforces a certain symmetry $$\label{Q111Simp} p^1=p^2\,, \qquad q_1=-q_2\,.$$ One then finds a two-dimensional space of solutions, part of which was found in [@Donos:2008ug; @Donos2012d] v\_2&=&v\_1,   b\_3=0,   b\_2=-b\_1\ b\_1&=& \_1\ e\^[2]{} &=&\ R\_1&=&\ R\_2\^2&=& R\_1\^2\ q\_0&=& 0\ q\_1&=&-\_1 \[q1Q111Simp\]\ q\_3&=& 0\ p\^0&=&-\ p\^1&=&- \[p1Q111Simp\]\ p\^3&=& -2p\^1 , where $\eps_1=\pm$ is a choice of sign. One cannot analytically invert and to give $(v_1,v_3)$ in terms of $(p^1,q_1)$, but one can numerically map the space of charges for which regular solutions exist. $M^{111}$ {#sec:M110Solutions} --------- The truncation to the $M^{111}$ model does not respect the simplification . The general solution space is two-dimensional b\_3&=&b\_1,   v\_3=v\_1,   p\^3=p\^1,   q\_3=q\_1,\ b\_1&=& \_2 ,\ b\_2&=&- ,\ e\^[2]{}&=& ,\ R\_1&=& ,\ R\_2\^2 &=& R\_1\^2 , p\^0&=&-, \[p0M110\]\ p\^2&=&-2p\^1 ,\ p\^1&=&- ,\ q\_0&=&-\ && ,\ &&\ q\_1&=&- ,\ q\_2&=& - , \[q3M110\] where $\eps_2$ is a choice of sign. $N^{11}$ -------- Setting $P^1_\Lam=P^2_\Lam=0$ we get \^A=\_A=0,    z\^1=\^1=0 , and so the only remaining hyper-scalars are $(\phi,a)$. With this simplification the Killing prepotentials are the same as for $Q^{111}$ P\^3\_=(4-e\^[2]{}e\_0, -e\^[2]{},-e\^[2]{},-e\^[2]{}) , while the Killing vectors have an additional component in the $\xi^1$-direction: k\^a\_&=&-(e\_0, 2,2,2),\ k\^[\^1]{}\_&=& (0,-2,-2,4).
From this one can deduce that the spectrum of horizon solutions will be obtained from that of $Q^{111}$ by imposing two additional constraints $$p^\Lam k^{\xi^1}_\Lam = 0\,, \qquad \tq^\Lam k^{\xi^1}_\Lam = 0\,,$$ which amount to $$\label{N11constraint1} p^3 = \frac{1}{2}\left(p^1+p^2 \right), $$ $$\label{N11constraint2} v_3 = \frac{1}{2}\left(v_1+v_2 \right).$$ One can then deduce that the $AdS_2\times \Sig_g$ solution space in the $N^{11}$ model is a two-dimensional restriction of the four-dimensional space from the $Q^{111}$ model. While can easily be performed on the general solution space, it is somewhat more difficult to enforce since the charges are given in terms of the scalars. We can display explicitly a one-dimensional subspace of the $N^{11}$ family by further setting $v_3=v_1$: v\_1&=& ,\ b\_1&=& -+1 ,\ b\_3&=& --+1 ,\ R\_1\^2&=& v\_1\^[3/2]{} ,\ R\_2\^2&=& - ,\ e\^[2]{}&=& ,\ p\^1&=& ,\ p\^2&=& ,\ p\^3&=& ,\ q\_0&=& - ,\ q\_1&=& - ,\ q\_2&=&- ,\ q\_3&=& - . $\frac{Sp(2)}{Sp(1)}$ --------------------- The truncation of M-theory on $\frac{Sp(2)}{Sp(1)}$ is obtained from the $N^{11}$ truncation by removing a massless vector multiplet. Explicitly, this is done by setting $$v_2=v_1\,, \qquad b_2=b_1\,, \qquad A^2=A^1\,.$$ Alternatively one can set $$p^2=p^1\,, \qquad v_2=v_1$$ on the two-dimensional $M^{111}$ solution space of Section \[sec:M110Solutions\]. This leaves a unique solution, the universal solution of $\frac{SU(4)}{SU(3)}$ which we describe next. $\frac{SU(4)}{SU(3)}$ {#sec:SU4SU3Sols} --------------------- This solution is unique and requires $\kappa=-1$. Therefore it only exists for hyperbolic horizons: v\_1&=& ,\ b\_1&=& 0,\ R\_1&=& \^[3/4]{},\ R\_2&=& \^[3/4]{}. It is connected to the central $AdS_4$ vacuum by a flow with constant scalars, which is known analytically [@Caldarelli1999]. Black Hole solutions: numerical analysis {#numerical} ======================================== Spherically symmetric, asymptotically $AdS$ static black holes can be seen as solutions interpolating between $AdS_4$ and $AdS_2\times S^2$.
We have seen that $AdS_2\times S^2$ vacua are quite generic in the consistent truncations of M-theory on Sasaki-Einstein spaces, and we may expect that they arise as horizons of static black holes. In this section we will show that this is the case in various examples, and we expect it to be true in general. The system of BPS equations (\[pP1\]) - (\[psiEq\]) can be consistently truncated to the locus $$\label{hyperlocus} \xi^A =0\, , \qquad \tilde\xi_A=0 \, ;$$ this condition is satisfied at the fixed points and enforces (\[P120\]) along the flow. The only running hyperscalar is the dilaton $\phi$. The solutions of (\[pP1\]) - (\[psiEq\]) will have a non-trivial profile for the dilaton, all the scalar fields in the vector multiplets, the gauge fields and the phase of the spinor. This makes it hard to solve the equations analytically. We will find asymptotic solutions near $AdS_4$ and $AdS_2\times S^2$ by expanding the equations in series, and will find an interpolating solution numerically. The problem simplifies when symmetries allow one to set all the massive gauge fields and the phase of the spinor to zero. A solution of this form can be found in the model corresponding to the truncation on $Q^{111}$. This solution is discussed in Section \[numericalQ111\] and corresponds to the class of solutions found in eleven dimensions in [@Donos2012d]. The general case is more complicated. The $M^{111}$ solution discussed in Section \[numericalM110\] is an example of the general case, with most of the fields turned on. Black Hole solutions in $Q^{111}$ {#numericalQ111} --------------------------------- We now construct a black hole interpolating between the $AdS_4 \times Q^{111}$ vacuum and the horizon solutions discussed in Section \[sec:Q111Simp\] with $$\label{Q111Simp2} p^1=p^2\,, \qquad q_1=-q_2\,.$$ The solution should correspond to the M-theory one found in [@Donos2012d].
Due to the high degree of symmetry of the model, we can truncate the set of fields appearing in the solution and consistently set $$v_2=v_1\,,\ \ \ b_3=0\,,\ \ \ b_2=-b_1$$ along the flow. This restriction is compatible with the following simplification on the gauge fields $$\tilde q_2(r)=-\tilde q_1(r)\,,\ \ \ \tilde q_0(r)=0\,,\ \ \ \tilde q_3(r)=0\, .$$ It follows that $$\label{simpcond} k^a_\Lambda \, \tilde q^\Lambda =0\, , \qquad P^3_\Lambda\, \tilde q^\Lambda =0$$ for all $r$. The latter conditions lead to several interesting simplifications. $k^a_\Lambda \, \tilde q^\Lambda =0$ implies that the right-hand side of the Maxwell equations (\[Max1\]) vanishes and no massive vector field is turned on. The Maxwell equations then reduce to conservation of the invariant electric charges $q_\Lambda$, and we can use the definition (\[maginv\]) to find an algebraic expression for $\tilde q_\Lambda$ in terms of the scalar fields. Moreover, the condition $P^3_\Lambda\, \tilde q^\Lambda =0$ implies that the phase $\psi$ of the spinor is constant along the flow. Indeed, with our choice of fields, $A_r=0$ and equation (\[psiEq\]) reduces to $\psi^\prime =0$. The full set of BPS equations reduces to six first order equations for the six quantities $$\{ U,V,v_1,v_3,b_1,\phi \} \, .$$ For simplicity, we study the interpolating solution corresponding to the horizon solution in Section \[sec:Q111Simp\] with $v_1=v_3$. This restriction leaves a family of $AdS_2\times S^2$ solutions which can be parameterized by the value of $v_1$ or, equivalently, by the magnetic charge $p^1$. We perform our numerical analysis for the model with e\^[-2]{} = , v\_1 = v\_3 = , b\_1 = - and electric and magnetic charges p\^1=- 12 , q\_1= . We fixed $e_0=8 \sqrt{2}$. The values of the scalar fields at the $AdS_4$ point are given in (\[AdS4Sol\]).\
It is convenient to define a new radial coordinate by $dt= e^{-U} dr$. $t$ runs from $+\infty$ at the $AdS_4$ vacuum to $-\infty$ at the horizon.
It is also convenient to re-define some of the scalar fields $$v_i(t) = v_i^{AdS} e^{e_i(t)}\, , \qquad \phi(t)=\phi_{AdS} -\frac12 \rho(t) \, ,$$ such that they vanish at the $AdS_4$ point. The metric functions will be also re-defined $$U(t) =u(t)+\log(R_{AdS})\, , \qquad V(t)=v(t)$$ with $u(t)=t,v(t)=2t$ at the $AdS_4$ vacuum. The BPS equations read $$\begin{aligned} u'&=& e^{-e_1-\frac{e_3}{2}} - \frac{3}{4} e^{-e_1-\frac{e_3}{2}-\rho} +\frac{1}{4} e^{e_1-\frac{e_3}{2}-\rho} +\frac{1}{2} e^{\frac{e_3}{2}-\rho} +\frac{3}{8} e^{-\frac{e_3}{2}+2 u-2 v} -\frac{3}{4} e^{-e_1+\frac{e_3}{2}+2 u-2 v}\nonumber\\ &&-\frac{1}{8} e^{e_1+\frac{e_3}{2}+2 u-2 v} - \frac{15 \sqrt{5} e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1}{32\ 2^{3/4}}+\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{16 \sqrt{2}}-\frac{3 e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1^2}{32 \sqrt{2}}\, , \nonumber\\ v'&=& 2 e^{-e_1-\frac{e_3}{2}}-\frac{3}{2} e^{-e_1-\frac{e_3}{2}-\rho}+\frac{1}{2} e^{e_1-\frac{e_3}{2}-\rho}+e^{\frac{e_3}{2}-\rho}+\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{8 \sqrt{2}}\, ,\nonumber\\ e'_1&=& 2 e^{-e_1-\frac{e_3}{2}} -\frac{3}{2} e^{-e_1-\frac{e_3}{2}-\rho} -\frac{1}{2} e^{e_1-\frac{e_3}{2}-\rho} +\frac{3}{2} e^{-e_1+\frac{e_3}{2}+2 u-2 v} -\frac{1}{4} e^{e_1+\frac{e_3}{2}+2 u-2 v}\nonumber \\ &&+\frac{15 \sqrt{5} e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1}{16\ 2^{3/4}} +\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{8 \sqrt{2}} +\frac{3 e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1^2}{16 \sqrt{2}} \, ,\\ e'_3&=& 2 e^{-e_1-\frac{e_3}{2}} -\frac{3}{2} e^{-e_1-\frac{e_3}{2}-\rho} +\frac{1}{2} e^{e_1-\frac{e_3}{2}-\rho} -e^{\frac{e_3}{2}-\rho} -\frac{3}{4} e^{-\frac{e_3}{2}+2 u-2 v} -\frac{3}{2} e^{-e_1+\frac{e_3}{2}+2 u-2 v}\nonumber\\ &&-\frac{1}{4} e^{e_1+\frac{e_3}{2}+2 u-2 v} -\frac{15 \sqrt{5} e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1}{16\ 2^{3/4}} +\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{8 \sqrt{2}} -\frac{3 e^{-e_1+\frac{e_3}{2}+2 u-2 v} b_1^2}{16 \sqrt{2}} \, ,\nonumber\\ b'_1&=& - \frac{5 \sqrt{5} e^{e_1+\frac{e_3}{2}+2 u-2 
v}}{4\ 2^{1/4}} -e^{e_1-\frac{e_3}{2}-\rho} b_1- \frac{1}{2} e^{e_1+\frac{e_3}{2}+2 u-2 v} b_1 \, ,\nonumber\\ \rho'&=& - 3 e^{-e_1-\frac{e_3}{2}-\rho}+e^{e_1-\frac{e_3}{2}-\rho}+2 e^{\frac{e_3}{2}-\rho}+\frac{3 e^{-e_1-\frac{e_3}{2}-\rho} b_1^2}{4 \sqrt{2}} \, .\nonumber\end{aligned}$$ This set of equations has two obvious symmetries. Given a solution, we can generate others by $$\label{simm1} u(t)\to u(t) + d_1 \,, \qquad v(t)\to v(t) + d_1 \,,$$ or by translating all the fields $\phi_i$ in the solution, $$\label{simm2} \phi_i(t) \to \phi_i(t- d_2) \,,$$ where $d_1$ and $d_2$ are arbitrary constants.\
We can expand the equations near the $AdS_4$ UV point. We should stress again that $AdS_4$ is not strictly a solution due to the presence of a magnetic charge at infinity. However, the metric functions $u$ and $v$ approach the $AdS_4$ value and, for large $t$, the linearized equations of motion for the scalar fields are not affected by the magnetic charge, so that we can use much of the intuition from the AdS/CFT correspondence. The spectrum of the consistent truncation around the $AdS_4$ vacuum in the absence of charges has been analyzed in detail in [@Cassani:2012pj]. It consists of two massless and one massive vector multiplet (see Table 1). By expanding the BPS equations for large $t$ we find that there exists a family of asymptotically (magnetic) $AdS$ solutions depending on three parameters, corresponding to two operators of dimension $\Delta=1$ and an operator of dimension $\Delta=4$.
The asymptotic expansion of the solution is $$\begin{aligned} \label{expUVQ111} u(t) &=& t+ \frac{1}{64} e^{-2 t} \left(16-6 \epsilon_1^2-3 \sqrt{2} \beta_1^2\right)+ \cdots \nonumber \\ v(t) &=& 2 t -\frac{3}{32} e^{-2 t} \left(2 \epsilon_1^2+\sqrt{2} \beta_1^2\right) +\cdots \nonumber \\ e_1(t)&=& -\frac{1}{2} e^{-t} \epsilon_1+\frac{1}{80} e^{-2 t} \left(-100-4 \epsilon_1^2-3 \sqrt{2} \beta_1^2\right) +\cdots \non \\ && +\frac{1}{140} e^{-4 t} \left(140 \epsilon_4+ \left(-\frac{375}{8}+ \cdots \right) t\right) +\cdots \nonumber\\ e_3(t)&=& e^{-t} \epsilon_1+\frac{1}{80} e^{-2 t} \left(200-34 \epsilon_1^2-3 \sqrt{2} \beta_1^2\right) +\cdots \non \\ &&+e^{-4 t} \frac{1}{448} \left(1785 + 448 \epsilon_4 - 150 t+\cdots \right ) +\cdots \nonumber\\ b_1(t)&=& e^{-t} \beta_1+e^{-2 t} \left(\frac{5 \sqrt{5}}{4\ 2^{1/4}}-\epsilon_1 \beta_1\right)+ \cdots \\ \rho(t) &=& \frac{3}{40} e^{-2 t} \left(2 \epsilon_1^2-\sqrt{2} \beta_1^2\right)+\cdots +\frac{1}{17920}e^{-4 t} \left(-67575-26880 \epsilon_4 +9000 t +\cdots \right) +\cdots \, .\nonumber \end{aligned}$$ where the dots refer to exponentially suppressed terms in the expansion in $e^{-t}$ or to terms at least quadratic in the parameters $(\epsilon_1,\epsilon_4,\beta_1)$. We also set two arbitrary constant terms appearing in the expansion of $u(t)$ and $v(t)$ to zero for notational simplicity; they can be restored by applying the transformations (\[simm1\]) and (\[simm2\]). The constants $\epsilon_1$ and $\beta_1$ correspond to scalar modes of dimension $\Delta=1$ in the two different massless vector multiplets (cfr Table 7 of [@Cassani:2012pj]). The constant $\epsilon_4$ corresponds to a scalar mode with $\Delta=4$ belonging to the massive vector multiplet. A term $t e^{-4t}$ shows up at the same order as $\epsilon_4$ and it is required for consistency. Notice that, although $e_1=e_3$ both at the UV and IR, the mode $e_1-e_3$ must be turned on along the flow. 
In the IR, $AdS_2\times S^2$ is an exact solution of the BPS system. The relation between the two radial coordinates is $r-r_0\sim e^{ a t }$ with $a= 8 \, 2^{1/4} /3^{1/4}$, where $r_0$ is the position of the horizon. By linearizing the BPS equations around $AdS_2\times S^2$ we find three normalizable modes with behavior $e^{a \Delta t}$ with $\Delta = 0$, $\Delta=1$ and $\Delta=1.37$. The IR expansion is obtained as a double series in $e^{a t}$ and $e^{1.37 a t}$ $$\begin{aligned} \{u(t),v(t),e_1(t),e_3(t),b_1(t),\rho(t) \} =&& \hskip -0.6truecm \{ 1.49 + a \, t , 0.85 + a\, t , -0.49, -0.49, -1.88, -0.37 \} \nonumber \\ && \hskip -0.8truecm +\{1, 1, 0, 0, 0, 0 \} c_1 + \{-1.42, -0.53, 0.76, 0.53, -0.09, 1\} \, c_2 \, e^{a t} \nonumber \\ && \hskip -0.8truecm +\{0.11, 0.11, 0.07, 0.93, -0.54, 1\} \, c_3 \, e^{1.37 a t}\nonumber \\ && \hskip -0.8truecm +\sum_{p,q} {\vec d}^{p,q} c_2^p c_3^q e^{(p + 1.37 q) a t}\, , \end{aligned}$$ where the numbers ${\vec d}^{p,q}$ can be determined numerically at any given order. The two symmetries (\[simm1\]) and (\[simm2\]) are manifest in this expression and correspond to combinations of a shift in $c_1$ and suitable rescalings of $c_2$ and $c_3$. With a total number of six parameters for six equations, we expect that the given IR and UV expansions can be matched at some point in the middle, since the equations are first order and the number of fields is equal to the number of parameters. There will be precisely one solution with the UV and IR asymptotics given above; the general solution will be obtained by applying the transformations (\[simm1\]) and (\[simm2\]). We have numerically solved the system of BPS equations and tuned the parameters in order to find an interpolating solution.
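Schematically, the parameter tuning just described is a shooting problem: one integrates the first-order flow out of the IR expansion and adjusts the free IR coefficients until the required UV behaviour is reached. The toy sketch below illustrates this on a single hypothetical first-order equation (it is *not* the actual six-field BPS system; the flow function, the target value and the tunable parameter `c` are all invented for illustration):

```python
# One-parameter shooting for a first-order flow (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def flow(t, y, c):
    # hypothetical flow with an IR fixed point at y=0 and a
    # c-dependent attracting fixed point at y = 1/(1+c)
    return [y[0] * (1.0 - y[0]) - c * y[0] ** 2]

def uv_mismatch(c):
    # start just off the IR fixed point and integrate "towards the UV"
    sol = solve_ivp(flow, (0.0, 30.0), [1e-3], args=(c,),
                    rtol=1e-10, atol=1e-12)
    # demand that the flow lands on a prescribed UV value, here 1/2
    return sol.y[0, -1] - 0.5

# tune c until the mismatch vanishes; the exact answer is c = 1,
# since the attracting fixed point is y = 1/(1+c)
c_star = brentq(uv_mismatch, 0.5, 2.0)
```

In the paper's setup the single parameter `c` is replaced by the IR coefficients $(c_1,c_2,c_3)$, and the mismatch is measured against the three-parameter UV expansion, but the matching logic is the same.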
The result is shown in Figure \[fig:flowQ111\].\
    ![Plots of $u',v'$ and $\rho$ on the left and of $e_1,e_2$ and $b_1/2$ on the right corresponding to the IR parameters $c_1=-1.208,c_2=0.989,c_3=-0.974$ and the UV parameters $\beta_1=-2.08,\epsilon_1=-1.325, \epsilon_4=5$. []{data-label="fig:flowQ111"}](PlotQ1.pdf) ![Plots of $u',v'$ and $\rho$ on the left and of $e_1,e_2$ and $b_1/2$ on the right corresponding to the IR parameters $c_1=-1.208,c_2=0.989,c_3=-0.974$ and the UV parameters $\beta_1=-2.08,\epsilon_1=-1.325, \epsilon_4=5$. []{data-label="fig:flowQ111"}](PlotQ2.pdf) \
We would like to stress that the asymptotic expansions of the solutions contain integer powers of $r$ (and logs) in the UV ($AdS_4$) and irrational powers depending on the charges in the IR ($AdS_2\times S^2$). This suggests that it would be hard to find analytic solutions of the system of BPS equations (\[pP1\]) - (\[psiEq\]) with running hypermultiplets. By contrast, the static $AdS_4$ black holes in theories without hypermultiplets [@Cacciatori:2009iz] depend only on rational functions of $r$, which made it possible to find an explicit analytic solution. Black Hole solutions in $M^{111}$ {#numericalM110} --------------------------------- Whenever we cannot enforce any symmetry on the flow, things are much harder. This is the case for the interpolating solutions of $M^{111}$, which we now discuss. The solution can also be embedded in the $Q^{111}$ model and is a general prototype of the generic interpolating solution between $AdS_4$ and the horizon solutions discussed in Section \[sec:hyperhorizons\].\
Let us consider an interpolating solution corresponding to the horizon discussed in Section \[sec:M110Solutions\]. The conditions (\[simpcond\]) cannot be imposed along the flow. As a consequence, the phase of the spinor will run and a massive gauge field will be turned on.
Moreover, the IR conditions $b_2=-2 b_1$ and $\tilde q_0=\tilde q_3=0, \tilde q_2=-\tilde q_1$ do not hold at finite $r$, and all gauge and vector scalar fields are turned on. The only simplification comes from the fact that on the locus (\[hyperlocus\]) the right-hand side of the Maxwell equations (\[Max1\]) is proportional to $k^a_\Lambda$. For $M^{111}$, $k^a_1=k^a_2$ and we still have two conserved electric charges $$( q_1 -q_2 )' =0\, , \qquad ( k^a_1 q_0 -k^a_0 q_1)' =0 \, .$$ In other words, two of the Maxwell equations can be reduced to first order constraints while the third remains second order. It is convenient to transform the latter equation into a pair of first order constraints. This can be done by introducing $q_0$ as a new independent field and by using one component of the Maxwell equations and the definition (\[maginv\]) of $q_\Lambda$ as a set of four first order equations for ($\tilde q_0,\tilde q_1,\tilde q_2, q_0$). The set of BPS and Maxwell equations consists of twelve first order equations for the twelve variables $$\{ U,V,v_1,v_2,b_1,b_2,\phi, \psi ,\tilde q_0,\tilde q_1,\tilde q_2, q_0 \} \, .$$ A major simplification arises if we integrate out the gauge fields using (\[qLr\]). The system collapses to a set of eight first order equations for eight unknowns. The resulting set of equations has singular denominators, and it is convenient to keep the extra field $q_0$ and study a system of nine first order equations for $$\{ U,V,v_1,v_2,b_1,b_2,\phi, \psi , q_0 \} \, .$$ The final system has an integral of motion which would allow one to eliminate $q_0$ algebraically in terms of the other fields.\
The system of BPS equations is too long to be reported here, but it can be studied numerically and by power series near the UV and the IR. We will study the flow to the one-parameter family of horizon solutions with $v_1=v_2$ and $b_2=-2 b_1$. These horizons can be parametrized by the value of $v_1$ or, equivalently, by the magnetic charge $p^2$.
We perform our numerical analysis for the model with e\^[-2]{} = , v\_1 = v\_2 = 2\^[1/4]{} , b\_1 = 2\^[1/4]{} and electric and magnetic charges p\^2=- 2 , q\_2= . We fixed $e_0=24 \sqrt{2}$. The values of the scalar fields at the $AdS_4$ point are given in (\[AdS4Sol\]). As in the previous section, it is convenient to define a new radial coordinate by $dt= e^{-U} dr$ and to re-define some of the scalar fields and metric functions $$v_i(t) = v_i^{AdS} e^{e_i(t)}\, , \,\,\, \phi(t)=\phi_{AdS} -\frac12 \rho(t)\, , \,\,\, U(t) =u(t)+\log(R_{AdS})\, , \,\,\, V(t)=v(t) \, .$$ In the absence of charges, the spectrum of the consistent truncation around the $AdS_4$ vacuum consists of one massless and one massive vector multiplet [@Cassani:2012pj] (see Table 1). By expanding the BPS equations for large $t$ we find that there exists a family of asymptotically (magnetic) $AdS$ solutions depending on three parameters, corresponding to operators of dimension $\Delta=1$, $\Delta=4$ and $\Delta=5$. The asymptotic expansion of the solution is $$\begin{aligned} u(t) &=& t -\frac{1}{64} e^{-2 t} \left(-16+24 \epsilon_1^2+3 \sqrt{2} \beta_1^2\right)+\cdots \nonumber\\ v(t) &=& 2 t -\frac{3}{32} e^{-2 t} \left(8 \epsilon_1^2+\sqrt{2} \beta_1^2\right)+\cdots \nonumber\\ e_1(t) &=& e^{-t} \epsilon_1-\frac{1}{80} e^{-2 t} \left(-60+16 \epsilon_1^2+3 \sqrt{2} \beta_1^2\right) +\cdots \non \\ && -\frac{e^{-4 t} (1317+7168 \rho_4+864 t +\cdots )}{10752} +\cdots \nonumber\\ e_2(t) &=& -2 e^{-t} \epsilon_1-\frac{1}{80} e^{-2 t} \left(120+136 \epsilon_1^2+3 \sqrt{2} \beta_1^2\right) +\cdots \non \\ &&-\frac{e^{-4 t} (6297+3584 \rho_4+432 t+\cdots )}{5376} +\cdots \nonumber\\ b_1(t) &=& e^{-t} \beta_1-\frac{1}{4} e^{-2 t} \left(3\ 2^{1/4} \sqrt{3}+4 \epsilon_1 \beta_1\right)+\cdots +\frac{1}{12} e^{-5 t}( m_3 +\cdots ) +\cdots \nonumber\\ b_2(t) &=& -2 e^{-t} \beta_1+\frac{1}{2} e^{-2 t} \left(3\ 2^{1/4} \sqrt{3}+10 \epsilon_1 \beta_1 \right)+\cdots +\frac{1}{12} e^{-5 t}( m_3 +\cdots )
+\cdots \nonumber\\ \rho(t) &=& \frac{3}{40} e^{-2 t} \left(8 \epsilon_1^2-\sqrt{2} \beta_1^2\right) +\cdots + \frac{1}{224} e^{-4 t} (224 \rho_4+27 t +\cdots) +\cdots \nonumber\\ \theta(t) &=& -\frac{15}{64} \sqrt{3} e^{-2 t}+\frac{9}{40} e^{-3 t} \left(3 \sqrt{3} \epsilon_1+2^{3/4} \beta_1\right)+\cdots \nonumber\\ &+ &\!\!\!\!\! \frac{e^{-5 t} \! \left(12 \sqrt{3} \epsilon_1 (2529+3312 t)+2^{1/4} 7\left(160 \sqrt{2} m_3-9 \sqrt{2} \beta_1 (-157+264 t)\right) +\cdots \right)}{35840}+\cdots \nonumber\\ q_0(t) &=& -\frac{15 \sqrt{3}}{8\ 2^{3/4}}+\frac{27}{5} e^{-t} \left(2^{1/4} \sqrt{3} \epsilon_1-\beta_1\right)+\cdots \non \\ &&+\frac{1}{140} e^{-3 t} \left(140 m_3+27 \left(92\ 2^{1/4} \sqrt{3} \epsilon_1-77 \beta_1\right) t\right)+\cdots \nonumber \end{aligned}$$ where the dots refer to exponentially suppressed terms in the expansion in $e^{-t}$ or to terms at least quadratic in the parameters $(\epsilon_1,\rho_4,\beta_1,m_3)$. As for the $Q^{111}$ black hole, we set two arbitrary constant terms in the expansion of $u(t)$ and $v(t)$ to zero for notational simplicity; they can be restored applying the transformations (\[simm1\]) and (\[simm2\]). The parameters $\epsilon_1$ and $\beta_1$ are associated with two modes with $\Delta=1$ belonging to the massless vector multiplet, while the parameters $\rho_4$ and $m_3$ correspond to a scalar with $\Delta=4$ and a gauge mode with $\Delta=5$ in the massive vector multiplet (cfr Table 7 of [@Cassani:2012pj]).\ Around the $AdS_2\times S^2$ vacuum there are four normalizable modes with behavior $e^{a \Delta t}$ with $\Delta = 0$, $\Delta=1$, $\Delta=1.44$ and $\Delta=1.58$ where $a=4 \sqrt{2}$. 
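Mode exponents like the $\Delta$ values quoted here come from linearizing the first-order flow $y'=f(y)$ around the fixed point, $\delta y' = J\,\delta y$: the eigenvalues of the Jacobian $J$ set the allowed behaviours $e^{\lambda t}$, and only the modes regular at the horizon are kept. A generic sketch of that computation (the matrix below is an illustrative stand-in, not the actual nine-field Jacobian):

```python
# Linearized-mode analysis around a fixed point (illustrative only).
import numpy as np

# hypothetical Jacobian of a two-field first-order flow at its fixed point
J = np.array([[0.0, 1.0],
              [2.0, 1.0]])

lams = np.linalg.eigvals(J)
# modes grow as e^{λ t}; those with Re λ > 0 decay towards the horizon
# at t → -∞ and are the normalizable ones kept in the IR expansion
normalizable = sorted(l.real for l in lams if l.real > 0)
```

For this toy matrix the eigenvalues are $2$ and $-1$, so a single normalizable mode survives; in the text the analogous eigenvalues (in units of $a$) are $0$, $1$, $1.44$ and $1.58$.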
At linear order the corresponding fluctuations are given by modes $(U,V,v_1,v_2,b_1,b_2,\phi, \psi , q_0 )$ proportional to $$\begin{aligned} && \{1,1,0,0,0,0,0,0,0\} \nonumber \\ && \{-2.45, -0.97, 1.22, 0.31, -0.09, 0.40, 0.82, \ -0.09, 1\} \nonumber \\ && \{0.05, 0.05, 0.30, -0.39, -0.17, -0.64, 0.26, -0.41, 1\} \nonumber\\ && \{-0.27, -0.27, -1.85, 2.62, -4.81, -2.22, -1.23, -3.22, 1\} \end{aligned}$$ The mode with $\Delta=0$ is just a common shift in the metric functions corresponding to the symmetry (\[simm1\]). The other modes give rise to a triple expansion in powers $$\sum_{p,q,r} {\vec d}_{p,q,r}\, c_1^p c_2^q c_3^r\, e^{(p + 1.44 q + 1.58 r) a t}$$ of all the fields. We have a total number of eight parameters for nine equations, which possess an algebraic integral of motion. We thus expect that the given IR and UV expansions can be matched at finite $t$. With some pain, and using a precision much greater than the one given in the text above, we have numerically solved the system of BPS equations and found an interpolating solution. The result is shown in Figure \[fig:flowM110\].     ![Plots of $u',v', (2 b_1+b_2)/3,\rho$ on the left and of $(b_2-b_1)/3, e_1,e_2,\pi-\psi$ on the right corresponding to the value $c_1=1.7086,c_2=-2.4245,c_3=0.6713,c_4=-3.7021$. The UV expansion will be matched up to the transformations (\[simm1\]) and (\[simm2\]).[]{data-label="fig:flowM110"}](PlotM1.pdf) ![Plots of $u',v', (2 b_1+b_2)/3,\rho$ on the left and of $(b_2-b_1)/3, e_1,e_2,\pi-\psi$ on the right corresponding to the value $c_1=1.7086,c_2=-2.4245,c_3=0.6713,c_4=-3.7021$. The UV expansion will be matched up to the transformations (\[simm1\]) and (\[simm2\]).[]{data-label="fig:flowM110"}](PlotM2.pdf) \
[**Acknowledgements**]{} A.Z. is supported in part by INFN, the MIUR-FIRB grant RBFR10QS5J “String Theory and Fundamental Interactions”, and by the MIUR-PRIN contract 2009-KHZKRX. We would like to thank I. Bah, G. Dall’Agata, J. Gauntlett, K. Hristov, D. Klemm, J.
Simon and B. Wecht for useful discussions and comments. M. P. would like to thank the members of the Theory Group at Imperial College for their kind hospitality and support while this work was being completed. Four Dimensional Gauged Supergravity {#gsugra} ==================================== In this Appendix, in order to fix notation and conventions, we recall a few basic facts about $\mathcal{N}=2$ gauged supergravity. We use the standard conventions of [@Andrianopoli:1996vr; @Andrianopoli:1996cm]. The fields of $\N=2$ supergravity are arranged into one graviton multiplet, $n_v$ vector multiplets and $n_h$ hypermultiplets. The graviton multiplet contains the metric, the graviphoton $A_\mu^0$, and an $SU(2)$ doublet of gravitinos of opposite chirality, ($ \psi_\mu^A, \psi_{\mu \, A} $), where $A=1,2$ is an $SU(2)$ index. The vector multiplets consist of a vector $A^I_\mu$, two spin-1/2 fermions of opposite chirality transforming as an $SU(2)$ doublet, ($\lambda^{i \,A}, \lambda^{\bar{i}}_A$), and one complex scalar $z^i$. $A=1,2$ is the $SU(2)$ index, while $I$ and $i$ run over the vector multiplets, $I= 1, \dots, n_{\rm V}$, $i= 1, \dots, n_{\rm V}$. Finally the hypermultiplets contain two spin-1/2 fermions of opposite chirality, ($\zeta_\alpha, \zeta^\alpha$), and four real scalar fields $q^u$, where $\alpha = 1, \dots, 2 n_{\rm H}$ and $u = 1, \ldots, 4 n_{\rm H}$. The scalars in the vector multiplets parametrise a special Kähler manifold of complex dimension $n_{\rm V}$, $\mathcal{M}_{\rm SK}$, with metric $$g_{i \bar{j}} = - \,\partial_i \partial_{\bar{j}} K(z, \bar{z})\,,$$ where $ K(z, \bar{z})$ is the Kähler potential on $\mathcal{M}_{\rm SK}$. This can be computed by introducing homogeneous coordinates $X^\Lambda(z)$ and defining a holomorphic prepotential $\mathcal{F}(X)$, which is a homogeneous function of degree two, $$\label{Kpotdef} K(z, \bar{z}) = - \log\left[ i \left( \bar{X}^\Lambda F_\Lambda - X^\Lambda \bar{F}_\Lambda \right) \right] ,$$ where $F_\Lambda = \del_\Lambda F$.
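As a concrete check of the special-geometry formula $e^{-K} = i\,(\bar X^\Lambda F_\Lambda - X^\Lambda \bar F_\Lambda)$, one can evaluate it for the cubic prepotential $\mathcal F = -X^1X^2X^3/X^0$ used in the truncations of this paper, in special coordinates $X^\Lambda = (1, z^i)$ with $z^i = b_i + i v_i$: the result is $e^{-K} = 8\, v_1 v_2 v_3$, consistent with the combination $\cK = \coeff{1}{8}e^{-K}$ used in the uplift formula. A short symbolic verification (sketch; variable names are ours):

```python
# Verify e^{-K} = 8 v1 v2 v3 for the cubic prepotential F = -X1 X2 X3 / X0.
import sympy as sp

b = sp.symbols('b1:4', real=True)
v = sp.symbols('v1:4', real=True)
z = [bi + sp.I * vi for bi, vi in zip(b, v)]

# holomorphic prepotential in homogeneous coordinates
Xs = sp.symbols('X0:4')
F = -Xs[1] * Xs[2] * Xs[3] / Xs[0]

# F_Lambda = dF/dX^Lambda, evaluated on the section X = (1, z^1, z^2, z^3)
subs = dict(zip(Xs, [sp.Integer(1)] + z))
FL = [sp.diff(F, Xl).subs(subs) for Xl in Xs]
X = [sp.Integer(1)] + z

# e^{-K} = i (Xbar^L F_L - X^L Fbar_L)
eK = sp.I * sum(sp.conjugate(Xl) * Fl - Xl * sp.conjugate(Fl)
                for Xl, Fl in zip(X, FL))
eK = sp.simplify(sp.expand(eK))
assert sp.simplify(eK - 8 * v[0] * v[1] * v[2]) == 0
```

The $b_i$-dependence cancels identically, as it must, since the Kähler potential of a cubic model depends only on the imaginary parts of the special coordinates.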
In the paper we will use both the holomorphic sections $(X^\Lambda, F_\Lambda)$ and the symplectic sections $$(L^\Lambda, M_\Lambda) = e^{K/2} (X^\Lambda, F_\Lambda) \, .$$ The scalars in the hypermultiplets parametrise a quaternionic manifold of real dimension $4 n_{\rm H}$, $\mathcal{M}_{\rm Q}$, with metric $h_{uv}$. The bosonic Lagrangian is \[boslag\] \_[bos]{} & =& - R + i ( |\_ \^[- ]{}\_[ ]{} \^[- ]{} - \_ \^[+ ]{}\_[ ]{} \^[+ ]{} )\ && + g\_[i |[j]{}]{} \^z\^i \_|[z]{}\^[|[j]{}]{} + h\_[u v]{} \^q\^u \_q\^[v]{} - (z, |[z]{}, q) , where $\Lambda, \Sigma = 0, 1, \ldots, n_{\rm V}$. The gauge field strengths are defined as \^\_ = F\^\_\_F\^ , with $F^\Lambda_{\mu \nu} = \frac{1}{2} (\partial_\mu A^\Lambda_\nu - \partial_\nu A^\Lambda_\mu)$. In this notation, $A^0$ is the graviphoton and $A^\Lambda$, with $\Lambda = 1, \ldots, n_{\rm V}$, denote the vectors in the vector multiplets. The matrix $\cN_{\Lambda \Sigma}$ of the gauge kinetic term is a function of the vector multiplet scalars \[periodmat\] \_ = |\_ + 2 i The covariant derivatives are defined as $$\label{scalarder} \nabla_\mu z^i = \partial_\mu z^i + k^i_{\Lambda}\, A^{\Lambda}_{\mu} \, , \qquad \nabla_\mu q^u = \partial_\mu q^u + k^u_{\Lambda}\, A^{\Lambda}_{\mu} \, ,$$ where $k^i_\Lambda$ and $k^u_\Lambda$ are the Killing vectors associated with the isometries of the vector and hypermultiplet scalar manifolds that have been gauged. In this paper we will only gauge (electrically) abelian isometries of the hypermultiplet moduli space. The Killing vectors corresponding to quaternionic isometries have associated prepotentials: these are a set of real functions in the adjoint of SU(2), satisfying $$\Om^x_{uv}\, k^u_\Lambda = -\nabla_v P^x_\Lambda \, ,$$ where $\Om^x_{uv} = d \omega^x + 1/2 \epsilon^{x y z} \omega^y \wedge \omega^z$ and $\nabla_v$ are the curvature and covariant derivative on $\cM_{{\rm Q}}$. In the specific models we consider in the text, one can show that the Killing vectors preserve the connection $\omega^x$ and the curvature $\Omega^x_{uv}$.
This allows us to simplify the prepotential equations, which reduce to $$P^x_\Lambda = k^u_\Lambda \, \omega^x_u .$$ Typically, in models obtained from $M$/string theory compactifications, the scalar fields have both electric and magnetic charges under the gauge symmetries. However, by a symplectic transformation of the sections $(X^\Lambda, F_\Lambda)$, it is always possible to put the theory in a frame where all scalars are electrically charged. Such a transformation[^11] leaves the Kähler potential invariant, but changes the period matrix and the prepotential $\mathcal{F}(X)$. The models we consider in this paper [@Cassani:2012pj] are of this type: they have a cubic prepotential and both electric and magnetic gaugings of some isometries of the hypermultiplet moduli space. The idea is then to perform a symplectic rotation to a frame with purely electric gaugings, allowing for sections $(\tX^{\Lam},\widetilde{F}_{\Lam})$ which are a general symplectic rotation of those obtained from the cubic prepotential. The scalar potential couples the hyper and vector multiplets, and is given by $$V(z, \bar{z}, q) = \left( g_{i \bar{j}} k^i_\Lambda k^{\bar{j}}_\Sigma + 4 h_{u v} k^u_\Lambda k^v_\Sigma \right) \bar{L}^\Lambda L^\Sigma + \left( f_i^\Lambda g^{i \bar{j}} \bar{f}^{\Sigma}_{\bar{j}} - 3 \bar{L}^\Lambda L^\Sigma \right) P^x_\Lambda P^x_\Sigma ,$$ where $L^\Lambda$ are the symplectic sections on $\mathcal{M}_{\rm SK}$, $f_i^\Lambda= (\partial_i + \frac{1}{2} \partial_i K) L^\Lambda$ and $P^x_\Lambda$ are the Killing prepotentials. Maxwell's equation is $$\nabla_\mu \left( \mathcal{I}_{\Lambda \Sigma} F^{\Sigma \, \mu \nu} + \mathcal{R}_{\Lambda \Sigma} \, {}^*F^{\Sigma \, \mu \nu} \right) = h_{uv} k^u_\Lambda \nabla^\nu q^v , \label{Maxwelleq}$$ where, for simplicity of notation, we have defined the following matrices $$\mathcal{R}_{\Lambda \Sigma} = {\rm Re} \, \mathcal{N}_{\Lambda \Sigma} , \qquad \mathcal{I}_{\Lambda \Sigma} = {\rm Im} \, \mathcal{N}_{\Lambda \Sigma} .$$ The full Lagrangian is invariant under $\cN =2$ supersymmetry. 
In the electric frame, the variations of the fermionic fields are given by \[gravitinoeq\] \_[A]{}&=& \_\_A + i S\_[AB]{} \_\^B + 2i \_ L\^ \_\^[- ]{} \^\_[AB]{} \^B ,\ \[gauginoeq\] \^[iA]{}&=& i \_z\^i \^\^A -g\^[i]{} \^\_ \_ \^[- ]{}\_ \^\^[AB]{}\_B + W\^[i A B]{} \_B ,\ \[hyperinoeq\] \_&=& i \^[B]{}\_u\_q\^u \^\^A \_[AB]{}\_ + N\^[A]{}\_ \_A , where $ \cU^{B\beta}_u$ are the vielbeine on the quaternionic manifold and S\_[AB]{}&=& (\_x)\_A\^[C]{} \_[BC]{} P\^x\_L\^ ,\ W\^[iAB]{}&=&\^[AB]{}k\_\^i |L\^+ [i]{}(\_x)\_[C]{}\^[B]{} \^[CA]{} P\^x\_ g\^[ij\^]{} [|f]{}\_[j\^]{}\^ , \[pesamatrice\]\ [N]{}\^A\_&=& 2 [U]{}\_[u]{}\^A k\^u\_ |L\^ . Notice that the covariant derivative on the spinors \_\_A = \_\_A + (\^x)\_A\^[ B]{} A\^\_ P\^x\_ \_B . contains a contribution from the gauge fields from the vector-$U(1)$ connection \_\_A = (D\_+ A\_)\_A +\^x\_(\^x)\_A\^[ B]{}\_B , \[DepsA\] the hyper-$SU(2)$ connection and the gaugings (see eqs. 4.13,7.57, 8.5 in [@Andrianopoli:1996cm]) && \_\^x= \_q\^u \^x\_u ,\ && A\_= (K\_i \_z\^i -K\_ \_z\^ ) \[A1\] . Derivation of the BPS Equations {#sec:BPSEqs} =============================== In this section we consider an ansatz for the metric and the gauge fields that allows for black-holes with spherical or hyperbolic horizons, and we derive the general conditions for 1/4 BPS solutions. The metric and the gauge fields are taken to be \[ansApp\] ds\^2&=& e\^[2U]{} dt\^2- e\^[-2U]{} dr\^2- e\^[2(V-U)]{} (d\^2+F()\^2 d\^2)\ A\^&=& \^(r) dt- p\^(r) F’() d, where the warp factors $U$ and $V$ are functions of the radial coordinate $r$ and F()={ & S\^2 (=1)\ & \^2 (=-1) . The modifications needed for the flat case are discussed at the end of Section \[sec:BPSflow\]. We also assume that all scalars in the vector and hypermultiplets, as well as the Killing spinors $\epsilon_A$ are functions of the radial coordinate only. 
To derive the BPS conditions it is useful to introduce the central charge &=&p\^M\_- q\_L\^\ &=& L\^\_ (e\^[2(V-U)]{} \^+ ip\^) , where $q_\Lam$ is defined in (\[maginv\]) and its covariant derivative D\_ =\^\_ \_ e\^[2(V-U)]{} ’\^ +ip\^ . In the case of flat space we need to replace $\kappa p^\Lam \rightarrow - p^\Lam$ in the definition (\[maginv\]) of $q_\Lam$ and in the above expression for ${\cal Z}$. Gravitino Variation ------------------- With the ansatz , the gravitino variations become 0&=&\^[1]{}\_A + e\^[-U]{}\^P\^x\_ \^0(\^x)\_A\^[ B]{} \_B +iS\_[AB]{}\^B - e\^[2 (U- V)]{} \_+ \_[AB]{} \^B \[gr1\] ,\ 0&=&\^1\_1\_A +i S\_[AB]{}\^B - e\^[2 (U- V)]{} \_- \_[AB]{} \^B \[gr2\] ,\ 0&=& (V’-U’)e\^U \^1\_A+ i S\_[AB]{}\^B + e\^[2 (U- V)]{} \_- \_[AB]{} \^B \[gr3\] ,\ 0&=&e\^[U-V]{} \^2 \_A+(V’-U’)e\^U \^1 \_A - e\^[U-V]{} p\^P\^x\_ \^3(\^x)\_A\^[ B]{} \_B +iS\_[AB]{}\^B\ && + e\^[2 (U- V)]{} \_+ \_[AB]{} \^B \[gr4\] , where, to simplify notations, we introduced the quantity \_= \^[01]{} i \^[02]{} (F\^[-1]{} F\^ \_ L\^p’\^) . Let us consider first . The term proportional to $F^\prime$ must be separately zero, since it is the only $\tha$-dependent one. This implies \[algc\] \_ L\^p’\^=0 . Similarly, setting to zero the $\tha$-dependent terms in , which is the usual statement of [*setting the gauge connection equal to the spin connection*]{}, gives the projector \[proj1\] | | \_A = - p\^P\^x\_(\^x)\_A\^[ B]{} \^[01]{} \_B . This constraint also holds in the case of flat horizon if we set $\kappa=0$. The $\tha$-independent parts of and are equal and give a second projector \[proj2\] S\_[AB]{}\^B = (V’-U’)e\^U \^1\_A - e\^[2 (U-V)]{} \^[01]{} \_[AB]{} \^B . Subtracting the $\tha$ independent parts of and gives a third projector \[proj3\] \_A= - e\^[U-2V]{} \_[AB]{}\^[0]{} \^B + e\^[-2U]{} \^P\^x\_ \^[01]{}(\^x)\_A\^[ B]{} \_B . 
Finally, subtracting and we obtain an equation for the radial dependence of the spinor \_1 \_A=\_A + e\^[-U]{}\^P\^x\_ \^[01]{}(\^x)\_A\^[ B]{} \_B. \[raddep\] In total we get three projectors, - , one differential relation on the spinor and one algebraic constraint . The idea is to further simplify these equations so as to ensure that we end up with two projectors. From now on we will specialize to the case of spherical or hyperbolic symmetry, since this is what we will use in the paper. In order to reduce the number of projectors we impose the constraint \^P\_\^x = c e\^[2U]{} p\^P\_\^x,   x=1,2,3 \[pcq\] for some real function $c$. By squaring we obtain the algebraic condition (p\^P\_\^x)\^2= 1 which can be used to rewrite as \[ceq\] c= e\^[-2U]{}\^P\_\^x p\^P\_\^x. Substituting in and using , we obtain the projector \[projNa\] \_A= - \_[A B]{} \^0 \^B which, squared, gives the norm of $\cZ$ ||\^2= e\^[4V - 2U]{}\[(2U’-V’)\^2 +c\^2\] . \[normZ\] Then we can rewrite as \_A=ie\^[i ]{} \_[AB]{}\^[0]{} \^B \[proj4\] , where $e^{i \psi}$ is the relative phase between $\cZ$ and $2 U^\prime - V^\prime - i c$ \[phase\] e\^[i ]{} = - . Using the definition of $S_{AB}$ given in and the projectors and , we can reduce to a scalar equation i \^P\^x\_ p\^P\_\^x = e\^[2(U-V)]{} e\^[- i]{} -(V’-U’)e\^[U]{} , \[LPrel\] where we defined \^= e\^[- i]{}L\^=\_r+ i \_i. Combining and , we can also write two equations for the warp factors && e\^U U’= -i \^P\^x\_ p\^P\_\^x - e\^[2 (U- V)]{} e\^[-i ]{} + i c e\^[U]{} ,\ && e\^U V’ = -2i \^P\^x\_ p\^P\_\^x + i c e\^[U]{} . Using the projectors above, becomes \_r \_A=-A\_r\_A -\^x\_[r]{}(\^x)\_A\^[ B]{}\_B+\_A - \_A. Gaugino Variation ----------------- The gaugino variation is \[zdot1\] i e\^[U]{} z’\^i \^1 \^A +e\^[2 (U-V)]{} g\^[i |[j]{}]{} \^[AB]{}\_B + W\^[i A B]{} \_B = 0 . $\cM$ is the only $\tha$-dependent term and must be set to zero separately, giving \[Bpfbar\] \^\_ \_ p’\^= 0 . 
Combining and , and using standard orthogonality relations between the sections $X^\Lambda$, we conclude that p’\^=0 . Continuing with , we use again and to obtain \[gauge3\] e\^[-i ]{} e\^[U]{} z’\^i = e\^[2 (U-V)]{} g\^[i |[j]{}]{} D\_ - i g\^[i]{} [|f]{}\_\^ P\_\^xp\^P\_\^x . Hyperino Variation ------------------ The hyperino variation gives i \_ \^[B]{}\_u e\^U \^1 q’\^u + \^k\^u\_ e\^[-U]{} \^0 -F\^[-1]{}F’ e\^[U-V]{} p\^k\^u\_\^3 \_[AB]{}\^A+ 2 [U]{}\_[u]{}\^A k\^u\_ \^ \_A = 0 . First, we need to set the $\tha$-dependent part to zero k\_\^u p\^= 0 . \[kqzero1\] The projectors and can be used to simplify the remaining equation - e\^[U]{} q’\^u \^[B]{}\_[ u]{} p\^P\^x\_(\^x)\_B\^C \_C + \^[A]{}\_[ u]{} ( 2 k\^u\_ \^ - e\^[-U]{} \^k\^u\_) \^A = 0 , which can then be reduced to a scalar equation \[hyper1\] -i h\_[uv]{} q’\^u +e\^[-2U]{}p\^P\^y\_\^\_v P\^y\_- 2e\^[-U]{} p\^P\^x\_\_v (\^ P\^x\_) = 0 . Using the standard relations (we use the conventions of [@Andrianopoli:1996vr]) -i\_[u]{}\^[xv]{} \_[v]{}\^[A]{}&=& \^[B]{}\_[u]{}(\^x)\_B\^[ A]{} ,\ \^x\_[uw]{} \^[yw]{}\_[  v]{}&=& -\^[xy]{}h\_[uv]{}-\^[xyz]{}\^z\_[uv]{} ,\ k\^u\_\^x\_[uv]{}&=&- \_v P\^x\_ , we can reduce to -i h\_[uv]{} q’\^u +e\^[-2U]{}p\^P\^y\_\^\_v P\^y\_- 2e\^[-U]{} p\^P\^x\_\_v (\^ P\^x\_) =0 . The real and imaginary parts give q’\^u &=&2e\^[-U]{} h\^[uv]{}\_v p\^P\^x\_\_i\^ P\^x\_ ,\ 0&=& \^k\^u\_ - 2 e\^U \_r\^k\^u\_ . Summary of BPS Flow Equations ----------------------------- It is worthwhile at this point to summarize the BPS equations. 
The algebraic equations are p’\^&=& 0 \[BPS1\] ,\ ( p\^P\_\^x)\^2&=&1 , \[BPS2\]\ k\_\^u p\^&=& 0 , \[BPS3\]\ \^P\^x\_&=& c e\^[2U]{} p\^P\^x\_ , \[BPS4\]\ \^k\^u\_ &=& 2 e\^U \_r\^k\^u\_ , \[BPS5\] while the differential equations are e\^U U’&=& -i \^P\^x\_ p\^P\_\^x + e\^[-i ]{} + i c e\^[U]{} ,\ e\^U V’ &=& -2i \^P\^x\_ p\^P\_\^x + i c e\^[U]{} ,\ e\^[-i ]{} e\^[U]{} z’\^i &=& \^i - i g\^[i]{} [|f]{}\_\^ P\_\^xp\^P\_\^x , \[gaugino\]\ q’\^u&=&2e\^[-U]{} h\^[uv]{}\_v p\^P\^x\_\_i\^ P\^x\_ . In the case of flat horizon equation (\[BPS2\]) is replaced by $( p^\Lam P_\Lam^x)^2=0$. Maxwell’s Equation ------------------ Maxwell’s equation is \_ \_ F\^[ ]{} + \_ \^F\^\_= h\_[uv]{} k\^u\_\^q\^v , which gives q’\_-e\^[2(V-U)]{} \_ ’\^+\_ p\^’= 2 e\^[2V-4U]{}h\_[uv]{} k\^u\_k\^v\_\^ In the case of flat horizon we need to replace $\kappa p^\Lam\rightarrow - p^\Lam$. [10]{} S. L. Cacciatori and D. Klemm, “[Supersymmetric AdS(4) black holes and attractors]{},” [*JHEP*]{} [**1001**]{} (2010) 085, [[0911.4926]{}](http://arXiv.org/abs/0911.4926). G. Dall’Agata and A. Gnecchi, “[Flow equations and attractors for black holes in N = 2 U(1) gauged supergravity]{},” [*JHEP*]{} [**1103**]{} (2011) 037, [[1012.3756]{}](http://arXiv.org/abs/1012.3756). K. Hristov and S. Vandoren, “[Static supersymmetric black holes in AdS4 with spherical symmetry]{},” [*JHEP*]{} [**1104**]{} (2011) 047, [[1012.4314]{}](http://arXiv.org/abs/1012.4314). M. Cvetic, M. Duff, P. Hoxha, J. T. Liu, H. Lu, [*et al.*]{}, “[Embedding AdS black holes in ten-dimensions and eleven-dimensions]{},” [*Nucl.Phys.*]{} [ **B558**]{} (1999) 96–126, [[hep-th/9903214]{}](http://arXiv.org/abs/hep-th/9903214). M. J. Duff and J. T. Liu, “Anti-de sitter black holes in gauged n = 8 supergravity,” [*Nucl. Phys.*]{} [**B554**]{} (1999) 237–253, [[hep-th/9901149]{}](http://arXiv.org/abs/hep-th/9901149). B. de Wit and H. Nicolai, “N=8 supergravity with local so(8) x su(8) invariance,” [*Phys. Lett.*]{} [**B108**]{} (1982) 285. J. P. 
Gauntlett and O. Varela, “[Consistent Kaluza-Klein reductions for general supersymmetric AdS solutions]{},” [*Phys.Rev.*]{} [**D76**]{} (2007) 126007, [[0707.2315]{}](http://arXiv.org/abs/0707.2315). J. P. Gauntlett, S. Kim, O. Varela, and D. Waldram, “[Consistent supersymmetric Kaluza–Klein truncations with massive modes]{},” [*JHEP*]{} [**04**]{} (2009) 102, [[arXiv:0901.0676]{}](http://arXiv.org/abs/arXiv:0901.0676). D. Cassani, P. Koerber, and O. Varela, “[All homogeneous N=2 M-theory truncations with supersymmetric AdS4 vacua]{},” [*JHEP*]{} [**1211**]{} (2012) 173, [[1208.1262]{}](http://arXiv.org/abs/1208.1262). A. Donos, J. P. Gauntlett, N. Kim, and O. Varela, “[Wrapped M5-branes, consistent truncations and AdS/CMT]{},” [*JHEP*]{} [**1012**]{} (2010) 003, [[1009.3805]{}](http://arXiv.org/abs/1009.3805). D. Cassani and P. Koerber, “[Tri-Sasakian consistent reduction]{},” [*JHEP*]{} [**1201**]{} (2012) 086, [[1110.5327]{}](http://arXiv.org/abs/1110.5327). A.-K. Kashani-Poor and R. Minasian, “Towards reduction of type ii theories on su(3) structure manifolds,” [*JHEP*]{} [**03**]{} (2007) 109, [[hep-th/0611106]{}](http://arXiv.org/abs/hep-th/0611106). A.-K. Kashani-Poor, “[Nearly Kaehler Reduction]{},” [*JHEP*]{} [**11**]{} (2007) 026, [[arXiv:0709.4482]{}](http://arXiv.org/abs/arXiv:0709.4482). J. P. Gauntlett and O. Varela, “[Universal Kaluza-Klein reductions of type IIB to N=4 supergravity in five dimensions]{},” [*JHEP*]{} [**1006**]{} (2010) 081, [[1003.5642]{}](http://arXiv.org/abs/1003.5642). K. Skenderis, M. Taylor, and D. Tsimpis, “[A Consistent truncation of IIB supergravity on manifolds admitting a Sasaki-Einstein structure]{},” [ *JHEP*]{} [**1006**]{} (2010) 025, [[ 1003.5657]{}](http://arXiv.org/abs/1003.5657). D. Cassani, G. Dall’Agata, and A. F. Faedo, “[Type IIB supergravity on squashed Sasaki-Einstein manifolds]{},” [*JHEP*]{} [**1005**]{} (2010) 094, [[1003.4283]{}](http://arXiv.org/abs/1003.4283). J. T. Liu, P. Szepietowski, and Z. 
Zhao, “[Supersymmetric massive truncations of IIb supergravity on Sasaki-Einstein manifolds]{},” [*Phys.Rev.*]{} [ **D82**]{} (2010) 124022, [[1009.4210]{}](http://arXiv.org/abs/1009.4210). I. Bena, G. Giecold, M. Grana, N. Halmagyi, and F. Orsi, “[Supersymmetric Consistent Truncations of IIB on $T^{1,1}$]{},” [*JHEP*]{} [**1104**]{} (2011) 021, [[arXiv:1008.0983]{}](http://arXiv.org/abs/arXiv:1008.0983). D. Cassani and A. F. Faedo, “[A Supersymmetric consistent truncation for conifold solutions]{},” [*Nucl.Phys.*]{} [**B843**]{} (2011) 455–484, [[1008.0883]{}](http://arXiv.org/abs/1008.0883). J. M. Maldacena and C. Nunez, “Supergravity description of field theories on curved manifolds and a no go theorem,” [*Int. J. Mod. Phys.*]{} [**A16**]{} (2001) 822–855, [[hep-th/0007018]{}](http://arXiv.org/abs/hep-th/0007018). E. Witten, “Topological sigma models,” [*Commun. Math. Phys.*]{} [**118**]{} (1988) 411. D. Gaiotto, “[N=2 dualities]{},” [*JHEP*]{} [**1208**]{} (2012) 034, [[0904.2715]{}](http://arXiv.org/abs/0904.2715). D. Gaiotto, G. W. Moore, and A. Neitzke, “[Wall-crossing, Hitchin Systems, and the WKB Approximation]{},” [[0907.3987]{}](http://arXiv.org/abs/0907.3987). F. Benini, Y. Tachikawa, and B. Wecht, “[Sicilian gauge theories and N=1 dualities]{},” [*JHEP*]{} [**1001**]{} (2010) 088, [[0909.1327]{}](http://arXiv.org/abs/0909.1327). I. Bah, C. Beem, N. Bobev, and B. Wecht, “[Four-Dimensional SCFTs from M5-Branes]{},” [*JHEP*]{} [**1206**]{} (2012) 005, [[1203.0303]{}](http://arXiv.org/abs/1203.0303). M. M. Caldarelli and D. Klemm, “[Supersymmetry of Anti-de Sitter black holes]{},” [*Nucl.Phys.*]{} [**B545**]{} (1999) 434–460, [[hep-th/9808097]{}](http://arXiv.org/abs/hep-th/9808097). J. P. Gauntlett, N. Kim, S. Pakis, and D. Waldram, “[Membranes wrapped on holomorphic curves]{},” [*Phys.Rev.*]{} [**D65**]{} (2002) 026003, [[hep-th/0105250]{}](http://arXiv.org/abs/hep-th/0105250). A. Donos, J. P. Gauntlett, and N. 
Kim, “[AdS Solutions Through Transgression]{},” [*JHEP*]{} [**0809**]{} (2008) 021, [[0807.4375]{}](http://arXiv.org/abs/0807.4375). A. Donos and J. P. Gauntlett, “[Supersymmetric quantum criticality supported by baryonic charges]{},” [*JHEP*]{} [**1210**]{} (2012) 120, [[1208.1494]{}](http://arXiv.org/abs/1208.1494). O. Aharony, O. Bergman, D. L. Jafferis, and J. Maldacena, “[N=6 Superconformal Chern-Simons-matter Theories, M2-branes and Their Gravity Duals]{},” [ *JHEP*]{} [**0810**]{} (2008) 091, [[0806.1218]{}](http://arXiv.org/abs/0806.1218). D. Fabbri [*et al.*]{}, “3d superconformal theories from sasakian seven-manifolds: New nontrivial evidences for ads(4)/cft(3),” [*Nucl. Phys.*]{} [**B577**]{} (2000) 547–608, [[hep-th/9907219]{}](http://arXiv.org/abs/hep-th/9907219). D. L. Jafferis and A. Tomasiello, “[A simple class of N=3 gauge/gravity duals]{},” [[arXiv:0808.0864]{}](http://arXiv.org/abs/arXiv:0808.0864). A. Hanany and A. Zaffaroni, “[Tilings, Chern-Simons Theories and M2 Branes]{},” [*JHEP*]{} [**0810**]{} (2008) 111, [[ 0808.1244]{}](http://arXiv.org/abs/0808.1244). D. Martelli and J. Sparks, “[Notes on toric Sasaki-Einstein seven-manifolds and $AdS_4/CFT_3$]{},” [[arXiv:0808.0904]{}](http://arXiv.org/abs/arXiv:0808.0904). A. Hanany, D. Vegh, and A. Zaffaroni, “[Brane Tilings and M2 Branes]{},” [ *JHEP*]{} [**0903**]{} (2009) 012, [[ 0809.1440]{}](http://arXiv.org/abs/0809.1440). D. Martelli and J. Sparks, “[AdS4/CFT3 duals from M2-branes at hypersurface singularities and their deformations]{},” [*JHEP*]{} [**12**]{} (2009) 017, [[0909.2036]{}](http://arXiv.org/abs/0909.2036). S. Franco, I. R. Klebanov, and D. Rodriguez-Gomez, “[M2-branes on Orbifolds of the Cone over Q\*\*1,1,1]{},” [*JHEP*]{} [**0908**]{} (2009) 033, [[0903.3231]{}](http://arXiv.org/abs/0903.3231). F. Benini, C. Closset, and S. 
Cremonesi, “[Chiral flavors and M2-branes at toric CY4 singularities]{},” [*JHEP*]{} [**1002**]{} (2010) 036, [[0911.4127]{}](http://arXiv.org/abs/0911.4127). K. Hristov, A. Tomasiello, and A. Zaffaroni, “[Supersymmetry on Three-dimensional Lorentzian Curved Spaces and Black Hole Holography]{},” [[1302.5228]{}](http://arXiv.org/abs/1302.5228). L. Andrianopoli [*et al.*]{}, “[General Matter Coupled N=2 Supergravity]{},” [*Nucl. Phys.*]{} [**B476**]{} (1996) 397–417, [[hep-th/9603004]{}](http://arXiv.org/abs/hep-th/9603004). L. Andrianopoli [*et al.*]{}, “[N = 2 supergravity and N = 2 super Yang-Mills theory on general scalar manifolds: Symplectic covariance, gaugings and the momentum map]{},” [*J. Geom. Phys.*]{} [**23**]{} (1997) 111–189, [[hep-th/9605032]{}](http://arXiv.org/abs/hep-th/9605032). B. de Wit, H. Samtleben, and M. Trigiante, “[Magnetic charges in local field theory]{},” [*JHEP*]{} [**09**]{} (2005) 016, [[hep-th/0507289]{}](http://arXiv.org/abs/hep-th/0507289). S. Ferrara and S. Sabharwal, “Quaternionic manifolds for type ii superstring vacua of calabi-yau spaces,” [*Nucl. Phys.*]{} [**B332**]{} (1990) 317. K. Hristov, C. Toldo, and S. Vandoren, “[On BPS bounds in D=4 N=2 gauged supergravity]{},” [*JHEP*]{} [**1112**]{} (2011) 014, [[1110.2688]{}](http://arXiv.org/abs/1110.2688). B. de Wit, H. Nicolai, and N. P. Warner, “[The Embedding Of Gauged N=8 Supergravity Into d = 11 Supergravity]{},” [*Nucl. Phys.*]{} [**B255**]{} (1985) 29. H. Nicolai and K. Pilch, “[Consistent Truncation of d = 11 Supergravity on AdS$_4 \times S^7$]{},” [*JHEP*]{} [**1203**]{} (2012) 099, [[1112.6131]{}](http://arXiv.org/abs/1112.6131). F. Benini and N. Bobev, “[Two-dimensional SCFTs from wrapped branes and c-extremization]{},” [[1302.4451]{}](http://arXiv.org/abs/1302.4451). P. Szepietowski, “[Comments on a-maximization from gauged supergravity]{},” [*JHEP*]{} [**1212**]{} (2012) 018, [[1209.3025]{}](http://arXiv.org/abs/1209.3025). P. Karndumri and E. 
O Colgain, “[Supergravity dual of c-extremization]{},” [*Phys. Rev.*]{} [**D 87**]{} (2013) 101902, [[1302.6532]{}](http://arXiv.org/abs/1302.6532). N. Halmagyi, M. Petrini, and A. Zaffaroni, “[work in progress]{},”. [^1]: To be precise, the black holes we are discussing will asymptotically approach $AdS_4$ in the UV but will differ by non-normalizable terms corresponding to some magnetic charge. We will nevertheless refer to them as asymptotically $AdS_4$ black holes. [^2]: Other M-theory reductions have been studied in [@Donos:2010ax; @Cassani:2011fu] and similar reductions have been performed in type IIA/IIB, see for example [@Kashani-Poor:2006si; @KashaniPoor:2007tr; @Gauntlett:2010vu; @Skenderis:2010vz; @Cassani:2010uw; @Liu:2010pq; @Bena:2010pr; @Cassani:2010na] [^3]: For a discussion of these compactifications from the point of view of holography and recent results in identifying the dual field theories see[@Fabbri:1999hw; @Jafferis:2008qz; @Hanany:2008cd; @Martelli:2008rt; @Hanany:2008fj; @Martelli:2009ga; @Franco:2009sp; @Benini:2009qs]. [^4]: For a recent discussion from the point of view of holography see [@Hristov:2013spa]. [^5]: For the models studied in this paper, this also implies $\omhat_\mu^x=0$ in [^6]: This is always possible when the gauging is abelian [@deWit:2005ub]. [^7]: We slightly abuse notation by often refering to the components of $z^i$ as $(v_i,b_i)$. This is not meant to imply that the metric has been used to lower the index. [^8]: One can also identify holographically the exact R-symmetry [@Szepietowski:2012tb; @Karndumri:2013iqa]. [^9]: There is a factor of $\sqrt{2}$ between $A^\Lam$ here and in [@Cassani:2012pj], see footnote 10 of that paper. 
[^10]: For example $\sig(v_1^2b_2)=v_1^2b_2+v_2^2b_1+v_3^2b_2+v_1^2b_3+v_2^2b_3+v_3^2b_1$ and $\sig(v_1v_2)=2(v_1v_2+v_2v_3+v_1v_3)$ [^11]: An $Sp( 2 + 2 n_{\rm V}, \mathbb{R})$ transformation of the sections (X\^, F\_) (\^, \_) = A& B\ C & D (X\^, F\_) , acts on the period matrix $\cN_{\Lambda \Sigma}$ by a fractional transformation \_ (X, F) \_ (, ) = ( C + D \_ (X, F)) (A + B \_ (X, F))\^[-1]{} .
--- abstract: 'The number $R(4,3,3)$ is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. This paper presents a methodology based on *abstraction* and *symmetry breaking* that applies to solve hard graph edge-coloring problems. The utility of this methodology is demonstrated by using it to compute the value $R(4,3,3)=30$. Along the way it is required to first compute the previously unknown set ${{\cal R}}(3,3,3;13)$ consisting of 78,892 Ramsey colorings.' author: - Michael Codish - Michael Frank - Avraham Itzhakov - Alice Miller title: 'Computing the Ramsey Number R(4,3,3) using Abstraction and Symmetry breaking[^1]' --- Introduction {#sec:intro} ============ This paper introduces a general methodology that applies to solve graph edge-coloring problems and demonstrates its application in the search for Ramsey numbers. These are notoriously hard graph coloring problems that involve assigning colors to the edges of a complete graph. An $(r_1,\ldots,r_k;n)$ Ramsey coloring is a graph coloring in $k$ colors of the complete graph $K_n$ that does not contain a monochromatic complete sub-graph $K_{r_i}$ in color $i$ for each $1\leq i\leq k$. The set of all such colorings is denoted ${{\cal R}}(r_1,\ldots,r_k;n)$. The Ramsey number $R(r_1,\ldots,r_k)$ is the least $n>0$ such that no $(r_1,\ldots,r_k;n)$ coloring exists. In particular, the number $R(4,3,3)$ is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. It is currently known that $30\leq R(4,3,3)\leq 31$. Kalbfleisch [@kalb66] proved in 1966 that $R(4,3,3)\geq 30$, Piwakowski [@Piwakowski97] proved in 1997 that $R(4,3,3)\leq 32$, and one year later Piwakowski and Radziszowski [@PR98] proved that $R(4,3,3)\leq 31$. We demonstrate how our methodology applies to computationally prove that $R(4,3,3)=30$. 
Our strategy to compute $R(4,3,3)$ is based on the search for a $(4,3,3;30)$ Ramsey coloring. If one exists, then because $R(4,3,3)\leq 31$, it follows that $R(4,3,3) = 31$. Otherwise, because $R(4,3,3)\geq 30$, it follows that $R(4,3,3) = 30$. In recent years, Boolean SAT solving techniques have improved dramatically. Today’s SAT solvers are considerably faster and able to manage larger instances than were previously possible. Moreover, encoding and modeling techniques are better understood and increasingly innovative. SAT is currently applied to solve a wide variety of hard and practical combinatorial problems, often outperforming dedicated algorithms. The general idea is to encode a hard (typically NP) problem instance, $\mu$, as a Boolean formula, $\varphi_\mu$, such that the satisfying assignments of $\varphi_\mu$ correspond to the solutions of $\mu$. Given such an encoding, a SAT solver can be applied to solve $\mu$. Our methodology in this paper combines SAT solving with two additional concepts: *abstraction* and *symmetry breaking*. The paper is structured to let the application drive the presentation of the methodology in three steps. Section \[sec:prelim\] presents: preliminaries on graph coloring problems, some general notation on graphs, and a simple constraint model for Ramsey coloring problems. Section \[sec:embed\] presents the first step in our quest to compute $R(4,3,3)$. We introduce a basic SAT encoding and detail how a SAT solver is applied to search for Ramsey colorings. Then we describe and apply a well known embedding technique, which allows us to determine a set of partial solutions in the search for a $(4,3,3;30)$ Ramsey coloring such that if a coloring exists then it is an extension of one of these partial solutions. This may be viewed as a preprocessing step for a SAT solver which then starts from a partial solution. 
Applying this technique we conclude that if a $(4,3,3;30)$ Ramsey coloring exists then it must be ${\langle 13,8,8 \rangle}$ regular. Namely, each vertex in the coloring must have 13 edges in the first color, and 8 edges in each of the other two colors. This result is already considered significant progress in the research on Ramsey numbers, as stated in [@XuRad2015]. To further apply this technique to determine if there exists a ${\langle 13,8,8 \rangle}$ regular $(4,3,3;30)$ Ramsey coloring requires first computing the currently unknown set ${{\cal R}}(3,3,3;13)$. Sections \[sec:symBreak\]--\[sec:33313b\] present the second step: computing ${{\cal R}}(3,3,3;13)$. Section \[sec:symBreak\] illustrates how a straightforward approach, combining SAT solving with *symmetry breaking*, works for smaller instances but not for ${{\cal R}}(3,3,3;13)$. Then Section \[sec:abs\] introduces an *abstraction*, called degree matrices, Section \[sec:33313\] demonstrates how to compute degree matrices for ${{\cal R}}(3,3,3;13)$, and Section \[sec:33313b\] shows how to use the degree matrices to compute ${{\cal R}}(3,3,3;13)$. Section \[sec:433\_30\] presents the third step, re-examining the embedding technique described in Section \[sec:embed\], which, given the set ${{\cal R}}(3,3,3;13)$, applies to prove that there does not exist any $(4,3,3;30)$ Ramsey coloring which is also ${\langle 13,8,8 \rangle}$ regular. Section \[sec:conclude\] presents a conclusion. Preliminaries and Notation {#sec:prelim} ========================== In this paper, graphs are always simple, i.e. undirected and with no self loops. For a natural number $n$ let $[n]$ denote the set $\{1,2,\ldots,n\}$. A graph coloring, in $k$ colors, is a pair $(G,\kappa)$ consisting of a simple graph $G=(V,E)$ and a mapping $\kappa\colon E\to[k]$. When $G$ is clear from the context we refer to $\kappa$ as the graph coloring. 
We typically represent $G=([n],E)$ as a (symmetric) $n\times n$ adjacency matrix, $A$, defined such that $$A_{i,j}= \begin{cases} \kappa((i,j)) & \mbox{if } (i,j) \in E\\ 0 & \mbox{otherwise} \end{cases}$$ Given a graph coloring $(G,\kappa)$ in $k$ colors with $G=(V,E)$, the set of neighbors of a vertex $u\in V$ in color $c\in [k]$ is $N_c(u) = {\left\{~v \left| \begin{array}{l}(u,v)\in E, \kappa((u,v))=c\end{array} \right. \right\}} $ and the color-$c$ degree of $u$ is $deg_{c}(u) = |N_c(u)|$. The color degree tuple of $u$ is the $k$-tuple $deg(u)={\langle deg_{1}(u),\ldots,deg_{k}(u) \rangle}$. The sub-graph of $G$ on the $c$ colored neighbors of $x\in V$ is the projection of $G$ to vertices in $N_c(x)$ defined by $G^c_x = (N_c(x),{\left\{~(u,v)\in E \left| \begin{array}{l}u,v\in N_c(x)\end{array} \right. \right\}})$. For example, take as $G$ the graph coloring depicted by the adjacency matrix in Figure \[embed\_12\_8\_8\] with $u$ the vertex corresponding to the first row in the matrix. Then, $N_1(u) = \{2,3,4,5,6,7,8,9,10,11,12,13\}$, $N_2(u) = \{14,15,16,17,18,19,20,21\}$, and $N_3(u)=\{22,23,24,25,26,27,28,29\}$. The subgraphs $G^1_u$, $G^2_u$, and $G^3_u$ are highlighted by the boldface text in Figure \[embed\_12\_8\_8\]. An $(r_1,\ldots,r_k;n)$ Ramsey coloring is a graph coloring in $k$ colors of the complete graph $K_n$ that does not contain a monochromatic complete sub-graph $K_{r_i}$ in color $i$ for each $1\leq i\leq k$. The set of all such colorings is denoted ${{\cal R}}(r_1,\ldots,r_k;n)$. The Ramsey number $R(r_1,\ldots,r_k)$ is the least $n>0$ such that no $(r_1,\ldots,r_k;n)$ coloring exists. In the multicolor case ($k>2$), the only known value of a nontrivial Ramsey number is $R(3,3,3)=17$. Prior to this paper, it was known that $30\leq R(4,3,3)\leq 31$. 
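As a sanity check of these definitions, the Ramsey-coloring property can be tested by brute force in a few lines (our illustration, not the paper's constraint model; feasible only for very small $n$):

```python
# Brute-force check that a k-coloring of K_n, given as an adjacency
# matrix with off-diagonal entries in 1..k, is an (r_1,...,r_k;n) Ramsey
# coloring: for each color c, no r_c vertices may be pairwise joined in c.
from itertools import combinations

def is_ramsey_coloring(A, rs):
    n = len(A)
    for c, r in enumerate(rs, start=1):
        for I in combinations(range(n), r):
            if all(A[i][j] == c for i, j in combinations(I, 2)):
                return False  # found a monochromatic K_{r_c} in color c
    return True

# The classic witness for R(3,3) > 5: color the edges of K_5 with the
# pentagon in color 1 and the pentagram in color 2.
pentagon = [[0 if i == j else (1 if (i - j) % 5 in (1, 4) else 2)
             for j in range(5)] for i in range(5)]
```

Since both color classes are 5-cycles, `is_ramsey_coloring(pentagon, (3, 3))` holds, witnessing $R(3,3)>5$.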
Moreover, while the sets of $(3,3,3;n)$ colorings were known for $14\leq n\leq 16$, the set of colorings for $n=13$ was never published.[^2] More information on recent results concerning Ramsey numbers can be found in the electronic dynamic survey by Radziszowski [@Rad]. $$\begin{aligned} \varphi_{adj}^{n,k}(A) &=& \hspace{-2mm}\bigwedge_{1\leq q<r\leq n} \left(\begin{array}{l} 1\leq A_{q,r}\leq k ~~\land~~ A_{q,r} = A_{r,q} ~~\land ~~ A_{q,q} = 0 \end{array}\right) \label{constraint:simple} \\ \varphi_{r}^{n,c}(A) &=& \bigwedge_{I\in \wp_r([n])} \bigvee {\left\{~A_{i,j}\neq c \left| \begin{array}{l}i,j \in I, i<j\end{array} \right. \right\}} \label{constraint:nok}\end{aligned}$$ $$\begin{aligned} \small \label{constraint:coloring} \varphi_{(r_1,\ldots,r_k;n)}(A) & = & \varphi_{adj}^{n,k}(A) \land \hspace{-2mm} \bigwedge_{1\leq c\leq k} \hspace{-1mm} \varphi_{r_c}^{n,c}(A)\end{aligned}$$ A graph coloring problem on $k$ colors is about the search for a graph coloring which satisfies a given set of constraints. Formally, it is specified as a formula, $\varphi(A)$, where $A$ is an $n\times n$ adjacency matrix of integer variables with domain $\{0\}\cup [k]$ and $\varphi$ is a constraint on these variables. A solution is an assignment of integer values to the variables in $A$ which satisfies $\varphi$ and determines both the graph edges and their colors. We often refer to a solution as an integer adjacency matrix and denote the set of solutions as $sol(\varphi(A))$. Figure \[fig:gcp\] presents the $k$-color graph coloring problems we focus on in this paper: $(r_1,\ldots,r_k;n)$ Ramsey colorings. Constraint (\[constraint:simple\]), $\varphi_{adj}^{n,k}(A)$, states that the graph represented by matrix $A$ has $n$ vertices, is $k$ colored, and is simple. Constraint (\[constraint:nok\]) $\varphi_{r}^{n,c}(A)$ states that the $n\times n$ matrix $A$ has no embedded sub-graph $K_r$ in color $c$. 
Each conjunct, one for each set $I$ of $r$ vertices, is a disjunction stating that one of the edges between vertices of $I$ is not colored $c$. Notation: $\wp_r(S)$ denotes the set of all subsets of size $r$ of the set $S$. Constraint (\[constraint:coloring\]) states that $A$ is a $(r_1,\ldots,r_k;n)$ Ramsey coloring. For graph coloring problems, solutions are typically closed under permutations of vertices and of colors. Restricting the search space for a solution modulo such permutations is crucial when trying to solve hard graph coloring problems. It is standard practice to formalize this in terms of graph (coloring) isomorphism. Let $G=(V,E)$ be a graph (coloring) with $V=[n]$ and let $\pi$ be a permutation on $[n]$. Then $\pi(G) = (V,{\left\{~ (\pi(x),\pi(y)) \left| \begin{array}{l} (x,y) \in E\end{array} \right. \right\}})$. Permutations act on adjacency matrices in the natural way: If $A$ is the adjacency matrix of a graph $G$, then $\pi(A)$ is the adjacency matrix of $\pi(G)$ and $\pi(A)$ is obtained by simultaneously permuting with $\pi$ both rows and columns of $A$. \[def:weak\_iso\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be $k$-color graph colorings with $G=([n],E_1)$ and $H=([n],E_2)$. We say that $(G,{\kappa_1})$ and $(H,{\kappa_2})$ are weakly isomorphic, denoted $(G,{\kappa_1})\approx(H,{\kappa_2})$ if there exist permutations $\pi \colon [n] \to [n]$ and $\sigma \colon [k] \to [k]$ such that $(u,v) \in E_1 \iff (\pi(u),\pi(v)) \in E_2$ and $\kappa_1((u,v)) = \sigma(\kappa_2((\pi(u), \pi(v))))$. We denote such a weak isomorphism: $(G,{\kappa_1})\approx_{\pi,\sigma}(H,{\kappa_2})$. When $\sigma$ is the identity permutation, we say that $(G,{\kappa_1})$ and $(H,{\kappa_2})$ are isomorphic. The following lemma emphasizes the importance of weak graph isomorphism as it relates to Ramsey numbers. Many classic coloring problems exhibit the same property. 
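For tiny graphs, Definition \[def:weak\_iso\] can be checked directly by enumerating both permutations. The following sketch (ours, exponential in $n$ and $k$, meant only as an executable restatement of the definition) does exactly that:

```python
# Brute-force weak-isomorphism test: search for a vertex permutation pi
# and a color permutation sigma such that
#   kappa1((u,v)) = sigma(kappa2((pi(u), pi(v)))).
# Matrices have entries in 0..k, with 0 meaning "no edge" (fixed by sigma).
from itertools import permutations

def weakly_isomorphic(A, B, k):
    n = len(A)
    for pi in permutations(range(n)):
        for sigma in permutations(range(1, k + 1)):
            s = {0: 0}
            s.update({c: sigma[c - 1] for c in range(1, k + 1)})
            if all(s[B[pi[i]][pi[j]]] == A[i][j]
                   for i in range(n) for j in range(n)):
                return True
    return False
```

For example, the two colorings of $K_3$ that differ only by swapping the two colors are weakly isomorphic, although not isomorphic.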
\[lemma:closed\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be graph colorings in $k$ colors such that $(G,\kappa_1) \approx_{\pi,\sigma} (H,\kappa_2)$. Then, $$(G,\kappa_1) \in {{\cal R}}(r_1,r_2,\ldots,r_k;n) \iff (H,\kappa_2) \in {{\cal R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$$ We make use of the following theorem from [@PR98]. \[thm:433\] $30\leq R(4,3,3)\leq 31$ and, $R(4,3,3)=31$ if and only if there exists a $(4,3,3;30)$ coloring $\kappa$ of $K_{30}$ such that: (1) For every vertex $v$ and $i\in\{2,3\}$, $5\leq deg_{i}(v)\leq 8$, and $13\leq deg_{1}(v)\leq 16$. (2) Every edge in the third color has at least one endpoint $v$ with $deg_{3}(v)=13$. (3) There are at least 25 vertices $v$ for which $deg_{1}(v)=13$, $deg_{2}(v)=deg_{3}(v)=8$. \[cor:degrees\] Let $G=(V,E)$ be a $(4,3,3;30)$ coloring, $v\in V$ a selected vertex, and assume without loss of generality that $deg_2(v)\geq deg_3(v)$. Then, $deg(v)\in{\left\{ \begin{array}{l}{\langle 13, 8, 8 \rangle},{\langle 14, 8, 7 \rangle},{\langle 15, 7, 7 \rangle},{\langle 15, 8, 6 \rangle},{\langle 16, 7, 6 \rangle},{\langle 16, 8, 5 \rangle}\end{array} \right\}}$. Consider a vertex $v$ in a $(4,3,3;n)$ coloring and focus on the three subgraphs induced by the neighbors of $v$ in each of the three colors. The following states that these must be corresponding Ramsey colorings. \[obs:embed\] Let $G$ be a $(4,3,3;n)$ coloring and $v$ be any vertex with $deg(v)={\langle d_1,d_2,d_3 \rangle}$. Then, $d_1+d_2+d_3=n-1$ and $G^1_v$, $G^2_v$, and $G^3_v$ are respectively $(3,3,3;d_1)$, $(4,2,3;d_2)$, and $(4,3,2;d_3)$ colorings. Note that by definition a $(4,2,3;n)$ coloring is a $(4,3;n)$ Ramsey coloring in colors 1 and 3 and likewise a $(4,3,2;n)$ Ramsey coloring is a $(4,3;n)$ coloring in colors 1 and 2. This is because the “2” specifies that the coloring does not contain a subgraph $K_2$ in the corresponding color and this means that it contains no edge with that color. 
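On small instances, both the defining condition of a Ramsey coloring (no monochromatic $K_{r_c}$ in color $c$) and weak isomorphism (Definition \[def:weak\_iso\]) can be checked by brute force. The following Python sketch illustrates both; the function names are ours and not part of the tool chain used in the paper:

```python
from itertools import combinations, permutations

def is_ramsey_coloring(A, arities):
    """No monochromatic K_{r_c} in color c, for a complete-graph
    coloring A (symmetric matrix over colors 1..k, zero diagonal)."""
    n = len(A)
    for c, r in enumerate(arities, start=1):
        for I in combinations(range(n), r):
            if all(A[i][j] == c for i, j in combinations(I, 2)):
                return False  # found a K_r embedded in color c
    return True

def weakly_isomorphic(A, B, k):
    """Brute-force search for pi (vertices) and sigma (colors) as in
    the definition of weak isomorphism; feasible only for tiny n."""
    n = len(A)
    for pi in permutations(range(n)):
        for sigma in permutations(range(1, k + 1)):
            relabel = {0: 0, **{c: sigma[c - 1] for c in range(1, k + 1)}}
            if all(A[u][v] == relabel[B[pi[u]][pi[v]]]
                   for u, v in combinations(range(n), 2)):
                return True
    return False

# 5-cycle in color 1, its complement (also a 5-cycle) in color 2
C5 = [[0 if i == j else 1 if (i - j) % 5 in (1, 4) else 2
       for j in range(5)] for i in range(5)]
```

Since $R(3,3)=6$, colorings such as `C5` exist for $n=5$ but none exists for $n=6$.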
For $n\in\{14,15,16\}$, the sets ${{\cal R}}(3,3,3;n)$ are known and consist respectively of 115, 2, and 2 colorings. Similarly, for $n\in\{5,6,7,8\}$ the sets ${{\cal R}}(4,3;n)$ are known and consist respectively of 9, 15, 9, and 3 colorings. In this paper computations are performed using the CryptoMiniSAT [@Crypto] SAT solver. SAT encodings (CNF) are obtained using the finite-domain constraint compiler  [@jair2013]. The use of  facilitates applications to find a single (first) solution, or to find all solutions for a constraint, modulo a specified set of variables. When solving for all solutions, our implementation iterates with the SAT solver, adding so-called *blocking clauses* each time another solution is found. This technique, originally due to McMillan [@McMillan2002], is simplistic but suffices for our purposes. All computations were performed on a cluster with a total of $228$ Intel E8400 cores clocked at 2 GHz each, able to run a total of $456$ parallel threads. Each of the cores in the cluster has computational power comparable to a core on a standard desktop computer. Each SAT instance is run on a single thread. Basic SAT Encoding and Embeddings {#sec:embed} ================================= Throughout the paper we apply a SAT solver to solve CNF encodings of constraints such as those presented in Figure \[fig:gcp\]. In this way it is straightforward to find a Ramsey coloring or prove its non-existence. Ours is a standard encoding to CNF; there is nothing new here. For an $n$-vertex graph coloring problem in $k$ colors we take an $n\times n$ matrix $A$ where $A_{i,j}$ represents in $k$ bits the edge $(i,j)$ in the graph: exactly one bit is true indicating which color the edge takes, or no bit is true indicating that the edge $(i,j)$ is not in the graph. Already at the representation level, we use the same Boolean variables to represent the color in $A_{i,j}$ and in $A_{j,i}$ for each $1\leq i<j\leq n$.
We further fix the variables corresponding to $A_{i,i}$ to ${\mathit{false}}$. The rest of the SAT encoding is straightforward. Constraint (\[constraint:simple\]) is encoded to CNF by introducing clauses to state that for each $A_{i,j}$ with $1\leq i<j\leq n$ at most one of the $k$ bits representing the color of the edge $(i,j)$ is true. In our setting typically $k=3$. For three colors, if $b_1,b_2,b_3$ are the bits representing the color of an edge, then three clauses suffice: $(\bar b_1\lor \bar b_2),(\bar b_1\lor \bar b_3),(\bar b_2\lor \bar b_3)$. Constraint (\[constraint:nok\]) is encoded by a single clause per set $I$ of $r$ vertices expressing that at least one of the bits corresponding to an edge between vertices in $I$ does not have color $c$. Finally Constraint (\[constraint:coloring\]) is a conjunction of constraints of the previous two forms. In Section \[sec:symBreak\] we will improve on this basic encoding by introducing symmetry breaking constraints (encoded to CNF). However, for now we note that, even with symmetry breaking constraints, using the basic encoding, a SAT solver is currently not able to solve any of the open Ramsey coloring problems such as those considered in this paper. In particular, directly applying a SAT solver to search for a $(4,3,3;30)$ Ramsey coloring is hopeless. To facilitate the search for $(4,3,3;30)$ Ramsey coloring using a SAT encoding, we apply a general approach where, when seeking a $(r_1,\ldots,r_k;n)$ Ramsey coloring one selects a “preferred” vertex, call it $v_1$, and based on its degrees in each of the $k$ colors, embeds $k$ subgraphs which are corresponding smaller colorings. Using this approach, we apply Corollary \[cor:degrees\] and Observation \[obs:embed\] to establish that a $(4,3,3;30)$ coloring, if one exists, must be ${\langle 13,8,8 \rangle}$ regular. Specifically, all vertices must have 13 edges in the first color and 8 each, in the second and third colors. 
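The two clause-level ingredients described earlier — the pairwise at-most-one encoding and the blocking-clause loop for all-solutions enumeration — can be sketched in Python as follows. The brute-force `first_model` is only a toy stand-in for a real SAT solver such as CryptoMiniSAT, and all names are ours:

```python
from itertools import combinations, product

def at_most_one(bits):
    # pairwise encoding: one binary clause per pair of color bits
    return [[-a, -b] for a, b in combinations(bits, 2)]

def first_model(clauses, n):
    # toy stand-in for a SAT solver: scan all 2^n assignments
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

def all_models(clauses, n):
    # iterate the solver, blocking each model found with a new clause
    clauses, found = list(clauses), []
    while (m := first_model(clauses, n)) is not None:
        found.append(m)
        clauses.append([-(i + 1) if v else (i + 1) for i, v in enumerate(m)])
    return found
```

For $k=3$ color bits, `at_most_one` yields exactly the three clauses $(\bar b_1\lor \bar b_2),(\bar b_1\lor \bar b_3),(\bar b_2\lor \bar b_3)$ quoted above.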
This result is considered significant progress in the research on Ramsey numbers [@XuRad2015]. This “embedding” approach is often applied in the Ramsey number literature where the process of completing (or trying to complete) a partial solution (an embedding) to a Ramsey coloring is called *gluing*. See for example the presentations in [@PiwRad2001; @FKRad2004; @PR98]. \[thm:regular\] Any $(4,3,3;30)$ coloring, if one exists, is ${\langle 13,8,8 \rangle}$ regular. By computation as described in the rest of this section. We seek a $(4,3,3;30)$ coloring of $K_{30}$, represented as a $30\times 30$ adjacency matrix $A$. Let $v_1$ correspond to the first row in $A$ with $deg(v_1)={\langle d_1,d_2,d_3 \rangle}$ as prescribed by Corollary \[cor:degrees\]. For each possible triplet ${\langle d_1,d_2,d_3 \rangle}$, except ${\langle 13,8,8 \rangle}$, we take each of the known corresponding colorings for the subgraphs $G^1_{v_1}$, $G^2_{v_1}$, and $G^3_{v_1}$ and embed them into $A$. We then apply a SAT solver to (try to) complete the remaining cells in $A$ to satisfy $\varphi_{4,3,3;30}(A)$ as defined by Constraint (\[constraint:coloring\]) of Figure \[fig:gcp\]. If the SAT solver fails, then no such completion exists. To illustrate the approach, consider the case where $deg(v_1)={\langle 14,8,7 \rangle}$. Figure \[embed\_14\_8\_7\] details one of the embeddings corresponding to this case. The first row and column of $A$ specify the colors of the edges of the 29 neighbors of $v_1$ (in bold). The symbol “$\_$” indicates an integer variable that takes a value between 1 and 3. The neighbors of $v_1$ in color 1 form a submatrix of $A$ embedded in rows (and columns) 2–15 of the matrix in the Figure. By Observation \[obs:embed\] these are a $(3,3,3;14)$ Ramsey coloring and there are 115 possible such colorings modulo weak isomorphism. The Figure details one of them.
Similarly, there are 3 possible $(4,2,3;8)$ colorings which are subgraphs for the neighbors of $v_1$ in color 2. In Figure \[embed\_14\_8\_7\], rows (and columns) 16–23 detail one such coloring. Finally, there are 9 possible $(4,3,2;7)$ colorings which are subgraphs for the neighbors of $v_1$ in color 3. In Figure \[embed\_14\_8\_7\], rows (and columns) 24–30 detail one such coloring. To summarize, Figure \[embed\_14\_8\_7\] is a partially instantiated adjacency matrix. The first row determines the degrees of $v_1$, in the three colors, and 3 corresponding subgraphs are embedded. The uninstantiated values in the matrix must be completed to obtain a solution that satisfies $\varphi_{4,3,3;30}(A)$ as specified in Constraint (\[constraint:coloring\]) of Figure \[fig:gcp\]. This can be determined using a SAT solver. For the specific example in Figure \[embed\_14\_8\_7\], the CNF generated using our tool set consists of 33[,]{}959 clauses, involves 5[,]{}318 Boolean variables, and is shown to be unsatisfiable in 52 seconds of computation time. For the case where $v_1$ has degrees ${\langle 14,8,7 \rangle}$ in the three colors this is one of $115\times 3\times 9 = 3105$ instances that need to be checked. Table \[table:regular\] summarizes the experiment which proves Theorem \[thm:regular\]. For each of the possible degrees of vertex 1 in a $(4,3,3;30)$ coloring as prescribed by Corollary \[cor:degrees\], except ${\langle 13,8,8 \rangle}$, and for each possible choice of colorings for the derived subgraphs $G^1_{v_1}$, $G^2_{v_1}$, and $G^3_{v_1}$, we apply a SAT solver to show that the instance $\varphi_{(4,3,3;30)}(A)$ of Constraint (\[constraint:coloring\]) of Figure \[fig:gcp\] cannot be satisfied. The table details, for each degree triple, the number of instances, their average size (number of clauses and Boolean variables), and the average and total times to show that the constraint is not satisfiable.

| $v_1$ degrees | \# instances | \# clauses (avg.) | \# vars (avg.) | unsat (avg) | unsat (total) |
|---------------|-------------------|-------------------|----------------|-------------|---------------|
| (16,8,5) | 54 = 2\*3\*9 | 32432 | 5279 | 51 sec. | 0.77 hrs. |
| (16,7,6) | 270 = 2\*9\*15 | 32460 | 5233 | 420 sec. | 31.50 hrs. |
| (15,8,6) | 90 = 2\*3\*15 | 33607 | 5450 | 93 sec. | 2.32 hrs. |
| (15,7,7) | 162 = 2\*9\*9 | 33340 | 5326 | 1554 sec. | 69.94 hrs. |
| (14,8,7) | 3105 = 115\*3\*9 | 34069 | 5324 | 294 sec. | 253.40 hrs. |

: Proving that any $(4,3,3;30)$ Ramsey coloring is ${\langle 13,8,8 \rangle}$ regular (summary).[]{data-label="table:regular"}

All of the SAT instances described in the experiment summarized by Table \[table:regular\] are unsatisfiable. The solver reports “unsat”. To gain confidence in our implementation, we illustrate its application on a satisfiable instance: to find a $(4,3,3;29)$ coloring, which is known to exist. This experiment involves some reverse engineering. In 1966 Kalbfleisch [@kalb66] reported the existence of a circulant $(3,4,4;29)$ coloring. Encoding instance $\varphi_{(4,3,3;29)}(A)$ of Constraint (\[constraint:coloring\]) together with a constraint that states that the adjacency matrix $A$ is circulant, results in a CNF with 146[,]{}506 clauses and 8[,]{}394 variables. Using a SAT solver, we obtain a corresponding $(4,3,3;29)$ coloring in less than two seconds of computation time. The solution is ${\langle 12,8,8 \rangle}$ regular and isomorphic to the adjacency matrix depicted as Figure \[embed\_12\_8\_8\]. Now we apply the embedding approach. We take the partial solution (the boldface elements) corresponding to the three subgraphs: $G^1_{v_1}$, $G^2_{v_1}$ and $G^3_{v_1}$ which are respectively $(3,3,3;12)$, $(4,2,3;8)$ and $(4,3,2;8)$ Ramsey colorings. Applying a SAT solver to complete this partial solution to a $(4,3,3;29)$ coloring satisfying Constraint (\[constraint:coloring\]) involves a CNF with 30[,]{}944 clauses and 4[,]{}736 variables and requires under two hours of computation time.
Figure \[embed\_12\_8\_8\] portrays the solution (the gray elements). To apply the embedding approach described in this section to determine if there exists a $(4,3,3;30)$ Ramsey coloring which is ${\langle 13,8,8 \rangle}$ regular would require access to the set ${{\cal R}}(3,3,3;13)$. We defer this discussion until after Section \[sec:33313b\] where we describe how we compute the set of all 78[,]{}892 $(3,3,3;13)$ Ramsey colorings modulo weak isomorphism. Symmetry Breaking: Computing ${{\cal R}}(r_1,\ldots,r_k;n)$ {#sec:symBreak} =========================================================== In this section we prepare the ground to apply a SAT solver to find the set of all $(r_1,\ldots,r_k;n)$ Ramsey colorings modulo weak isomorphism. The constraints are those presented in Figure \[fig:gcp\] and their encoding to CNF is as described in Section \[sec:embed\]. Our final aim is to compute the set of all $(3,3,3;13)$ colorings modulo weak isomorphism. Then we can apply the embedding technique of Section \[sec:embed\] to determine the existence of a ${\langle 13,8,8 \rangle}$ regular $(4,3,3;30)$ Ramsey coloring. Given Theorem \[thm:regular\], this will determine the value of $R(4,3,3)$. Solving hard search problems on graphs, and graph coloring problems in particular, relies heavily on breaking symmetries in the search space. When searching for a graph, the names of the vertices do not matter, and restricting the search modulo graph isomorphism is highly beneficial. When searching for a graph coloring, on top of graph isomorphism, solutions are typically closed under permutations of the colors: the names of the colors do not matter and the term often used is “weak isomorphism” [@PR98] (the equivalence relation is weaker because both node names and edge colors do not matter). When the problem is to compute the set of all solutions modulo (weak) isomorphism the task is even more challenging. 
Often one first attempts to compute all the solutions of the coloring problem, and to then apply one of the available graph isomorphism tools, such as `nauty` [@nauty], to select representatives of their equivalence classes modulo (weak) isomorphism. This is a *generate and test* approach. However, typically the number of solutions is so large that this approach is doomed to fail even though the number of equivalence classes itself is much smaller. The problem is that tools such as `nauty` apply after, and not during, generation. To this end, we follow [@CodishMPS14] where Codish [[*et al.*]{}]{} show that the symmetry breaking approach of [@DBLP:conf/ijcai/CodishMPS13] holds also for graph coloring problems where the adjacency matrix consists of integer variables. This is a *constrain and generate* approach. But, as symmetry breaking does not break all symmetries, it is still necessary to perform some reduction using a tool like `nauty`.[^3] This form of symmetry breaking is an important component in our methodology. **[@DBLP:conf/ijcai/CodishMPS13].** \[def:SBlexStar\] Let $A$ be an $n\times n$ adjacency matrix. Then, $$\label{eq:symbreak} {\textsf{sb}}^*_\ell(A) = \bigwedge{\left\{~A_{i}\preceq_{\{i,j\}}A_{j} \left| \begin{array}{l}i<j\end{array} \right. \right\}}$$ where $A_{i}\preceq_{\{i,j\}}A_{j}$ denotes the lexicographic order between the $i^{th}$ and $j^{th}$ rows of $A$ (viewed as strings) omitting the elements at positions $i$ and $j$ (in both rows). We omit the precise details of how Constraint (\[eq:symbreak\]) is encoded to CNF. In our implementation this is performed by the finite domain constraint compiler  and details can be found in [@jair2013]. Table \[tab:333n1\] illustrates the impact of the symmetry breaking Constraint (\[eq:symbreak\]) on the search for the Ramsey colorings required in the proof of Theorem \[thm:regular\].
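Operationally, Constraint (\[eq:symbreak\]) is easy to state as a check on a concrete matrix. A minimal Python sketch (the function name is ours), useful for testing whether a given adjacency matrix satisfies the symmetry break:

```python
def sb_lex_star(A):
    """Check sb*_l(A): for all i < j, row i is lexicographically <= row j
    once the entries at positions i and j are omitted from both rows."""
    n = len(A)
    for i in range(n):
        for j in range(i + 1, n):
            ri = [A[i][t] for t in range(n) if t not in (i, j)]
            rj = [A[j][t] for t in range(n) if t not in (i, j)]
            if ri > rj:  # Python compares lists lexicographically
                return False
    return True
```

Of two isomorphic adjacency matrices, the predicate can hold for one and fail for the other; this is exactly how it prunes the search space.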
The first four rows in the table portray the required instances of the forms $(4,3,2;n)$ and $(4,2,3;n)$ which by definition correspond to $(4,3;n)$ colorings (respectively in colors 1 and 3, and in colors 1 and 2). The next three rows correspond to $(3,3,3;n)$ colorings where $n\in\{14,15,16\}$. The last row illustrates our failed attempt to apply a SAT encoding to compute ${{\cal R}}(3,3,3;13)$. The first column in the table specifies the instance. The column headed by “\#${\setminus}_{\approx}$” specifies the known (except for the last row) number of colorings modulo weak isomorphism [@Rad]. The columns headed by “vars” and “clauses” indicate the numbers of variables and clauses in the corresponding CNF encodings of the coloring problems with and without the symmetry breaking Constraint (\[eq:symbreak\]). The columns headed by “time” indicate the time (in seconds) to find all colorings iterating with a SAT solver. The timeout assumed here is 24 hours. The column headed by “\#” specifies the number of colorings found by iterated SAT solving. In the first four rows, notice the impact of symmetry breaking which reduces the number of solutions by 1–3 orders of magnitude. In the next three rows the reduction is more acute. Without symmetry breaking the colorings cannot be computed within the 24 hour timeout. The sets of colorings obtained with symmetry breaking have been verified to reduce, using `nauty` [@nauty], to the known number of colorings modulo weak isomorphism indicated in the second column.
A degree sequence is a monotonic nonincreasing sequence of the vertex degrees of a graph. A graphic sequence is a sequence which can be the degree sequence of some graph. The idea underlying our approach is that when the combinatorial problem at hand is too hard, then possibly solving an abstraction of the problem is easier. In this case, a solution of the abstract problem can be used to facilitate the search for a solution of the original problem. \[def:dm\] Let $A$ be a graph coloring on $n$ vertices with $k$ colors. The *degree matrix* of $A$, denoted $dm(A)$, is an $n\times k$ matrix $M$ such that $M_{i,j} = deg_j(i)$ is the degree of vertex $i$ in color $j$. Figure \[fig:dm\] illustrates the degree matrix of the graph coloring given as Figure \[embed\_12\_8\_8\]. The three columns correspond to the three colors and the 29 rows to the 29 vertices. The degree matrix consists of 29 identical rows as the corresponding graph coloring is ${\langle 12,8,8 \rangle}$ regular. A degree matrix $M$ represents the set of graphs $A$ such that $dm(A)=M$. Due to properties of weak-isomorphism (vertices as well as colors can be reordered) we can exchange both rows and columns of a degree matrix without changing the set of graphs it represents. In the rest of our construction we adopt a representation in which the rows and columns of a degree matrix are sorted lexicographically. For an $n\times k$ degree matrix $M$ we denote by $lex(M)$ the smallest matrix with rows and columns in the lexicographic order (non-increasing) obtained by permuting rows and columns of $M$. \[def:abs\] Let $A$ be a graph coloring on $n$ vertices with $k$ colors. The *abstraction* of $A$ to a degree matrix is $\alpha(A)=lex(dm(A))$. For a set ${{\cal A}}$ of graph colorings we denote $\alpha({{\cal A}}) = {\left\{~\alpha(A) \left| \begin{array}{l}A\in{{\cal A}}\end{array} \right. \right\}}$. Note that if $A$ and $A'$ are weakly isomorphic, then $\alpha(A)=\alpha(A')$.
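Definitions \[def:dm\] and \[def:abs\] translate directly into code. In the sketch below, `abstraction` is a simplified stand-in for $\alpha$ that minimizes over column (color) permutations and sorts rows non-increasingly, rather than reproducing the exact $lex$ canonicalization; the names are ours:

```python
from itertools import permutations

def degree_matrix(A, k):
    """dm(A): entry (i, c-1) is the number of color-c edges at vertex i."""
    n = len(A)
    return [[sum(1 for j in range(n) if A[i][j] == c)
             for c in range(1, k + 1)] for i in range(n)]

def abstraction(A, k):
    """Simplified alpha(A): smallest row-sorted matrix over all column
    permutations of dm(A)."""
    M = degree_matrix(A, k)
    return min(sorted(([row[c] for c in cols] for row in M), reverse=True)
               for cols in permutations(range(k)))
```

On a coloring whose color classes are swapped, `abstraction` returns the same matrix, illustrating invariance under weak isomorphism.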
\[def:conc\] Let $M$ be an $n\times k$ degree matrix. Then, $\gamma(M) = {\left\{~A \left| \begin{array}{l}\alpha(A)=M\end{array} \right. \right\}}$ is the set of graph colorings represented by $M$. For a set ${{\cal M}}$ of degree matrices we denote $\gamma({{\cal M}}) = \cup{\left\{~\gamma(M) \left| \begin{array}{l}M\in{{\cal M}}\end{array} \right. \right\}}$. Let $\varphi(A)$ be a graph coloring problem in $k$ colors on an $n\times n$ adjacency matrix, $A$. Our strategy to compute ${{\cal A}}=sol(\varphi(A))$ is to first compute an over-approximation ${{\cal M}}$ of degree matrices such that $\gamma({{\cal M}})\supseteq{{\cal A}}$ and to then use ${{\cal M}}$ to guide the computation of ${{\cal A}}$. We denote the set of solutions of the graph coloring problem, $\varphi(A)$, which have a given degree matrix, $M$, by $sol_M(\varphi(A))$. Then $$\begin{aligned} \label{eq:approx} sol(\varphi(A)) &=& \bigcup_{M\in{{\cal M}}} sol_M(\varphi(A))\\ \label{eq:solM} sol_M(\varphi(A)) & = & sol(\varphi(A)\wedge\alpha(A){=}M)\end{aligned}$$ Equation (\[eq:approx\]) implies that we can compute the solutions to a graph coloring problem $\varphi(A)$ by computing the independent sets $sol_M(\varphi(A))$ for any over-approximation ${{\cal M}}$ of the degree matrices of the solutions of $\varphi(A)$. This facilitates the computation for two reasons: (1) The problem is now broken into a set of independent sub-problems for each $M\in{{\cal M}}$ which can be solved in parallel, and (2) The computation of each individual $sol_M(\varphi(A))$ is now directed using $M$. The constraint $\alpha(A){=}M$ in the right side of Equation (\[eq:solM\]) is encoded to SAT by introducing (encodings of) cardinality constraints. For each row of the matrix $A$ the corresponding row in $M$ specifies the number of elements with value $c$ (for $1\leq c\leq k$) that must be in that row. We omit the precise details of the encoding to CNF.
In our implementation this is performed by the finite domain constraint compiler  and details can be found in [@jair2013]. When computing $sol_M(\varphi(A))$ for a given degree matrix we can no longer apply the symmetry breaking Constraint (\[eq:symbreak\]) as it might constrain the rows of $A$ in a way that contradicts the constraint $\alpha(A)=M$ in the right side of Equation (\[eq:solM\]). However, we can refine Constraint (\[eq:symbreak\]) to break symmetries on the rows of $A$ only when the corresponding rows in $M$ are equal. Then $M$ can be viewed as inducing an ordered partition of $A$ and Constraint (\[eq:sbdm\]) is, in the terminology of [@DBLP:conf/ijcai/CodishMPS13], a partitioned lexicographic symmetry break. In the following, $M_i$ and $M_j$ denote the $i^{th}$ and $j^{th}$ rows of matrix $M$. $$\label{eq:sbdm} {\textsf{sb}}^*_\ell(A,M) = \bigwedge_{i<j} \left(\begin{array}{l} \big(M_i=M_j\Rightarrow A_i\preceq_{\{i,j\}} A_j\big) \end{array}\right)$$ The following refines Equation (\[eq:solM\]) by introducing the symmetry breaking predicate. $$\label{eq:scenario1} sol_M(\varphi(A)) = sol(\varphi(A)\wedge (\alpha(A){=}M) \wedge{\textsf{sb}}^*_\ell(A,M))$$ To justify that Equations (\[eq:solM\]) and (\[eq:scenario1\]) both compute $sol_M(\varphi(A))$, modulo weak isomorphism, we must show that if ${\textsf{sb}}^*_\ell(A,M)$ excludes a solution then there is another weakly isomorphic solution that is not excluded. \[thm:sbl\_star\] Let $A$ be an adjacency matrix with $\alpha(A) = M$. Then, there exists $A'\approx A$ such that $\alpha(A')=M$ and ${\textsf{sb}}^{*}_\ell(A',M)$ holds. Computing Degree Matrices for $R(3,3,3;13)$ {#sec:33313} =========================================== This section describes how we compute a set of degree matrices that approximate those of the solutions of instance $\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]). We apply a strategy mixing SAT solving with brute-force enumeration as follows.
The computation of the degree matrices is summarized in Table \[tab:333\_computeDMs\]. In the first step, we compute bounds on the degrees of the nodes in any $R(3,3,3;13)$ coloring. \[lemma:db\] Let $A$ be an $R(3,3,3;13)$ coloring. Then, for every vertex $x$ in $A$ and color $c\in\{1,2,3\}$, $2\leq deg_{c}(x)\leq 5$. By solving instance $\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]) seeking a graph with some degree less than 2 or greater than 5. The CNF encoding is of size 13[,]{}672 clauses with 2[,]{}748 Boolean variables and takes under 15 seconds to solve, yielding an UNSAT result which implies that such a graph does not exist. In the second step, we enumerate the degree sequences with values within the bounds specified by Lemma \[lemma:db\]. Recall that the degree sequence of an undirected graph is the non-increasing sequence of its vertex degrees. Not every non-increasing sequence of integers corresponds to a degree sequence. A sequence that corresponds to a degree sequence is said to be graphical. The number of degree sequences of graphs with 13 vertices is 836[,]{}315 (see Sequence number `A004251` of The On-Line Encyclopedia of Integer Sequences published electronically at <http://oeis.org>). However, when the degrees are bounded by Lemma \[lemma:db\] there are only 280. \[lemma:ds\] There are 280 degree sequences with values between $2$ and $5$. Straightforward enumeration using the algorithm of Erd[ö]{}s and Gallai [@ErdosGallai1960]. In the third step, we test the 280 degree sequences identified by Lemma \[lemma:ds\] to determine which of them might occur as the left column in a degree matrix. \[lemma:ds2\] Let $A$ be an $R(3,3,3;13)$ coloring and let $M=\alpha(A)$. Then, (a) the left column of $M$ is one of the 280 degree sequences identified in Lemma \[lemma:ds\]; and (b) there are only 80 degree sequences from the 280 which are the left column of $\alpha(A)$ for some coloring $A$ in $R(3,3,3;13)$.
By solving instance $\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]): for each degree sequence from Lemma \[lemma:ds\], we seek a solution with that degree sequence in the first color. This involves 280 instances with average CNF size: 10861 clauses and 2215 Boolean variables. The total solving time is 375.76 hours and the hardest instance required about 50 hours. Exactly 80 of these instances were satisfiable. In the fourth step, we extend the 80 degree sequences identified in Lemma \[lemma:ds2\] to obtain all possible degree matrices. \[lemma:dm\] Given the 80 degree sequences identified in Lemma \[lemma:ds2\] as potential left columns of a degree matrix, there are 11[,]{}933 possible degree matrices. By enumeration. For a degree matrix: the rows and columns are lex sorted, the rows must sum to 12, and the columns must be graphical (when sorted). We enumerate all such degree matrices and then select their smallest representatives under permutations of rows and columns. The computation requires a few seconds. In the fifth step, we test the 11[,]{}933 degree matrices identified by Lemma \[lemma:dm\] to determine which of them are the abstraction of some $R(3,3,3;13)$ coloring. \[lemma:dm2\] From the 11[,]{}933 degree matrices identified in Lemma \[lemma:dm\], 999 are $\alpha(A)$ for a coloring $A$ in ${{\cal R}}(3,3,3;13)$. By solving instance $\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]) together with a given degree matrix to test if it is satisfiable. This involves 11[,]{}933 instances with average CNF size: 7632 clauses and 1520 Boolean variables. The total solving time is 126.55 hours and the hardest instance required 0.88 hours.
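The enumeration behind Lemma \[lemma:ds\] is easy to reproduce: of the 560 non-increasing sequences of length 13 with entries in $\{2,\ldots,5\}$ (the bounds of Lemma \[lemma:db\]), exactly 280 pass the Erdős–Gallai test. A minimal Python sketch (names are ours):

```python
from itertools import combinations_with_replacement

def graphical(seq):
    """Erdos-Gallai test: is the non-increasing sequence seq the degree
    sequence of some graph?"""
    if sum(seq) % 2:
        return False
    n = len(seq)
    return all(sum(seq[:k]) <= k * (k - 1) + sum(min(d, k) for d in seq[k:])
               for k in range(1, n + 1))

# candidate left columns of a degree matrix: length 13, degrees in 2..5
# (listing values in decreasing order makes the tuples non-increasing)
sequences = [s for s in combinations_with_replacement((5, 4, 3, 2), 13)
             if graphical(s)]
```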
| Step | | Time | \#Vars | \#Clauses |
|------|------------------------------------------------------------------------------|--------------------|------------|------------|
| 1 | compute degree bounds (Lemma \[lemma:db\]) (1 instance, unsat) | | 2748 | 13672 |
| 2 | enumerate 280 possible degree sequences (Lemma \[lemma:ds\]) | | | |
| 3 | test degree sequences (Lemma \[lemma:ds2\]) (280 instances: 200 unsat, 80 sat) | 16.32 hrs., hardest: 1.34 hrs | 1215 (avg) | 7729 (avg) |
| 4 | enumerate 11[,]{}933 degree matrices (Lemma \[lemma:dm\]) | | | |
| 5 | test degree matrices (Lemma \[lemma:dm2\]) (11[,]{}933 instances: 10[,]{}934 unsat, 999 sat) | 126.55 hrs., hardest: 0.88 hrs. | 1520 (avg) | 7632 (avg) |

: Computing the degree matrices for ${{\cal R}}(3,3,3;13)$ step by step.[]{data-label="tab:333_computeDMs"}

Computing ${{\cal R}}(3,3,3;13)$ from Degree Matrices {#sec:33313b} ===================================================== We describe the computation of the set ${{\cal R}}(3,3,3;13)$ starting from the 999 degree matrices identified in Lemma \[lemma:dm2\]. Table \[tab:333\_times\] summarizes the two-step experiment.

| Step | | Time |
|------|------------------------------------------------------------------------------------------------|--------------------|
| 1 | compute all $(3,3,3;13)$ Ramsey colorings per degree matrix (999 instances, 129[,]{}188 solutions) | total: 136.31 hr., hardest: 4.3 hr. |
| 2 | reduce modulo $\approx$ (78[,]{}892 solutions) | |

: Computing ${{\cal R}}(3,3,3;13)$ step by step.[]{data-label="tab:333_times"}

#### **step 1:** For each degree matrix we compute, using a SAT solver, all corresponding solutions of Equation (\[eq:scenario1\]), where $\varphi(A)=\varphi_{(3,3,3;13)}(A)$ of Constraint (\[constraint:coloring\]) and $M$ is one of the 999 degree matrices identified in Lemma \[lemma:dm2\]. This generates in total 129[,]{}188 $(3,3,3;13)$ Ramsey colorings. Table \[tab:333\_times\] details the total solving time for these instances and the solving time for the hardest instance.
The largest number of graphs generated by a single instance is 3720. #### **step 2:** The 129[,]{}188 $(3,3,3;13)$ colorings from step 1 are reduced modulo weak-isomorphism using `nauty` [@nauty]. This process results in a set with 78[,]{}892 graphs. We note that recently, the set ${{\cal R}}(3,3,3;13)$ has also been computed independently by Stanislaw Radziszowski, and independently by Richard Kramer and Ivan Livinsky [@stas:personalcommunication]. There is no ${\langle 13,8,8 \rangle}$ Regular $(4,3,3;30)$ Coloring {#sec:433_30} ==================================================================== In order to prove that there is no ${\langle 13,8,8 \rangle}$ regular $(4,3,3;30)$ coloring using the embedding approach of Section \[sec:embed\], we need to check that $78{,}892\times 3\times 3 = 710{,}028$ corresponding instances are unsatisfiable. These correspond to the elements in the cross product of ${{\cal R}}(3,3,3;13)$, ${{\cal R}}(4,2,3;8)$ and ${{\cal R}}(4,3,2;8)$. $\left\{ \fbox{$\begin{scriptsize}\begin{smallmatrix} 0 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ 1 & 0 & 3 & 3 & 1 & 1 & 3 & 3 \\ 1 & 3 & 0 & 3 & 1 & 3 & 1 & 3 \\ 1 & 3 & 3 & 0 & 3 & 3 & 1 & 1 \\ 3 & 1 & 1 & 3 & 0 & 3 & 3 & 1 \\ 3 & 1 & 3 & 3 & 3 & 0 & 1 & 1 \\ 3 & 3 & 1 & 1 & 3 & 1 & 0 & 3 \\ 3 & 3 & 3 & 1 & 1 & 1 & 3 & 0 \end{smallmatrix}\end{scriptsize}$}, \fbox{$\begin{scriptsize}\begin{smallmatrix} 0 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ 1 & 0 & 3 & 3 & 1 & 3 & 3 & 3 \\ 1 & 3 & 0 & 3 & 3 & 1 & 1 & 3 \\ 1 & 3 & 3 & 0 & 3 & 1 & 3 & 1 \\ 3 & 1 & 3 & 3 & 0 & 1 & 1 & 3 \\ 3 & 3 & 1 & 1 & 1 & 0 & 3 & 3 \\ 3 & 3 & 1 & 3 & 1 & 3 & 0 & 1 \\ 3 & 3 & 3 & 1 & 3 & 3 & 1 & 0 \end{smallmatrix}\end{scriptsize}$}, \fbox{$\begin{scriptsize}\begin{smallmatrix} 0 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ 1 & 0 & 3 & 3 & 1 & 3 & 3 & 3 \\ 1 & 3 & 0 & 3 & 3 & 1 & 1 & 3 \\ 1 & 3 & 3 & 0 & 3 & 1 & 3 & 1 \\ 3 & 1 & 3 & 3 & 0 & 1 & 3 & 3 \\ 3 & 3 & 1 & 1 & 1 & 0 & 3 & 3 \\ 3 & 3 & 1 & 3 & 3 & 3 & 0 & 1 \\ 3 & 3 & 3 & 1 & 3 & 3 & 1 & 0
\end{smallmatrix}\end{scriptsize}$}\right\} \subseteq \left\{ \fbox{$\begin{scriptsize}\begin{smallmatrix} 0 & 1 & 1 & 1 & 3 & 3 & 3 & 3 \\ 1 & 0 & 3 & 3 & 1 & {\mathtt{A}}& 3 & 3 \\ 1 & 3 & 0 & 3 & {\mathtt{A}}& {\mathtt{B}}& 1 & 3 \\ 1 & 3 & 3 & 0 & 3 & {\mathtt{B}}& {\mathtt{A}}& 1 \\ 3 & 1 & {\mathtt{A}}& 3 & 0 & {\mathtt{B}}& {\mathtt{C}}& {\mathtt{A}}\\ 3 & {\mathtt{A}}& {\mathtt{B}}& {\mathtt{B}}& {\mathtt{B}}& 0 & {\mathtt{A}}& {\mathtt{A}}\\ 3 & 3 & 1 & {\mathtt{A}}& {\mathtt{C}}& {\mathtt{A}}& 0 & {\mathtt{B}}\\ 3 & 3 & 3 & 1 & {\mathtt{A}}& {\mathtt{A}}& {\mathtt{B}}& 0 \\ \end{smallmatrix}\end{scriptsize}$} \left| \begin{scriptsize}\begin{array}{l} {\tiny {\mathtt{A}},{\mathtt{B}},{\mathtt{C}}\in\{1,3\}} \\ {\mathtt{A}}\neq {\mathtt{B}}\end{array}\end{scriptsize} \right.\right\}$ To decrease the number of instances by a factor of $9$, we approximate the three $(4,2,3;8)$ colorings by a single description as demonstrated in Figure \[figsubsumer\]. The constrained matrix on the right has four solutions which include the three $(4,2,3;8)$ colorings on the left. We apply a similar approach for the $(4,3,2;8)$ colorings. So, in fact, we have a total of only $78{,}892$ embedding instances to consider. In addition to the constraints in Figure \[fig:gcp\], we add constraints to specify that each row of the adjacency matrix has the prescribed number of edges in each color (13, 8 and 8). By application of a SAT solver, we have determined all $78{,}892$ instances to be unsatisfiable. The average size of an instance is 36[,]{}259 clauses with 5187 variables. The total solving time is 128.31 years (running in parallel on 456 threads). The average solving time is 14 hours while the median is 4 hours. Only 797 instances took more than one week to solve. The worst-case solving time is 96.36 days. The two hardest instances are detailed in Appendix \[apdx:hardest\].
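The claim of Figure \[figsubsumer\] can be verified mechanically: substituting ${\mathtt{A}},{\mathtt{B}},{\mathtt{C}}\in\{1,3\}$ with ${\mathtt{A}}\neq{\mathtt{B}}$ into the constrained matrix yields exactly four matrices, among them the three $(4,2,3;8)$ colorings. In the Python sketch below the matrices are transcribed row by row as strings (the names are ours):

```python
from itertools import product

# the three (4,2,3;8) colorings on the left of Figure [figsubsumer]
COLORINGS = [
    "01113333 10331133 13031313 13303311 31130331 31333011 33113103 33311130",
    "01113333 10331333 13033113 13303131 31330113 33111033 33131301 33313310",
    "01113333 10331333 13033113 13303131 31330133 33111033 33133301 33313310",
]

# the constrained matrix on the right, with A, B, C as placeholders
TEMPLATE = ("01113333 10331A33 1303AB13 13303BA1 "
            "31A30BCA 3ABBB0AA 331ACA0B 3331AAB0")

def solutions(template):
    """All instantiations with A, B, C in {1, 3} and A != B."""
    return {template.replace("A", a).replace("B", b).replace("C", c)
            for a, b, c in product("13", repeat=3) if a != b}
```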
Table \[hpi\] specifies, in the second column, the total number of instances that can be shown unsatisfiable within the time specified in the first column. The third column indicates the increment in percentage (within 10 hours we solve 71.46%, within 20 hours we solve an additional 12.11%, etc.). The last rows in the table indicate that there are 4 instances which require between 1500 and 2000 hours of computation, and 2 that require between 2000 and 2400 hours. Conclusion {#sec:conclude} ========== We have applied SAT solving techniques together with a methodology using abstraction and symmetry breaking to construct a computational proof that the Ramsey number $R(4,3,3)=30$. Our strategy is based on the search for a $(4,3,3;30)$ Ramsey coloring, which we show does not exist. This implies that $R(4,3,3)\leq 30$ and hence, because of known bounds, that $R(4,3,3) = 30$. The precise value of $R(4,3,3)$ had remained unknown for almost 50 years. We expect this methodology to apply to a range of other hard graph coloring problems. The question of whether a computational proof constitutes a [*proper*]{} proof is a controversial one. Most famously, the issue caused much heated debate after publication of the computer proof of the Four Color Theorem [@appel76]. It is straightforward to justify an existence proof (i.e. a [*SAT*]{} result), as it is easy to verify that the witness produced satisfies the desired properties. Justifying an [*UNSAT*]{} result is more difficult. If nothing else, we are certainly required to add the proviso that our results are based on the assumption of a lack of bugs in the entire tool chain (constraint solver, SAT solver, C compiler, etc.) used to obtain them. Most modern SAT solvers support the option to generate a proof certificate for UNSAT instances (see e.g.
[@HeuleHW14]), in the DRAT format [@WetzlerHH14], which can then be checked by a theorem prover. This might be useful to prove the lack of bugs originating from the SAT solver but does not offer any guarantee concerning bugs in the generation of the CNF. Moreover, the DRAT certificates for an application like that described in this paper are expected to be of unmanageable size. Our proofs are based on two main “computer programs”. The first was applied to compute the set ${{\cal R}}(3,3,3;13)$ with its $78{,}892$ Ramsey colorings. The fact that at least two other groups of researchers (Stanislaw Radziszowski, and independently Richard Kramer and Ivan Livinsky) report having computed this set and quote [@stas:personalcommunication] the same number of elements is reassuring. The second program was applied to complete partially instantiated adjacency matrices, embedding smaller Ramsey colorings, to determine whether they can be extended to Ramsey colorings. This program was applied to show the non-existence of a $(4,3,3;30)$ Ramsey coloring. Here we gain confidence from the fact that the same program does find Ramsey colorings when they are known to exist; for example, it finds the $(4,3,3;29)$ coloring depicted in Figure \[embed\_12\_8\_8\]. All of the software used to obtain our results is publicly available, as well as the individual constraint models and their corresponding encodings to CNF. For details, see the appendix. Acknowledgments {#acknowledgments .unnumbered} --------------- We thank Stanislaw Radziszowski for his guidance and comments which helped improve the presentation of this paper. In particular, Stanislaw proposed to show that our technique is able to find the $(4,3,3;29)$ coloring depicted in Figure \[embed\_12\_8\_8\]. [10]{} K. Appel and W. Haken. Every map is four colourable. , 82:711–712, 1976. M. Codish, A. Miller, P. Prosser, and P. J. Stuckey. Breaking symmetries in graph representation. In F.
Rossi, editor, [*Proceedings of the 23rd International Joint Conference on Artificial Intelligence, Beijing, China*]{}. [IJCAI/AAAI]{}, 2013. M. Codish, A. Miller, P. Prosser, and P. J. Stuckey. Constraints for symmetry breaking in graph representation. Full version of [@DBLP:conf/ijcai/CodishMPS13] (in preparation)., 2014. P. Erd[ö]{}s and T. Gallai. Graphs with prescribed degrees of vertices (in [H]{}ungarian). , pages 264–274, 1960. Available from <http://www.renyi.hu/~p_erdos/1961-05.pdf>. S. E. Fettes, R. L. Kramer, and S. P. Radziszowski. An upper bound of 62 on the classical [R]{}amsey number r(3, 3, 3, 3). , 72, 2004. M. Heule, W. A. H. Jr., and N. Wetzler. Bridging the gap between easy generation and efficient verification of unsatisfiability proofs. , 24(8):593–607, 2014. J. G. Kalbfleisch. . PhD thesis, University of Waterloo, January 1966. B. McKay. *nauty* user’s guide (version 1.5). Technical Report TR-CS-90-02, Australian National University, Computer Science Department, 1990. K. L. McMillan. Applying [SAT]{} methods in unbounded symbolic model checking. In E. Brinksma and K. G. Larsen, editors, [*Computer Aided Verification, 14th International Conference, Proceedings*]{}, volume 2404 of [*Lecture Notes in Computer Science*]{}, pages 250–264. Springer, 2002. A. Metodi, M. Codish, and P. J. Stuckey. Boolean equi-propagation for concise and efficient [SAT]{} encodings of combinatorial problems. , 46:303–341, 2013. K. Piwakowski. On [R]{}amsey number r(4, 3, 3) and triangle-free edge-chromatic graphs in three colors. , 164(1-3):243–249, 1997. K. Piwakowski and S. P. Radziszowski. $30 \leq {R}(3,3,4) \leq 31$. , 27:135–141, 1998. K. Piwakowski and S. P. Radziszowski. Towards the exact value of the [R]{}amsey number r(3, 3, 4). In [*Proceedings of the 33-rd Southeastern International Conference on Combinatorics, Graph Theory, and Computing*]{}, volume 148, pages 161–167. Congressus Numerantium, 2001. <http://www.cs.rit.edu/~spr/PUBL/paper44.pdf>. S. P. 
Radziszowski. Personal communication. January, 2015. S. P. Radziszowski. Small [R]{}amsey numbers. , 1994. Revision \#14: January, 2014. M. Soos. , v2.5.1. <http://www.msoos.org/cryptominisat2>, 2010. D. Stolee. Canonical labelings with nauty. Computational Combinatorics (Blog), Entry from September 20, 2012. <http://computationalcombinatorics.wordpress.com> (viewed October 2015). N. Wetzler, M. Heule, and W. A. H. Jr. Drat-trim: Efficient checking and trimming using expressive clausal proofs. In C. Sinz and U. Egly, editors, [*Theory and Applications of Satisfiability Testing, 17th International Conference, Proceedings*]{}, volume 8561 of [*Lecture Notes in Computer Science*]{}, pages 422–429. Springer, 2014. X. Xu and S. P. Radziszowski. On some open questions for [R]{}amsey and [F]{}olkman numbers. , 2015. (to appear). Selected Proofs {#proofs} =============== **Lemma**  \[lemma:closed\].     \[**${{\cal R}}(r_1,r_2,\ldots,r_k;n)$ is closed under $\approx$**\] Let $(G,{\kappa_1})$ and $(H,{\kappa_2})$ be graph colorings in $k$ colors such that $(G,\kappa_1) \approx_{\pi,\sigma} (H,\kappa_2)$. Then, $$(G,\kappa_1) \in {{\cal R}}(r_1,r_2,\ldots,r_k;n) \iff (H,\kappa_2) \in {{\cal R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$$ Assume that $(G,\kappa_1) \in {{\cal R}}(r_1,r_2,\ldots,r_k;n)$ and in contradiction that $(H,\kappa_2) \notin {{\cal R}}(\sigma(r_1),\sigma(r_2),\ldots,\sigma(r_k);n)$. Let $R$ denote a monochromatic clique of size $r_s$ in $H$ and $R^{-1}$ the inverse of $R$ in $G$. From Definition \[def:weak\_iso\], $(u,v) \in R \iff (\pi^{-1}(u), \pi^{-1}(v))\in R^{-1}$ and $\kappa_2(u,v) = \sigma^{-1}(\kappa_1(u,v))$. Consequently $R^{-1}$ is a monochromatic clique of size $r_s$ in $(G,\kappa_1)$ in contradiction to $(G,\kappa_1)$ $\in$ ${{\cal R}}(r_1,r_2,\ldots,r_k;n)$. **Theorem**  \[thm:sbl\_star\].     \[**correctness of ${\textsf{sb}}^{*}_\ell(A,M)$**\] Let $A$ be an adjacency matrix with $\alpha(A) = M$. 
Then, there exists $A'\approx A$ such that $\alpha(A')=M$ and ${\textsf{sb}}^{*}_\ell(A',M)$ holds. Let $C={\left\{~A' \left| \begin{array}{l}A'\approx A \wedge \alpha(A')=M\end{array} \right. \right\}}$. Obviously $C\neq \emptyset$ because $A\in C$, and therefore there exists an $A_{min}=\min_{\preceq} C$. Therefore, $A_{min} \preceq A'$ for all $A' \in C$. Now we can view $M$ as inducing an ordered partition on $A$: vertices $u$ and $v$ are in the same component if and only if the corresponding rows of $M$ are equal. Relying on Theorem 4 from [@DBLP:conf/ijcai/CodishMPS13], we conclude that ${\textsf{sb}}^{*}_\ell(A_{min},M)$ holds. The Two Hardest Instances {#apdx:hardest} ========================= The following partial adjacency matrices are the two hardest instances described in Section \[sec:433\_30\], from the total of 78,892. Both include the constraints: ${\mathtt{A}},{\mathtt{B}},{\mathtt{C}}\in\{1,3\}$, ${\mathtt{D}},{\mathtt{E}},{\mathtt{F}}\in\{1,2\}$, ${\mathtt{A}}\neq {\mathtt{B}}$, ${\mathtt{D}}\neq {\mathtt{E}}$. The corresponding CNF representations consist of 5204 Boolean variables (each), with 36,626 clauses for the left instance and 36,730 for the right instance. SAT solving times to show these instances UNSAT are 8,325,246 seconds for the left instance and 7,947,257 for the right. Making the Instances Available ============================== The statistics from the proof that $R(4,3,3)=30$ are available from the domain: > <http://cs.bgu.ac.il/~mcodish/Benchmarks/Ramsey334>. Additionally, we have made a small sample (30) of the instances available. Here we provide instances with the degrees ${\langle 13,8,8 \rangle}$ in the three colors. The selected instances represent the varying hardness encountered during the search.
The instances numbered $\{27765$, $39710$, $42988$, $36697$, $13422$, $24578$, $69251$, $39651$, $43004$, $75280\}$ are the hardest, the instances numbered $\{4157$, $55838$, $18727$, $43649$, $26725$, $47522$, $9293$, $519$, $23526$, $29880\}$ are the median, and the instances numbered $\{78857$, $78709$, $78623$, $78858$, $28426$, $77522$, $45135$, $74735$, $75987$, $77387\}$ are the easiest. A complete set of both the BEE models and the DIMACS CNF files is available upon request. Note, however, that they weigh around 50GB when zipped. The files in [bee\_models.zip](bee_models.zip) detail the constraint models, each one in a separate file. The file named `r433_30_Instance#.bee` contains a single Prolog clause of the form > `model(Instance#,Map,ListOfConstraints) :- {...details...} .` where `Instance#` is the instance number, `Map` is a partially instantiated adjacency matrix associating the unknown adjacency matrix cells with variable names, and `ListOfConstraints` are the finite domain constraints defining their values. The syntax is that of BEE; however, the interested reader can easily convert these to their favorite finite domain constraint language. Note that the Boolean values ${\mathit{true}}$ and ${\mathit{false}}$ are represented in BEE by the constants $1$ and $-1$. Figure \[fig:bee\] details the BEE constraints which occur in the above-mentioned models.
----- ------------------------------------------------------ ------------- ------------------------------------------------------------------ (1) $\mathtt{new\_int(I,c_1,c_2)}$ declare integer: $\mathtt{c_1\leq I\leq c_2}$ (2) $\mathtt{bool\_array\_or([X_1,\ldots,X_n])}$ clause: $\mathtt{X_1 \vee X_2 \cdots \vee X_n}$ (3) $\mathtt{bool\_array\_sum\_eq([X_1,\ldots,X_n],~I)}$ Boolean cardinality: $\mathtt{(\Sigma ~X_i) = I}$ (4) $\mathtt{int\_eq\_reif(I_1,I_2,~X)}$ reified integer equality: $\mathtt{I_1 = I_2 \Leftrightarrow X}$ (5) $\mathtt{int\_neq(I_1,I_2)}$ $\mathtt{}$ $\mathtt{I_1 \neq I_2}$ (6) $\mathtt{int\_gt(I_1,I_2)}$ $\mathtt{}$ $\mathtt{I_1 > I_2}$ ----- ------------------------------------------------------ ------------- ------------------------------------------------------------------ The files in [cnf\_models.zip](cnf_models.zip) correspond to CNF encodings for the constraint models. Each instance is associated with two files: `r433_30_instance#.dimacs` and `r433_30_instance#.map`. These consist respectively in a DIMACS file and a map file which associates the Booleans in the DIMACS file with the integer variables in a corresponding partially instantiated adjacency matrix. The map file specifies for each pair $(i,j)$ of vertices a triplet $[B_1,B_2,B_3]$ of Boolean variables (or values) specifying the presence of an edge in each of the three colors. Each such $B_i$ is either the name of a DIMACS variable, if it is greater than 1, or a truth value $1$ (${\mathit{true}}$), or $-1$ (${\mathit{false}}$). [^1]: Supported by the Israel Science Foundation, grant 182/13. [^2]: Recently, the set ${{\cal R}}(3,3,3;13)$ has also been computed independently by: Stanislaw Radziszowski, Richard Kramer and Ivan Livinsky [@stas:personalcommunication]. [^3]: Note that `nauty` does not directly handle edge colored graphs and weak isomorphism directly. We applied an approach called $k$-layering described by Derrick Stolee [@Stolee].
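As an illustration of the map-file convention just described, the following sketch (plain Python; the helper name and the dictionary-based SAT assignment are our own, not part of the released tools) decodes one triplet $[B_1,B_2,B_3]$ into the color of an edge:

```python
def decode_triplet(triplet, assignment):
    """Decode one map-file triplet [B1, B2, B3] for a vertex pair (i, j).

    Each entry is either a DIMACS variable number (greater than 1), looked up
    in the SAT assignment, or a fixed truth value: 1 means true, -1 means
    false.  Returns the color in {1, 2, 3} whose entry is true.
    """
    true_colors = []
    for color, b in enumerate(triplet, start=1):
        if b == 1:
            value = True
        elif b == -1:
            value = False
        else:                       # b > 1: a DIMACS variable name
            value = assignment[b]
        if value:
            true_colors.append(color)
    assert len(true_colors) == 1    # exactly one color per edge
    return true_colors[0]

# Example: an edge fixed to color 1, and one whose color depends on variable 7.
print(decode_triplet([1, -1, -1], {}))         # -> 1
print(decode_triplet([-1, 7, -1], {7: True}))  # -> 2
```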
--- author: - | John Smith,$^{1\ast}$ Jane Doe,$^{1}$ Joe Scientist$^{2}$\ \ \ \ \ \ bibliography: - 'scibib.bib' title: 'A simple [*Science*]{} Template' --- This document presents a number of hints about how to set up your [*Science*]{} paper in LaTeX . We provide a template file, `scifile.tex`, that you can use to set up the LaTeX source for your article. An example of the style is the special `{sciabstract}` environment used to set up the abstract you see here. Introduction {#introduction .unnumbered} ============ In this file, we present some tips and sample mark-up to assure your LaTeX file of the smoothest possible journey from review manuscript to published [*Science*]{} paper. We focus here particularly on issues related to style files, citation, and math, tables, and figures, as those tend to be the biggest sticking points. Please use the source file for this document, `scifile.tex`, as a template for your manuscript, cutting and pasting your content into the file at the appropriate places. [*Science*]{}’s publication workflow relies on Microsoft Word. To translate LaTeX files into Word, we use an intermediate MS-DOS routine [@tth] that converts the TeX source into HTML. The routine is generally robust, but it works best if the source document is clean LaTeX without a significant freight of local macros or `.sty` files. Use of the source file `scifile.tex` as a template, and calling [*only*]{} the `.sty` and `.bst` files specifically mentioned here, will generate a manuscript that should be eminently reviewable, and yet will allow your paper to proceed quickly into our production flow upon acceptance [@use2e]. Formatting Citations {#formatting-citations .unnumbered} ==================== Citations can be handled in one of three ways. The most straightforward (albeit labor-intensive) would be to hardwire your citations into your LaTeX source, as you would if you were using an ordinary word processor. 
Thus, your code might look something like this:

> However, this record of the solar nebula may have been
> partly erased by the complex history of the meteorite
> parent bodies, which includes collision-induced shock,
> thermal metamorphism, and aqueous alteration
> ({\it 1, 2, 5--7\/}).

Compiled, the last two lines of the code above, of course, would give notecalls in [*Science*]{} style:

> …thermal metamorphism, and aqueous alteration ([*1, 2, 5–7*]{}).

Under the same logic, the author could set up his or her reference list as a simple enumeration,

> {\bf References and Notes}
>
> \begin{enumerate}
> \item G. Gamow, {\it The Constitution of Atomic Nuclei
> and Radioactivity\/} (Oxford Univ. Press, New York, 1931).
> \item W. Heisenberg and W. Pauli, {\it Zeitschr.\ f.\
> Physik\/} {\bf 56}, 1 (1929).
> \end{enumerate}

yielding

> [**References and Notes**]{}
>
> 1. G. Gamow, [*The Constitution of Atomic Nuclei and Radioactivity*]{} (Oxford Univ. Press, New York, 1931).
>
> 2. W. Heisenberg and W. Pauli, [*Zeitschr. f. Physik*]{} [**56**]{}, 1 (1929).

That’s not a solution that’s likely to appeal to everyone, however — especially not to users of BibTeX [@inclme]. If you are a BibTeX user, we suggest that you use the `Science.bst` bibliography style file and the `scicite.sty` package, both of which are downloadable from our author help site. [**While you can use BibTeX to generate the reference list, please don’t submit your .bib and .bbl files; instead, paste the generated .bbl file into the .tex file, creating a `{thebibliography}` environment.**]{} You can also generate your reference lists directly by using `{thebibliography}` at the end of your source document; here again, you may find the `scicite.sty` file useful. Whatever you use, be very careful about how you set up your in-text reference calls and notecalls. In particular, observe the following requirements: 1.
Please follow the style for references outlined at our author help site and embodied in recent issues of [*Science*]{}. Each citation number should refer to a single reference; please do not concatenate several references under a single number. 2. The reference numbering continues from the main text to the Supplementary Materials (e.g. this main text has references 1-3; the numbering of references in the Supplementary Materials should start with 4). 3. Please cite your references and notes in text [*only*]{} using the standard LaTeX `\cite` command, not another command driven by outside macros. 4. Please separate multiple citations within a single `\cite` command using commas only; there should be [*no space*]{} between reference keynames. That is, if you are citing two papers whose bibliography keys are `keyname1` and `keyname2`, the in-text cite should read `\cite{keyname1,keyname2}`, [*not*]{} `\cite{keyname1, keyname2}`. Failure to follow these guidelines could lead to the omission of the references in an accepted paper when the source file is translated to Word via HTML. Handling Math, Tables, and Figures {#handling-math-tables-and-figures .unnumbered} ================================== Following are a few things to keep in mind in coding equations, tables, and figures for submission to [*Science*]{}. #### In-line math. {#in-line-math. .unnumbered} The utility that we use for converting from LaTeX to HTML handles in-line math relatively well. It is best to avoid using built-up fractions in in-line equations, and going for the more boring “slash” presentation whenever possible — that is, for `$a/b$` (which comes out as $a/b$) rather than `$\frac{a}{b}$` (which compiles as $\frac{a}{b}$). Please do not code arrays or matrices as in-line math; display them instead. And please keep your coding as TeX-y as possible — avoid using specialized math macro packages like `amstex.sty`. #### Tables. {#tables. 
.unnumbered} The HTML converter that we use seems to handle reasonably well simple tables generated using the LaTeX`{tabular}` environment. For very complicated tables, you may want to consider generating them in a word processing program and including them as a separate file. #### Figures. {#figures. .unnumbered} Figure callouts within the text should not be in the form of LaTeX references, but should simply be typed in — that is, `(Fig. 1)` rather than `\ref{fig1}`. For the figures themselves, treatment can differ depending on whether the manuscript is an initial submission or a final revision for acceptance and publication. For an initial submission and review copy, you can use the LaTeX `{figure}` environment and the `\includegraphics` command to include your PostScript figures at the end of the compiled file. For the final revision, however, the `{figure}` environment should [*not*]{} be used; instead, the figure captions themselves should be typed in as regular text at the end of the source file (an example is included here), and the figures should be uploaded separately according to the Art Department’s instructions. What to Send In {#what-to-send-in .unnumbered} =============== What you should send to [*Science*]{} will depend on the stage your manuscript is in: - [**Important:**]{} If you’re sending in the initial submission of your manuscript (that is, the copy for evaluation and peer review), please send in [*only*]{} a PDF version of the compiled file (including figures). Please do not send in the TeX  source, `.sty`, `.bbl`, or other associated files with your initial submission. (For more information, please see the instructions at our Web submission site.) - When the time comes for you to send in your revised final manuscript (i.e., after peer review), we require that you include source files and generated files in your upload. [**The .tex file should include the reference list as an itemized list (see “Formatting citations” for the various options). 
The bibliography should not be in a separate file.**]{} Thus, if the name of your main source document is `ltxfile.tex`, you need to include: - `ltxfile.tex`. - `ltxfile.aux`, the auxiliary file generated by the compilation. - A PDF file generated from `ltxfile.tex`. Acknowledgments {#acknowledgments .unnumbered} =============== Include acknowledgments of funding, any patents pending, where raw data for the paper are deposited, etc. Supplementary materials {#supplementary-materials .unnumbered} ======================= Materials and Methods\ Supplementary Text\ Figs. S1 to S3\ Tables S1 to S4\ References *(4-10)* [**Fig. 1.**]{} Please do not use figure environments to set up your figures in the final (post-peer-review) draft, do not include graphics in your source code, and do not cite figures in the text using LaTeX `\ref` commands. Instead, simply refer to the figure numbers in the text per [*Science*]{} style, and include the list of captions at the end of the document, coded as ordinary paragraphs as shown in the `scifile.tex` template file. Your actual figure files should be submitted separately.
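As a minimal skeleton of the inline-bibliography setup recommended above (the keys and entries are just the examples from this guide, not real references):

```latex
% Paste the generated .bbl content (or hand-typed entries) directly into
% the .tex source; do not submit separate .bib or .bbl files.
\begin{thebibliography}{9}
\bibitem{keyname1} G. Gamow, {\it The Constitution of Atomic Nuclei
  and Radioactivity\/} (Oxford Univ. Press, New York, 1931).
\bibitem{keyname2} W. Heisenberg and W. Pauli, {\it Zeitschr.\ f.\
  Physik\/} {\bf 56}, 1 (1929).
\end{thebibliography}
% In the text, cite both works as \cite{keyname1,keyname2},
% with no space after the comma.
```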
--- abstract: 'In this paper we review the recent results on strangeness production measured by HADES in the Ar+KCl system at a beam energy of 1.756 AGeV. A detailed comparison of the measured hadron yields with the statistical model is also discussed.' address: - | GSI Helmholtz Centre for Heavy Ion Research GmbH Planckstrasse 1,\ D-64291 Darmstadt, GERMANY - 'Excellence Cluster ’Universe’, Technische Universität München, Boltzmannstr. 2, D-85748 Garching, Germany.' author: - 'J. Pietraszko [^1] L. Fabbietti (for the HADES collaboration)' title: Strangeness Production at SIS measured with HADES --- Hadron production in Ar+KCl collisions ====================================== The study of nuclear matter properties at high densities and temperatures is one of the main objectives in relativistic heavy-ion physics. In this context, the paramount aim of measuring the particle yields emitted from heavy-ion collisions is to learn about the K-N potential, the production mechanism of strange particles, or the nuclear equation of state. Strangeness production in relativistic heavy-ion collisions in the SIS/Bevalac energy range has been extensively studied by various groups, including experiments at the Bevalac[@bevelac_1] and at SIS, KaoS[@kaos_1], FOPI[@fopi_1], and in recent years also HADES.\ Recently, the HADES[@hades_nim] collaboration measured charged particle production in the Ar+KCl system[@hades_kaons] at 1.756 AGeV. Although HADES was designed primarily for di-electron measurements, it has also shown an excellent capability for the identification of a wide range of hadrons like $K^-$, $K^+$, K$^0$$\rightarrow$$\pi^+\pi^-$, $\Lambda$$\rightarrow$p$\pi^-$, $\phi$$\rightarrow$$K$$^+$$K$$^-$ and even $\Xi$$\rightarrow$$\Lambda$$\pi^-$. The measured yields and transverse mass slopes of kaons and $\Lambda$ particles have been found to be in good agreement with the results obtained by KaoS[@kaos_0] and FOPI[@fopi_0].
The dependencies of the measured $K^-/K^+$ ratio on the centrality and on the collision energy follow the systematics measured by KaoS [@hades_kaons] too.\ A combined and inclusive identification of $K^+K^-$ pairs and $\phi$ mesons was performed for the first time in the same experimental setup at a beam energy below the production thresholds for $K^-$ and $\phi$. The obtained $\phi/K^-$ ratio of 0.37$\pm$0.13 indicates that 18$\pm$7$\%$ of the $K^-$ yield stems from $\phi$ decays. Since the $\phi$ mesons reconstructed via the $K^+K^-$ channel are mainly those from decays occurring outside the nuclear medium, this value should be considered a lower limit. In addition, non-resonant $K^+K^-$ production can contribute to the measured $K^-$ yield. Unfortunately, this part is not known in heavy-ion collisions, but it has been measured to be about 50$\%$ of the overall $K^+K^-$ yield in elementary p+p collisions[@anke_1]. In this view, $K^-$ production in heavy-ion collisions at SIS energies cannot be explained exclusively by the strangeness exchange mechanism, and the processes mentioned above must also be taken into account to achieve a complete description. Comparison to statistical models ================================ The yields of the reconstructed hadron species have been extrapolated to the full solid angle and compared to the result of a fit with the statistical model THERMUS[@thermus_1], as shown in Fig. \[fig:thermus\]. The measured yields nicely agree with the results of the model, except for the $\Xi^-$. One should note that in this approach a good description of the $\phi$ meson yield is obtained without assuming any strangeness suppression (the net strangeness content of the $\phi$ is S=0).
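The feed-down figure quoted above can be checked with a one-line estimate: multiplying the measured $\phi/K^-$ ratio by the $\phi\rightarrow K^+K^-$ branching ratio (about 49%, a PDG value we supply here; it is not stated in the text) reproduces the quoted fraction.

```python
# Fraction of the K- yield stemming from phi decays:
# (phi/K- yield ratio) x BR(phi -> K+ K-).
phi_over_kminus = 0.37     # measured ratio (uncertainty 0.13)
br_phi_to_kk = 0.49        # approximate phi -> K+ K- branching ratio (assumed)
fraction = phi_over_kminus * br_phi_to_kk
print(round(fraction, 2))  # about 0.18, i.e. 18% of K- from phi decays
```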
This is very different from the situation at higher energies, where the $\phi$ meson does not behave as a strangeness-neutral object but rather as an object with net strangeness between 1 and 2 [@strgns_phi].\ ![Chemical freeze-out parameters obtained in the statistical thermal model (for details see [@cleymans_1]). The HADES point corresponds to Ar+KCl collisions at 1.756 AGeV.[]{data-label="fig:statmodel"}](thermus_fits1.10_publ.eps "fig:") ![](eovern_2009_to_be_publ.eps "fig:") The $\Xi^-$ baryon yields measured in heavy-ion collisions above the production threshold at RHIC[@rhic_1], SPS[@sps_1] and AGS[@ags_1] nicely agree with the statistical model predictions. On the contrary, the result of the first $\Xi^-$ measurement below the production threshold, published by HADES[@hades_ksi], shows a deviation of about an order of magnitude from the calculations (Fig. \[fig:thermus\]). Using the measured hadron multiplicities, the statistical model predicts that the chemical freeze-out of the Ar+KCl collision at 1.756 AGeV occurs at a temperature of T=73$\pm$6 MeV and a baryon chemical potential of $\mu$=770$\pm$43 MeV. A strangeness correlation radius[@hades_kaons] of R$_c=$2.4$\pm$0.8 fm was used, which is significantly smaller than the fireball radius R$_{fireball}$=4.9$\pm$1.4 fm.\ This result nicely follows the striking regularity shown by particle yields at all beam energies [@cleymans_1], as presented in Fig. \[fig:statmodel\].
For all available energies, starting from the highest at RHIC down to the lowest at SIS, the measured particle multiplicities are consistent with the assumption of chemical equilibrium which sets in at the end of the collision phase. Only two parameters (the temperature and the baryon chemical potential) are needed within a thermal-statistical model to describe the particle yields in a very systematic way at a given collision energy [@cleymans_1]. As one can also see, all experimental results are in good agreement with a fixed-energy-per-particle condition $\langle E \rangle / \langle N \rangle \approx 1\,\mathrm{GeV}$, which is one of the available freeze-out criteria[@cleymans_1]. The new HADES results on strangeness production shed new light on the kaon production mechanisms in heavy-ion collisions: in particular, they provide compelling evidence that the contribution of $\phi$ decays to the $K^-$ yield has to be taken into account as well. The measured hadron yields have in general been found to be in good agreement with statistical model predictions, except for the $\Xi^-$, which is produced far below the production threshold and shows a considerable deviation from the statistical model. The already performed p+p and p+Nb experiments at 3.5 GeV, and further planned HADES experiments with heavier systems like Au+Au, will deliver valuable new data on strangeness production. The on-going upgrade of the HADES spectrometer will increase its performance and capability, and the installed Forward Wall detector will allow for reaction plane reconstruction in all upcoming runs, making it possible to study kaon flow observables as well. [9]{} S. Schnetzer et al. Phys. Rev. Lett. 49 (1982) 989; Phys. Rev. C, 40 (1989) 640 C. Sturm et al. (KAOS) Phys. Rev. Lett. 86 (2001) 39 J. L. Ritman et al. (FOPI), Z. Phys. A 352 (1995) 355 G. Agakichiev et al. (HADES), Eur. Phys. J. A 41 (2009) 243 G. Agakichev et al., (HADES), in press Phys. Rev. C and arXiv:0902.3487 A. Förster et al. (KAOS) Phys. Rev.
C 75 (2007) 024906. M. Merschmeyer et al. (FOPI) Phys. Rev. C 76 (2007) 024906 Y. Maeda et al. (ANKE), Phys. Rev. C 77 (2008) 015204 S. Wheaton and J. Cleymans, hep-ph/0407174\ S. Wheaton and J. Cleymans, J. Phys. G31 (2005) S1069 I. Kraus et al., Phys. Rev. C76, 064903 (2007) J. Adams et al. (STAR), Phys. Rev. Lett. 98, 062301 (2007) F. Antinori et al. (NA57) Phys. Lett. B 595, 68 (2004)\ C. Alt et al. (NA49), Phys. Rev. C 78, 034918 (2008) P. Chung et al. (E895), Phys. Rev. Lett. 91, 202301 (2003) G. Agakishiev et al. (HADES) Phys. Rev. Lett. 103 (2009) 132301 J. Cleymans et al. Phys. Rev. C 73, 034905 (2006) and J. Cleymans private communication [^1]: e-mail: j.pietraszko@gsi.de
--- abstract: | We express the averages of products of characteristic polynomials for random matrix ensembles associated with compact symmetric spaces in terms of Jack polynomials or Heckman and Opdam’s Jacobi polynomials depending on the root system of the space. We also give explicit expressions for the asymptotic behavior of these averages in the limit as the matrix size goes to infinity. [**MSC-class**]{}: primary 15A52; secondary 33C52, 05E05.\ [**Keywords**]{}: characteristic polynomial, random matrix, Jacobi polynomial, Jack polynomial, Macdonald polynomial, compact symmetric space. author: - '<span style="font-variant:small-caps;">Sho MATSUMOTO</span> [^1]' title: '**Moments of characteristic polynomials for compact symmetric spaces and Jack polynomials**' --- Introduction ============ In recent years, there has been considerable interest in the averages of the characteristic polynomials of random matrices. This work is motivated by the connection with Riemann zeta functions and $L$-functions identified by Keating and Snaith [@KS_zetafunctions; @KS_Lfunctions]. The averages of the characteristic polynomials in the cases of compact classical groups and Hermitian matrix ensembles have already been calculated; see [@Mehta] and references in [@BG]. Among these studies, Bump and Gamburd [@BG] obtain simple proofs for the cases corresponding to compact classical groups by using symmetric polynomial theory. Our aim in this note is to use their technique to calculate averages of the characteristic polynomials for random matrix ensembles associated with compact symmetric spaces. We deal with the compact symmetric spaces $G/K$ classified by Cartan, where $G$ is a compact subgroup of $GL(N,{\mathbb{C}})$ for some positive integer $N$, and $K$ is a closed subgroup of $G$. Assume $G/K$ is realized as a subspace $S$ in $G$, i.e., $S \simeq G/K$; the probability measure $\dd M$ on $S$ is then induced from $G/K$.
We call the probability space $(S, \dd M)$ the random matrix ensemble associated with $G/K$. For example, $U(n)/O(n)$ is the symmetric space with a restricted root system of type A, and is realized by $S=\{M \in U(n) \ | \ M = \trans{M} \}$. Here $\trans{M}$ stands for the transposed matrix of $M$, while $U(n)$ and $O(n)$ denote the unitary and orthogonal groups of matrices of order $n$, respectively. The induced measure $\dd M$ on $S$ satisfies the invariance $\dd (H M \trans{H})= \dd M$ for any $H \in U(n)$. This random matrix ensemble $(S, \dd M)$ is well known as the circular orthogonal ensemble (COE for short), see e.g. [@Dyson; @Mehta]. We also consider the classical compact Lie groups $U(n)$, $SO(n)$, and $Sp(2n)$. Regarding these groups as symmetric spaces, the random matrix space $S$ is just the group itself with its Haar measure. The compact symmetric spaces studied by Cartan are divided into A and BC type main branches according to their root systems. There are three symmetric spaces of type A, with their corresponding matrix ensembles called the circular orthogonal, unitary, and symplectic ensembles. For these ensembles, the probability density functions (p.d.f.) for the eigenvalues are proportional to $$\Delta^{{\mathrm{Jack}}}(\bz;2/\beta)= \prod_{1 \le i<j \le n} |z_i -z_j|^{\beta},$$ with $\beta=1,2,4$, where $\bz =(z_1,\dots, z_n)$, with $|z_i|=1$, denotes the sequence of eigenvalues of the random matrix. We will express the average of the product of characteristic polynomials $\det(I+ xM)$ for a random matrix $M$ as a Jack polynomial ([@Mac Chapter VI-10]) of a rectangular-shaped Young diagram. Jack polynomials are orthogonal with respect to the weight function $\Delta^{{\mathrm{Jack}}}$. Our theorems are obtained in a simple algebraic way, and contain results given in [@KS_zetafunctions]. For compact symmetric spaces with root systems of type BC, the corresponding p.d.f.
is given by $$\Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3) = \prod_{1 \le i <j \le n} |1-z_i z_j^{-1}|^{2k_3} |1-z_i z_j|^{2k_3} \cdot \prod_{1 \le j \le n} |1-z_j|^{2k_1} |1-z_j^2|^{2k_2}.$$ Here the $k_i$’s denote multiplicities of roots in the root systems of the symmetric spaces. For example, the p.d.f. induced from the symmetric space $SO(4n+2)/(SO(4n+2) \cap Sp(4n+2))$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;2, \frac{1}{2},2)$. For this class of compact symmetric spaces, Heckman and Opdam’s Jacobi polynomials ([@Diejen; @Heckman]), which are orthogonal with respect to $\Delta^{{\mathrm{HO}}}$, will play the same role as Jack polynomials for type A cases. Namely, we will express the average of the product of characteristic polynomials $\det(I+ xM)$ as the Jacobi polynomial of a rectangular-shaped diagram. This paper is organized as follows: Our main results, which are expressions for the averages of products of characteristic polynomials, will be given in §6. As described above, the symmetric spaces corresponding to the two root systems, type A and BC, will be discussed separately. For type A spaces, we use Jack polynomial theory. These discussions can be generalized to Macdonald polynomials. Thus, after preparations in §2, we give some generalized identities involving Macdonald polynomials and a generalization of the weight function $\Delta^{{\mathrm{Jack}}}$ in §3 and §4. In particular, we obtain $q$-analogues of Keating and Snaith’s formulas [@KS_zetafunctions] for the moments of characteristic polynomials and a generalization of the strong Szegö limit theorem for Toeplitz determinants. These identities are reduced to characteristic polynomial expressions for symmetric spaces of the A type root system in §6.1 - §6.3. On the other hand, for type BC spaces, we employ Heckman and Opdam’s Jacobi polynomials. 
We review the definition and several properties of these polynomials in §5, while in §6.4 - §6.12 we apply them to obtain expressions for the averages of products of characteristic polynomials of random matrix ensembles associated with symmetric spaces of type BC. Basic Properties of Macdonald symmetric functions ================================================= We recall the definition of Macdonald symmetric functions, see [@Mac Chapter VI] for details. Let $\lambda$ be a partition, i.e., $\lambda=(\lambda_1,\lambda_2,\dots)$ is a weakly decreasing ordered sequence of non-negative integers with finitely many non-zero entries. Denote by $\ell(\lambda)$ the number of non-zero $\lambda_j$ and by $|\lambda|$ the sum of all $\lambda_j$. These values $\ell(\lambda)$ and $|\lambda|$ are called the length and weight of $\lambda$ respectively. We identify $\lambda$ with the associated Young diagram $\{(i,j) \in {\mathbb{Z}}^2 \ | \ 1 \le j \le \lambda_i \}$. The conjugate partition $\lambda'=(\lambda'_1,\lambda'_2,\dots)$ is determined by the transpose of the Young diagram $\lambda$. It is sometimes convenient to write this partition in the form $\lambda=(1^{m_1} 2^{m_2} \cdots )$, where $m_i=m_i(\lambda)$ is the multiplicity of $i$ in $\lambda$ and is given by $m_i=\lambda'_i-\lambda'_{i+1}$. For two partitions $\lambda$ and $\mu$, we write $\lambda \subset \mu$ if $\lambda_i \le \mu_i$ for all $i$. In particular, the notation $\lambda \subset (m^n)$ means that $\lambda$ satisfies $\lambda_1 \le m$ and $\lambda_1' \le n$. The dominance ordering associated with the root system of type A is defined as follows: for two partitions $\lambda=(\lambda_1,\lambda_2,\dots)$ and $\mu=(\mu_1,\mu_2,\dots)$, $$\mu \le_{{\mathrm{A}}} \lambda \qquad \Leftrightarrow \qquad |\lambda|=|\mu| \quad \text{and} \quad \mu_1 + \cdots+\mu_i \le \lambda_1+ \cdots +\lambda_i \quad \text{for all $i \ge 1$}.$$ Let $q$ and $t$ be real numbers such that both $|q|<1$ and $|t|<1$. 
Put $F={\mathbb{Q}}(q,t)$ and ${\mathbb{T}}^n=\{\bz =(z_1,\dots,z_n) \ | \ |z_i|=1 \ (1 \le i \le n)\}$. Denote by $F[x_1,\dots,x_n]^{{\mathfrak{S}}_n}$ the algebra of symmetric polynomials in variables $x_1,\dots,x_n$. Define an inner product on $F[x_1,\dots,x_n]^{{\mathfrak{S}}_n}$ by $$\langle f, g \rangle_{\Delta^{{\mathrm{Mac}}}} = \frac{1}{n!} \int_{{\mathbb{T}}^n} f(\bz) g(\bz^{-1}) \Delta^{{\mathrm{Mac}}}(\bz;q,t) \dd \bz$$ with $$\Delta^{{\mathrm{Mac}}}(\bz;q,t)= \prod_{1 \le i<j \le n} \Bigg| \frac{(z_i z_j^{-1};q)_\infty}{(t z_i z_j^{-1};q)_\infty} \Bigg|^2,$$ where $\bz^{-1}=(z_1^{-1},\dots,z_n^{-1})$ and $(a;q)_\infty= \prod_{r=0}^\infty(1-aq^r)$. Here $\dd \bz$ is the normalized Haar measure on ${\mathbb{T}}^n$. For a partition $\lambda$ of length $\ell(\lambda) \le n$, put $$\label{eq:monomialA} m_{\lambda}^{{\mathrm{A}}} (x_1,\dots,x_n) = \sum_{\nu=(\nu_1,\dots,\nu_n) \in {\mathfrak{S}}_n \lambda} x_1^{\nu_1} \cdots x_n^{\nu_n},$$ where the sum runs over the ${\mathfrak{S}}_n$-orbit ${\mathfrak{S}}_n \lambda = \{ (\lambda_{\sigma(1)},\dots, \lambda_{\sigma(n)}) \ | \ \sigma \in {\mathfrak{S}}_n\}$. Here we add the suffix “A” because ${\mathfrak{S}}_n$ is the Weyl group of type A. Then Macdonald polynomials (of type A) $P_\lambda^{{\mathrm{Mac}}}=P_{\lambda}^{{\mathrm{Mac}}}(x_1,\dots,x_n;q,t) \in F[x_1,\dots,x_n]^{{\mathfrak{S}}_n}$ are characterized by the following conditions: $$P_{\lambda}^{{\mathrm{Mac}}} = m_{\lambda}^{{\mathrm{A}}} + \sum_{\mu <_{{\mathrm{A}}} \lambda} u_{\lambda \mu} m_{\mu}^{{\mathrm{A}}} \quad \text{with $u_{\lambda\mu} \in F$}, \qquad\qquad \langle P_{\lambda}^{{\mathrm{Mac}}}, P_{\mu}^{{\mathrm{Mac}}} \rangle_{\Delta^{{\mathrm{Mac}}}}=0 \quad \text{if $\lambda \not=\mu$}.$$ Denote by $\Lambda_F$ the $F$-algebra of symmetric functions in infinitely many variables $\bx=(x_1,x_2,\dots)$. 
That is, an element $f =f(\bx) \in \Lambda_F$ is determined by the sequence $(f_n)_{n \ge 0}$ of polynomials $f_n$ in $F[x_1,\dots,x_n]^{{\mathfrak{S}}_n}$, where these polynomials satisfy $\sup_{n \ge 0} \deg (f_n) < \infty$ and $f_m(x_1,\dots,x_n,0,\dots,0)=f_n(x_1,\dots,x_n)$ for any $m \ge n$, see [@Mac Chapter I-2]. Macdonald polynomials satisfy the stability property $$P^{{\mathrm{Mac}}}_\lambda(x_1,\dots,x_n,x_{n+1};q,t) \Big|_{x_{n+1}=0} = P^{{\mathrm{Mac}}}_\lambda(x_1,\dots,x_n;q,t)$$ for any partition $\lambda$ of length $\ell(\lambda) \le n$, and therefore for all partitions $\lambda$, [*Macdonald functions*]{} $P_{\lambda}^{{\mathrm{Mac}}}(\bx ;q,t)$ can be defined. For each square $s=(i,j)$ of the diagram $\lambda$, let $$a(s)=\lambda_i-j, \qquad a'(s)=j-1, \qquad l(s)= \lambda'_j-i, \qquad l'(s)= i-1.$$ These numbers are called the arm-length, arm-colength, leg-length, and leg-colength respectively. Put $$c_{\lambda}(q,t)= \prod_{s \in \lambda} (1-q^{a(s)}t^{l(s)+1}), \qquad c_{\lambda}'(q,t)= \prod_{s \in \lambda} (1-q^{a(s)+1} t^{l(s)}).$$ Note that $c_{\lambda}(q,t)= c'_{\lambda'}(t,q)$. Defining the $Q$-function by $Q_{\lambda}(\bx;q,t)= c_{\lambda}(q,t) c'_{\lambda}(q,t)^{-1} P_\lambda(\bx;q,t)$, we have the dual Cauchy identity [@Mac Chapter VI (5.4)] $$\begin{aligned} & \sum_{\lambda} P_\lambda(\bx;q,t) P_{\lambda'}(\by;t,q)= \sum_{\lambda} Q_\lambda(\bx;q,t) Q_{\lambda'}(\by;t,q) \label{EqDualCauchy} \\ =& \prod_{i\ge 1} \prod_{j\ge 1} (1+x_i y_j) =\exp \(\sum_{k=1}^\infty \frac{(-1)^{k-1}}{k}p_k(\bx)p_k(\by) \), \notag \end{aligned}$$ where $\by=(y_1,y_2,\dots)$. Here $p_k$ is the power-sum function $p_k(\bx)=x_1^k+x_2^k+ \cdots$. 
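As a concrete illustration of the dual Cauchy identity above, consider the Schur case $q=t$ (where $P^{{\mathrm{Mac}}}_\lambda$ reduces to the Schur polynomial $s_\lambda$) with two $x$- and two $y$-variables, so that only the six partitions $\lambda \subset (2,2)$ contribute. The following small numerical sanity check is ours, not part of the paper's argument:

```python
# Dual Cauchy identity in the Schur case q = t, with two x- and two y-variables:
#   sum_{lambda subset (2,2)} s_lambda(x) s_{lambda'}(y) = prod_{i,j} (1 + x_i y_j).
def s(lam, v):
    # Explicit Schur polynomials in two variables for partitions in the 2x2 box.
    x1, x2 = v
    table = {
        (): 1.0,
        (1,): x1 + x2,
        (2,): x1**2 + x1*x2 + x2**2,
        (1, 1): x1*x2,
        (2, 1): x1*x2*(x1 + x2),
        (2, 2): (x1*x2)**2,
    }
    return table[lam]

# Conjugation (transpose of the Young diagram) restricted to the 2x2 box.
conj = {(): (), (1,): (1,), (2,): (1, 1), (1, 1): (2,), (2, 1): (2, 1), (2, 2): (2, 2)}

x = (0.3, 0.5)
y = (0.2, 0.7)
lhs = sum(s(lam, x) * s(conj[lam], y) for lam in conj)
rhs = 1.0
for xi in x:
    for yj in y:
        rhs *= 1 + xi * yj
```

Both sides agree to machine precision, as the identity predicts.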
We define the generalized factorial $(a)_{\lambda}^{(q,t)}$ by $$(a)_{\lambda}^{(q,t)} = \prod_{s \in \lambda} (t^{l'(s)} - q^{a'(s)} a).$$ Let $u$ be an indeterminate and define the homomorphism $\epsilon_{u,t}$ from $\Lambda_F$ to $F$ by $$\label{EqSpecialPowerSum} \epsilon_{u,t}(p_r) = \frac{1-u^r}{1-t^r} \qquad \text{for all $r \ge 1$}.$$ In particular, we have $\epsilon_{t^n,t}(f)= f(1,t,t^2,\dots, t^{n-1})$ for any $f \in \Lambda_F$. Then we have ([@Mac Chapter VI (6.17)]) $$\label{EqSpecialMac} \epsilon_{u,t}(P_{\lambda}^{{\mathrm{Mac}}})= \frac{(u)_{\lambda}^{(q,t)}}{c_{\lambda}(q,t)}.$$ Finally, the following orthogonality property is satisfied for any two partitions $\lambda$ and $\mu$ of length $\le n$: $$\label{EqOrthogonality} \langle P_{\lambda}^{{\mathrm{Mac}}}, Q_{\mu}^{{\mathrm{Mac}}} \rangle_{\Delta^{{\mathrm{Mac}}}}= \delta_{\lambda \mu} \langle 1,1 \rangle_{\Delta^{{\mathrm{Mac}}}} \prod_{s \in \lambda} \frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}}.$$ Averages with respect to $\Delta^{{\mathrm{Mac}}}(\bz;q,t)$ {#sectionMacAverage} =========================================================== As in the previous section, we assume $q$ and $t$ are real numbers in the interval $(-1,1)$. For a Laurent polynomial $f$ in variables $z_1,\dots,z_n$, we define $$\langle f \rangle_{n}^{(q,t)} = \frac{\int_{{\mathbb{T}}^n} f(\bz) \Delta^{{\mathrm{Mac}}}(\bz;q,t) \dd \bz} {\int_{{\mathbb{T}}^n} \Delta^{{\mathrm{Mac}}}(\bz;q,t) \dd \bz}.$$ In this section, we calculate averages of the products of the polynomial $$\Psi^{{\mathrm{A}}}(\bz;\eta)= \prod_{j=1}^n (1+ \eta z_j), \qquad \eta \in {\mathbb{C}}$$ with respect to $\langle \cdot \rangle_{n}^{(q,t)}$. Denoting the eigenvalues of a unitary matrix $M$ by $z_1,\dots,z_n$, the polynomial $\Psi^{{\mathrm{A}}}(\bz;\eta)$ is the characteristic polynomial $\det(I+\eta M)$. 
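Concretely, $\det(I+\eta M) = \prod_j (1+\eta z_j)$ for any matrix $M$ with eigenvalues $z_j$. A minimal check of this for an explicitly parametrized $2\times 2$ unitary (an illustrative sketch, not from the paper):

```python
import cmath
import math

# A 2x2 unitary with det U = 1:
#   U = [[cos(phi) e^{i a},  sin(phi) e^{i b}],
#        [-sin(phi) e^{-i b}, cos(phi) e^{-i a}]]
phi, a, b = 0.7, 0.4, 1.1
u00 = math.cos(phi) * cmath.exp(1j * a)
u01 = math.sin(phi) * cmath.exp(1j * b)
u10 = -math.sin(phi) * cmath.exp(-1j * b)
u11 = math.cos(phi) * cmath.exp(-1j * a)

eta = 0.6 - 0.3j
# det(I + eta U) computed directly from the 2x2 entries.
det_direct = (1 + eta * u00) * (1 + eta * u11) - (eta * u01) * (eta * u10)

# Eigenvalues of U from z^2 - tr(U) z + det(U) = 0; they lie on the unit circle.
tr = u00 + u11
det_u = u00 * u11 - u01 * u10
disc = cmath.sqrt(tr * tr - 4 * det_u)
z1, z2 = (tr + disc) / 2, (tr - disc) / 2
det_from_eigs = (1 + eta * z1) * (1 + eta * z2)
```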
The following theorems will induce averages of the products of characteristic polynomials for random matrix ensembles associated with type A root systems, see §\[sectionCBEq\] and §\[subsectionA\] - §\[subsectionAII\] below. \[ThmAverageMac\] Let $K$ and $L$ be positive integers. Let $\eta_1,\dots, \eta_{L+K}$ be complex numbers such that $\eta_j \not=0 \ ( 1\le j \le L)$. Then we have $$\left\langle \prod_{l=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_l^{-1}) \cdot \prod_{k=1}^K \Psi^{{\mathrm{A}}}(\bz;\eta_{L+k}) \right\rangle_{n}^{(q,t)} = (\eta_1 \cdots \eta_L)^{-n} \cdot P_{(n^L)}^{{\mathrm{Mac}}} (\eta_1, \dots, \eta_{L+K};t,q).$$ By the dual Cauchy identity , we have $$\begin{aligned} & \prod_{l=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_l^{-1}) \cdot \prod_{k=1}^K \Psi^{{\mathrm{A}}}(\bz;\eta_{L+k}) = \prod_{l=1}^L \eta_l^{-n} \cdot (z_1 \cdots z_n)^{-L} \cdot \prod_{k=1}^{L+K} \prod_{j=1}^n (1+\eta_k z_j) \\ =& \prod_{l=1}^L \eta_l^{-n} \cdot (z_1 \cdots z_n)^{-L} \sum_{\lambda} Q_{\lambda}^{{\mathrm{Mac}}}(\eta_1,\dots,\eta_{L+K};t,q) Q_{\lambda'}^{{\mathrm{Mac}}}(\bz;q,t).\end{aligned}$$ Therefore, since $P_{(L^n)}^{{\mathrm{Mac}}}(\bz;q,t)=(z_1\cdots z_n)^L$ ([@Mac Chapter VI (4.17)]), we see that $$\begin{aligned} \left\langle \prod_{l=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_l^{-1}) \cdot \prod_{k=1}^K \Psi^{{\mathrm{A}}}(\bz;\eta_{L+k}) \right\rangle_{n}^{(q,t)} =& \prod_{l=1}^L \eta_l^{-n} \sum_{\lambda} Q_{\lambda}^{{\mathrm{Mac}}}(\eta_1,\dots,\eta_{L+K};t,q) \frac{\langle Q_{\lambda'}^{{\mathrm{Mac}}}, P_{(L^n)}^{{\mathrm{Mac}}} \rangle_{ \Delta^{{\mathrm{Mac}}} } } {\langle 1, 1 \rangle_{\Delta^{{\mathrm{Mac}}} } } \\ =& \prod_{l=1}^L \eta_l^{-n} \cdot Q_{(n^L)}^{{\mathrm{Mac}}}(\eta_1,\dots,\eta_{L+K};t,q) \prod_{s \in (L^n)} \frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}}\end{aligned}$$ by the orthogonality property . 
It is easy to check that $$\prod_{s \in (L^n)} \frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}} =\frac{c_{(L^n)}(q,t)}{c'_{(L^n)}(q,t)}= \frac{c'_{(n^L)}(t,q)}{c_{(n^L)}(t,q)},$$ and so we obtain the claim. It may be noted that the present proof of Theorem \[ThmAverageMac\] is similar to the corresponding one in [@BG]. \[CorMomentValue\] For each positive integer $k$ and $\xi \in {\mathbb{T}}$, we have $$\left\langle \prod_{i=0}^{k-1} | \Psi^{{\mathrm{A}}}(\bz; q^{i+1/2} \xi )|^2 \right\rangle_{n}^{(q,t)} = \prod_{i=0}^{k-1} \prod_{j=0}^{n-1} \frac{1-q^{k+i+1} t^j}{1-q^{i+1} t^j}.$$ Set $L=K=k$ and $\overline{\eta_i}^{-1} =\eta_{i+k} =q^{i-1/2} \xi \ (1 \le i \le k)$ in Theorem \[ThmAverageMac\]. Then we have $$\begin{aligned} \left\langle \prod_{i=0}^{k-1} | \Psi^{{\mathrm{A}}}(\bz;q^{i+1/2} \xi)|^2 \right\rangle_{n}^{(q,t)} =& \prod_{i=0}^{k-1} q^{(i+1/2)n} \cdot P_{(n^k)} (q^{-k+1/2}, q^{-k+3/2}, \dots, q^{-1/2}, q^{1/2}, \cdots, q^{k-1/2};t,q) \\ =& q^{n k^2/2} \cdot q^{(-k+1/2)kn} P_{(n^k)}(1,q,\cdots, q^{2k-1};t,q) \\ =& q^{-n k(k-1)/2} \epsilon_{q^{2k},q} (P_{(n^k)}(\cdot;t,q)). \end{aligned}$$ From expression , the right-hand side of the above expression equals $$q^{-n k(k-1)/2} \frac{(q^{2k})_{(n^k)}^{(t,q)}}{c_{(n^k)}(t,q)} =q^{-n k(k-1)/2} \prod_{i=1}^k \prod_{j=1}^n \frac{q^{i-1}-t^{j-1} q^{2k}}{1-t^{n-j}q^{k-i+1}} = \prod_{j=0}^{n-1} \prod_{i=1}^k \frac{1-t^{j} q^{2k-i+1}}{1-t^j q^{k-i+1}},$$ and the result follows. Kaneko [@Kaneko2] defines the multivariable $q$-hypergeometric function associated with Macdonald polynomials by $${_2 \Phi_1}^{(q,t)}(a,b;c;x_1,\dots,x_n)= \sum_\lambda \frac{ (a)_{\lambda}^{(q,t)} (b)_{\lambda}^{(q,t)}}{(c)_{\lambda}^{(q,t)}} \frac{P^{{\mathrm{Mac}}}_{\lambda}(x_1,\dots,x_n;q,t)}{c'_{\lambda}(q,t)},$$ where $\lambda$ runs over all partitions of length $\ell(\lambda) \le n$. 
The $q$-shifted moment $\left\langle \prod_{i=0}^{k-1} | \Psi^{{\mathrm{A}}}(\bz;q^{i+1/2} \xi )|^2 \right\rangle_{n}^{(q,t)}$ given in Corollary \[CorMomentValue\] can also be expressed as a special value of the generalized $q$-hypergeometric function ${_2 \Phi_1}^{(q,t)}$ as follows: \[PropMomentHypergeometric\] For any complex number $\eta$ with $|\eta|<1$ and any real number $u$, $$\left\langle \prod_{j=1}^n \left| \frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} \right|^2 \right\rangle_{n}^{(q,t)} = {_2 \Phi_1}^{(q,t)}(u^{-1},u^{-1} ;q t^{n-1}; (u|\eta|)^2, (u|\eta|)^2 t,\dots, (u|\eta|)^2t^{n-1}).$$ In particular, letting $u=q^k$ and $\eta=q^{1/2}\xi$ with $\xi \in {\mathbb{T}}$, we have $$\left\langle \prod_{i=0}^{k-1} | \Psi^{{\mathrm{A}}}(\bz;q^{i+1/2} \xi)|^2 \right\rangle_{n}^{(q,t)} = {_2 \Phi_1}^{(q,t)}(q^{-k},q^{-k} ;q t^{n-1}; q^{2k+1}, q^{2k+1}t , \dots, q^{2k+1} t^{n-1}).$$ A simple calculation gives $$\prod_{j=1}^n \frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} = \exp \(\sum_{k=1}^\infty \frac{(-1)^{k-1}}{k} \frac{1-u^k}{1-q^k} p_{k}(-\eta z_1, \dots, -\eta z_n) \).$$ From expressions and , we have $$\prod_{j=1}^n \frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} = \sum_{\lambda} (-\eta)^{|\lambda|} \epsilon_{u,q}(Q^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q)) Q^{{\mathrm{Mac}}}_{\lambda}(\bz;q,t) = \sum_{\lambda} (-\eta)^{|\lambda|} \epsilon_{u,q}(P^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q)) P^{{\mathrm{Mac}}}_{\lambda}(\bz;q,t).$$ Thus we have $$\prod_{j=1}^n \left| \frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} \right|^2 = \sum_{\lambda, \mu} (-\eta)^{|\lambda|} (-\overline{\eta})^{|\mu|} \epsilon_{u,q}(P^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q)) \epsilon_{u,q}(Q^{{\mathrm{Mac}}}_{\mu'}(\cdot;t,q)) P^{{\mathrm{Mac}}}_{\lambda}(\bz;q,t) Q^{{\mathrm{Mac}}}_{\mu}(\bz^{-1};q,t).$$ The average is given by $$\begin{aligned} \left\langle \prod_{j=1}^n \left| \frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} \right|^2 \right\rangle_{n}^{(q,t)} =& 
\sum_{\lambda} |\eta|^{2|\lambda|} \epsilon_{u,q}(P^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q)) \epsilon_{u,q}(Q^{{\mathrm{Mac}}}_{\lambda'}(\cdot;t,q)) \frac{ \langle P^{{\mathrm{Mac}}}_{\lambda}, Q^{{\mathrm{Mac}}}_{\lambda} \rangle_{\Delta^{{\mathrm{Mac}}}} } {\langle 1, 1 \rangle_{\Delta^{{\mathrm{Mac}}}} } \\ =& \sum_{\lambda} |\eta|^{2|\lambda|} \frac{\{(u)_{\lambda'}^{(t,q)}\}^2}{c_{\lambda'}(t,q) c'_{\lambda'}(t,q)} \prod_{s \in \lambda} \frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}}\end{aligned}$$ by expression and the orthogonality property . It is easy to check that $$\begin{aligned} &(u)_{\lambda'}^{(t,q)} =(-u)^{|\lambda|} (u^{-1})_{\lambda}^{(q,t)}, \qquad c_{\lambda'}(t,q) c'_{\lambda'}(t,q)= c_{\lambda}(q,t) c'_{\lambda}(q,t), \\ &\prod_{s \in \lambda} \frac{1-q^{a'(s)}t^{n-l'(s)}}{1-q^{a'(s)+1}t^{n-l'(s)-1}} = \prod_{s \in \lambda} \frac{t^{l'(s)}-q^{a'(s)}t^{n}}{t^{l'(s)} -q^{a'(s)+1}t^{n-1}} = \frac{(t^n)_{\lambda}^{(q,t)}}{(q t^{n-1})_{\lambda}^{(q,t)}}.\end{aligned}$$ Finally, we obtain $$\left\langle \prod_{j=1}^n \left| \frac{(\eta z_j;q)_\infty}{(\eta z_j u; q)_{\infty}} \right|^2 \right\rangle_{n}^{(q,t)} = \sum_{\lambda} (u|\eta|)^{2|\lambda|} \frac{\{(u^{-1})_{\lambda}^{(q,t)}\}^2}{(q t^{n-1})_{\lambda}^{(q,t)}} \frac{P^{{\mathrm{Mac}}}_{\lambda}(1,t,\dots,t^{n-1};q,t)}{c'_{\lambda}(q,t)},$$ which equals ${_2 \Phi_1}^{(q,t)}(u^{-1},u^{-1} ;q t^{n-1}; (u|\eta|)^2,\dots, (u|\eta|)^2t^{n-1})$. Now we derive the asymptotic behavior of the moment of $|\Psi(\bz;\eta)|$ when $|\eta| < 1$ in the limit as $n \to \infty$. The following theorem is a generalization of the well-known strong Szegö limit theorem as stated in §\[subsectionCBEJack\] below. 
\[Thm:SzegoMacdonald\] Let $\phi(z)=\exp(\sum_{k \in {\mathbb{Z}}} c(k) z^k)$ be a function on ${\mathbb{T}}$ and assume $$\label{Eq:AssumptionSzego} \sum_{k \in {\mathbb{Z}}} |c(k)|< \infty \qquad \text{and} \qquad \sum_{k \in {\mathbb{Z}}} |k| |c(k)|^2 < \infty.$$ Then we have $$\lim_{n \to \infty} e^{-n c(0)} \left\langle \prod_{j=1}^n \phi(z_j) \right\rangle_n^{(q,t)} = \exp \( \sum_{k=1}^\infty kc(k)c(-k) \frac{1-q^k}{1-t^k} \).$$ First we see that $$\begin{aligned} & \prod_{j=1}^n \phi(z_j)= e^{n c(0)} \prod_{k=1}^\infty \exp(c(k) p_k(\bz)) \exp(c(-k) \overline{p_k(\bz)}) \\ =& e^{n c(0)} \prod_{k=1}^\infty \( \sum_{a=0}^\infty \frac{c(k)^{a}}{a!} p_{(k^{a})}(\bz) \) \( \sum_{b=0}^\infty \frac{c(-k)^{b}}{b!} \overline{p_{(k^{b})}(\bz)} \) \\ =& e^{n c(0)} \sum_{(1^{a_1} 2^{a_2} \cdots )} \sum_{(1^{b_1} 2^{b_2} \cdots )} \( \prod_{k=1}^\infty \frac{c(k)^{a_k}c(-k)^{b_k}}{a_k! \, b_k!} \) p_{(1^{a_1}2^{a_2} \cdots )}(\bz)\overline{p_{(1^{b_1}2^{b_2} \cdots )}(\bz)},\end{aligned}$$ where both $(1^{a_1}2^{a_2} \cdots )$ and $(1^{b_1}2^{b_2} \cdots )$ run over all partitions. Therefore we have $$e^{-n c(0)} \left\langle \prod_{j=1}^n \phi(z_j) \right\rangle_n^{(q,t)} = \sum_{(1^{a_1} 2^{a_2} \cdots )} \sum_{(1^{b_1} 2^{b_2} \cdots )} \( \prod_{k=1}^\infty \frac{c(k)^{a_k}}{a_k!} \frac{c(-k)^{b_k}}{b_k!} \) \frac{ \langle p_{(1^{a_1} 2^{a_2} \cdots )}, p_{(1^{b_1} 2^{b_2} \cdots )} \rangle_{\Delta^{{\mathrm{Mac}}}} } { \langle 1, 1 \rangle_{\Delta^{{\mathrm{Mac}}}} }.$$ We recall the asymptotic behavior $$\frac{ \langle p_{(1^{a_1} 2^{a_2} \cdots )}, p_{(1^{b_1} 2^{b_2} \cdots )} \rangle_{\Delta^{{\mathrm{Mac}}}} } { \langle 1, 1 \rangle_{\Delta^{{\mathrm{Mac}}}} } \qquad \longrightarrow \qquad \prod_{k=1}^\infty \delta_{a_k b_k} k^{a_k} a_k! \( \frac{1-q^k}{1-t^k}\)^{a_k}$$ in the limit as $n \to \infty$, see [@Mac Chapter VI (9.9) and (1.5)]. 
It follows from this that $$\begin{aligned} &\lim_{n \to \infty} e^{-n c(0)} \left\langle \prod_{j=1}^n \phi(z_j) \right\rangle_n^{(q,t)} = \sum_{(1^{a_1} 2^{a_2} \cdots )} \prod_{k=1}^\infty \frac{(k c(k) c(-k))^{a_k}}{a_k!} \(\frac{1-q^k}{1-t^k}\)^{a_k} \\ =& \prod_{k=1}^\infty \( \sum_{a=0}^\infty \frac{(k c(k) c(-k)\frac{1-q^k}{1-t^k})^{a}}{a!}\) = \exp \( \sum_{k=1}^\infty k c(k) c(-k) \frac{1-q^k}{1-t^k}\).\end{aligned}$$ Here $\sum_{k=1}^\infty k c(k) c(-k) \frac{1-q^k}{1-t^k}$ converges absolutely by the second assumption in and the Cauchy-Schwarz inequality, because $| \frac{1-q^k}{1-t^k}| \le \frac{1+|q|^k}{1-|t|^{k}} \le \frac{1+|q|}{1-|t|}$. Note that the present proof is similar to the corresponding one in [@BD]. The result in [@BD] is the special case of Theorem \[Thm:SzegoMacdonald\] with $q=t$. As an example of this theorem, the asymptotic behavior of the moment of $|\Psi^{{\mathrm{A}}}(\bz;\eta)|$ is given as follows. A further asymptotic result is given by Corollary \[AsymMomentQ\] below. \[ExampleMomentLimit\] Let $\gamma \in {\mathbb{R}}$ and let $\eta$ be a complex number such that $|\eta| < 1$. Then we have $$\lim_{n \to \infty} \left\langle |\Psi^{{\mathrm{A}}}(\bz;\eta)|^{2\gamma} \right\rangle_n^{(q,t)} = \( \frac{(q |\eta|^2;t)_{\infty}}{(|\eta|^2;t)_{\infty}} \)^{\gamma^2}.$$ This result is obtained by applying Theorem \[Thm:SzegoMacdonald\] to $\phi(z)= |1+\eta z|^{2\gamma}$. Then the Fourier coefficients of $\log \phi$ are $c(k)=(-1)^{k-1} \eta^k \gamma/k$ and $c(-k)=(-1)^{k-1} \overline{\eta}^k \gamma/k$ for $k >0$, and $c(0)=0$. Circular ensembles and their $q$-analogues {#sectionCBEq} ======================================= Special case: $t=q^{\beta/2}$ ----------------------------- In this subsection, we examine the results of the last section for the special case $t=q^{\beta/2}$ with $\beta>0$, i.e., we consider the weight function $\Delta^{{\mathrm{Mac}}}(\bz;q,q^{\beta/2})$. Denote by $\langle \cdot \rangle_{n,\beta}^q$ the corresponding average. 
Define the $q$-gamma function (see e.g. [@AAR (10.3.3)]) by $$\Gamma_q(x)= (1-q)^{1-x} \frac{(q;q)_\infty}{(q^x;q)_{\infty}}.$$ \[ThmMomentGamma\] Let $\beta$ be a positive real number. For a positive integer $k$ and $\xi \in {\mathbb{T}}$, we have $$\begin{aligned} \left\langle \prod_{i=1}^k | \Psi(\bz;q^{i-1/2}\xi )|^2 \right\rangle_{n,\beta}^q =& \prod_{i=0}^{k-1} \frac{\Gamma_t(\frac{2}{\beta}(i+1)) \Gamma_t(n+\frac{2}{\beta}(k+i+1))} {\Gamma_t(\frac{2}{\beta}(k+i+1)) \Gamma_t(n+\frac{2}{\beta}(i+1))} \qquad \text{(with $t=q^{\beta/2}$)} \label{eqCBEmomentQ} \\ =& \prod_{j=0}^{n-1} \frac{\Gamma_q (\frac{\beta}{2}j +2k+1) \Gamma_q(\frac{\beta}{2} j+1)} {\Gamma_q(\frac{\beta}{2}j+k+1)^2}. \notag\end{aligned}$$ The claim follows immediately from Corollary \[CorMomentValue\] and the functional equation $\Gamma_q(1+x) = \frac{1-q^x}{1-q} \Gamma_q(x)$. Consider now the asymptotic behavior of this average in the limit as $n \to \infty$. Put $[n]_q = (1-q^n)/(1-q)$. \[AsymMomentQ\] For a positive integer $k$ and $\xi \in {\mathbb{T}}$, it holds that $$\label{eqCBEmomentLimit} \lim_{n \to \infty} ([n]_t)^{-2k^2/\beta} \left\langle \prod_{i=1}^k | \Psi(\bz;q^{i-1/2}\xi )|^2 \right\rangle_{n,\beta}^q = \prod_{i=0}^{k-1} \frac{\Gamma_t(\frac{2}{\beta}(i+1))} {\Gamma_t(\frac{2}{\beta}(k+i+1))} \qquad \text{with $t=q^{\beta/2}$}.$$ Verify that $$\label{eq:GammaQasym} \lim_{n \to \infty} \frac{\Gamma_t(n+a)}{\Gamma_t(n) ([n]_t)^a} =1$$ for any constant $a$. Then the claim is clear from expression . \[ExFq\] Denote by ${\mathcal{F}}_\beta^q(k)$ the right-hand side of equation . Then we obtain $$\begin{aligned} {\mathcal{F}}_{1}^q(k) =& \prod_{j=0}^{k-1} \frac{[2j+1]_{q^{\frac{1}{2}}} !}{[2k+2j+1]_{q^{\frac{1}{2}}}!}, \label{eqf1q} \\ {\mathcal{F}}_{2}^q(k) =& \prod_{j=0}^{k-1} \frac{[j]_{q} !}{[j+k]_q!}, \label{eqf2q} \\ {\mathcal{F}}_{4}^q(2k) =& \frac{([2]_q)^{2k^2}}{[2k-1]_q !!} \prod_{j=1}^{2k-1} \frac{[j]_q!}{[2j]_q!}. 
\label{eqf4q}\end{aligned}$$ Here $[n]_q!=[n]_q [n-1]_q \cdots [1]_q$ and $[2k-1]_q!! = [2k-1]_q [2k-3]_q \cdots [3]_q [1]_q$. Equalities and are trivial because $\Gamma_q(n+1)=[n]_q!$. We check relation . By definition, we have $${\mathcal{F}}_4^q(2k)= \prod_{i=0}^{2k-1} \frac{\Gamma_{q^2}(\frac{1}{2}(i+1))} {\Gamma_{q^2}(k+\frac{1}{2}(i+1))} = \prod_{p=0}^{k-1} \frac{\Gamma_{q^2} (p+\frac{1}{2}) \Gamma_{q^2}(p+1)} {\Gamma_{q^2} (k+p+\frac{1}{2}) \Gamma_{q^2}(k+p+1)}.$$ Using the $q$-analogue of the Legendre duplication formula (see e.g. [@AAR Theorem 10.3.5(a)]) $$\Gamma_q(2x) \Gamma_{q^2}(1/2) = (1+q)^{2x-1} \Gamma_{q^2}(x) \Gamma_{q^2}(x+1/2),$$ we have $${\mathcal{F}}_4^q(2k)= \prod_{p=0}^{k-1} \frac{(1+q)^{2k} \Gamma_q(2p+1)}{\Gamma_q(2k+2p+1)}= ([2]_q)^{2k^2} \prod_{p=0}^{k-1} \frac{[2p]_q! }{[2k+2p]_q !}.$$ Expression can then be proven by induction on $n$. Circular $\beta$-ensembles and Jack polynomials {#subsectionCBEJack} ----------------------------------------------- We take the limit as $q \to 1$ of the results of the previous subsection. Recall the formula $$\lim_{q \to 1} \frac{(q^a x;q)_{\infty}}{(x;q)_{\infty}} =(1-x)^{-a}$$ for $|x|<1$ and $a \in {\mathbb{R}}$, see [@AAR Theorem 10.2.4] for example. Then we have $$\lim_{q \to 1} \Delta^{{\mathrm{Mac}}}(\bz;q,q^{\beta/2}) = \prod_{1 \le i<j \le n} |z_i-z_j|^\beta =: \Delta^{{\mathrm{Jack}}}(\bz;2/\beta),$$ which is a constant times the p.d.f. for Dyson’s circular $\beta$-ensembles (see §6). Denote by $\langle \cdot \rangle_{n,\beta}$ the corresponding average, i.e., for a function $f$ on ${\mathbb{T}}^n$ define $$\langle f \rangle_{n,\beta} = \lim_{q \to 1} \langle f \rangle_{n,\beta}^q = \frac{\int_{{\mathbb{T}}^n} f(\bz) \prod_{1 \le i<j \le n} |z_i-z_j|^\beta \dd \bz} {\int_{{\mathbb{T}}^n} \prod_{1 \le i<j \le n} |z_i-z_j|^\beta \dd \bz}.$$ Let $\alpha >0$. 
The Jack polynomial $P^{{\mathrm{Jack}}}_\lambda(x_1,\dots,x_n;\alpha)$ for each partition $\lambda$ is defined by the limit approached by the corresponding Macdonald polynomial, $$P^{{\mathrm{Jack}}}_\lambda(x_1,\dots,x_n;\alpha) = \lim_{q \to 1} P^{{\mathrm{Mac}}}_\lambda(x_1,\dots,x_n;q,q^{1/\alpha}),$$ see [@Mac Chapter VI-10] for details. Jack polynomials are orthogonal polynomials with respect to the weight function $\Delta^{{\mathrm{Jack}}}(\bz;\alpha)$. In particular, $s_{\lambda}(x_1,\dots,x_n)=P^{{\mathrm{Jack}}}_\lambda(x_1,\dots,x_n;1)$ are called Schur polynomials, and are irreducible characters of $U(n)$ associated with $\lambda$. From the theorems in the last section, we have the following: from Theorem \[ThmAverageMac\], we see that $$\label{AverageProductA} \left\langle \prod_{l=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_l^{-1}) \cdot \prod_{k=1}^K \Psi^{{\mathrm{A}}}(\bz;\eta_{L+k}) \right\rangle_{n,\beta} = (\eta_1 \cdots \eta_L)^{-n} \cdot P_{(n^L)}^{{\mathrm{Jack}}} (\eta_1, \dots, \eta_{L+K};\beta/2).$$ For a positive real number $\gamma$ and complex number $\eta$ with $|\eta|<1$, we have from Proposition \[PropMomentHypergeometric\] that $$\label{MomentHypergeometricJack} \left\langle |\Psi^{{\mathrm{A}}}(\bz;\eta)|^{2\gamma} \right\rangle_{n, \beta} = {_2 F_1}^{(2/\beta)}(-\gamma, -\gamma; \frac{\beta}{2}(n-1)+1; |\eta|^2, \dots, |\eta|^2),$$ where ${_2 F_1}^{(\alpha)}(a,b; c; x_1,\dots,x_n)$ is the hypergeometric function associated with Jack polynomials [@Kaneko1] defined by $${_2 F_1}^{(\alpha)}(a,b; c; x_1,\dots,x_n)= \sum_{\lambda} \frac{[a]^{(\alpha)}_\lambda [b]^{(\alpha)}_\lambda}{[c]^{(\alpha)}_\lambda} \frac{\alpha^{|\lambda|} P_\lambda^{{\mathrm{Jack}}}(x_1,\dots,x_n;\alpha)}{c'_\lambda(\alpha)}$$ with $$[u]_\lambda^{(\alpha)}=\prod_{s \in \lambda} (u-l'(s)/\alpha +a'(s)), \qquad \text{and} \qquad c'_\lambda(\alpha)= \prod_{s \in \lambda}(\alpha(a(s)+1)+l(s)).$$ For a positive integer $k$ and $\xi \in {\mathbb{T}}$, by 
Theorem \[ThmMomentGamma\] and Corollary \[AsymMomentQ\] it holds that $$\label{MomentAsymptoticA} \left\langle | \Psi^{{\mathrm{A}}}(\bz;\xi )|^{2k} \right\rangle_{n,\beta} = \prod_{i=0}^{k-1} \frac{\Gamma(\frac{2}{\beta}(i+1)) \Gamma(n+\frac{2}{\beta}(k+i+1))} {\Gamma(\frac{2}{\beta}(k+i+1)) \Gamma(n+\frac{2}{\beta}(i+1))} \sim \prod_{i=0}^{k-1} \frac{\Gamma(\frac{2}{\beta}(i+1)) } {\Gamma(\frac{2}{\beta}(k+i+1))} \cdot n^{2k^2/\beta}$$ in the limit as $n \to \infty$. For a function $\phi(z)=\exp(\sum_{k \in {\mathbb{Z}}} c(k) z^k)$ on ${\mathbb{T}}$ satisfying inequalities , by Theorem \[Thm:SzegoMacdonald\] it holds that $$\label{eq:SzegoJack} \lim_{n \to \infty} e^{-n c(0)} \left\langle \prod_{j=1}^n \phi(z_j) \right\rangle_{n, \beta} = \exp \( \frac{2}{\beta}\sum_{k=1}^\infty kc(k)c(-k) \).$$ In particular, for $\gamma \in {\mathbb{R}}$ and a complex number $\eta$ such that $|\eta|< 1$, we have $$\lim_{n \to \infty} \left\langle |\Psi^{{\mathrm{A}}}(\bz;\eta)|^{2\gamma} \right\rangle_{n,\beta} = (1-|\eta|^2)^{-2 \gamma^2/\beta}.$$ Several observations may be made concerning the above identities: equation is obtained by verifying the limits $$\lim_{t \to 1} \frac{(q^a)_\lambda^{(q,t)}}{(1-t)^{|\lambda|}} =\alpha^{|\lambda|} [a]_\lambda^{(\alpha)}, \qquad \lim_{t \to 1} \frac{c'_\lambda(q,t)}{(1-t)^{|\lambda|}} =c'_\lambda(\alpha),$$ with $q=t^\alpha$. The expression for the moment is obtained in [@FK] using a different proof, which employs a Selberg type integral evaluation. Equation is also obtained in [@KS_zetafunctions] essentially by the Selberg integral evaluation. When $\beta=2$, equation presents the strong Szegö limit theorem for a Toeplitz determinant. Indeed, the average of the left-hand side of is then equal to the Toeplitz determinant $\det(d_{i-j})_{1 \le i,j \le n}$ of $\phi$, where $d_i$ are Fourier coefficients of $\phi$. 
Equation with general $\beta>0$ is seen in [@Johansson1; @Johansson2], but it may be noted that the present proof, employing symmetric function theory, is straightforward. This expression is applied in [@Hyper] in order to observe an asymptotic behavior for Toeplitz ‘hyperdeterminants’. Jacobi polynomials due to Heckman and Opdam =========================================== The results obtained in §\[sectionMacAverage\] and §\[sectionCBEq\] will be applied to characteristic polynomials of random matrix ensembles associated with symmetric spaces of the type A root system in the next section. In order to evaluate the corresponding polynomials of the BC type root system, we here recall Heckman and Opdam’s Jacobi polynomials and give some identities corresponding to and . The dominance ordering associated with the root system of type BC is defined as follows: for two partitions $\lambda=(\lambda_1,\lambda_2,\dots)$ and $\mu=(\mu_1,\mu_2,\dots)$, $$\mu \le \lambda \qquad \Leftrightarrow \qquad \mu_1 + \cdots+\mu_i \le \lambda_1+ \cdots +\lambda_i \quad \text{for all $i \ge 1$}.$$ Let ${\mathbb{C}}[\bx^{\pm 1}] = {\mathbb{C}}[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ be the ring of all Laurent polynomials in $n$ variables $\bx=(x_1,\dots,x_n)$. The Weyl group $W={\mathbb{Z}}_2 \wr {\mathfrak{S}}_n = {\mathbb{Z}}_2^n \rtimes {\mathfrak{S}}_n$ of type $BC_n$ acts naturally on ${\mathbb{Z}}^n$ and on ${\mathbb{C}}[\bx^{\pm 1}]$. Denote by ${\mathbb{C}}[\bx^{\pm 1}]^W$ the subring of all $W$-invariants in ${\mathbb{C}}[\bx^{\pm 1}]$. 
Let $\Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)$ be a function on ${\mathbb{T}}^n$ defined by $$\Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3) = \prod_{1 \le i < j \le n} |1-z_i z_j^{-1}|^{2 k_3} |1-z_i z_j|^{2 k_3} \cdot \prod_{1 \le j \le n} |1-z_j|^{2k_1} |1-z_j^2|^{2k_2}.$$ Here we assume $k_1$, $k_2$, and $k_3$ are real numbers such that $$k_1+k_2>-1/2, \quad k_2 > -1/2, \quad k_3 \ge 0.$$ Define an inner product on ${\mathbb{C}}[\bx^{\pm 1}]^W$ by $$\langle f,g \rangle_{\Delta^{{\mathrm{HO}}}} = \frac{1}{2^n n!} \int_{{\mathbb{T}}^n} f(\bz) g(\bz^{-1}) \Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3) \dd \bz.$$ For each partition $\mu$, we let $$m^{{\mathrm{BC}}}_{\mu}(\bx)=\sum_{\nu \in W \mu} x_1^{\nu_1} \cdots x_n^{\nu_n},$$ where $W\mu$ is the $W$-orbit of $\mu$ (cf. ). These polynomials form a ${\mathbb{C}}$-basis of ${\mathbb{C}}[\bx^{\pm 1}]^W$. Then, there exists a unique family of polynomials $P^{{\mathrm{HO}}}_{\lambda}= P^{{\mathrm{HO}}}_{\lambda}(\bx;k_1,k_2,k_3) \in {\mathbb{C}}[\bx^{\pm 1}]^W$ ($\lambda$ are partitions such that $\ell(\lambda) \le n$) satisfying two conditions: $$P^{{\mathrm{HO}}}_{\lambda}(\bx)= m_{\lambda}^{{\mathrm{BC}}}(\bx)+ \sum_{\mu: \mu < \lambda} u_{\lambda \mu} m^{{\mathrm{BC}}}_{\mu}(\bx), \quad \text{with $u_{\lambda \mu} \in {\mathbb{C}}$}, \qquad\qquad \langle P^{{\mathrm{HO}}}_{\lambda}, P_{\mu}^{{\mathrm{HO}}} \rangle_{\Delta^{{\mathrm{HO}}}} = 0 \quad \text{if $\lambda \not= \mu$}.$$ The Laurent polynomials $P_{\lambda}$ are known as Jacobi polynomials associated with the root system of type $BC_n$ due to Heckman and Opdam, see e.g. [@Diejen; @Heckman; @Mimachi]. They can be seen as BC-analogues of Jack polynomials. 
For a function $f$ on ${\mathbb{T}}^n$, we denote by $\langle f \rangle_{n}^{k_1,k_2,k_3}$ the mean value of $f$ with respect to the weight function $\Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)$: $$\langle f \rangle_{n}^{k_1,k_2,k_3} = \frac{\int_{{\mathbb{T}}^n} f(\bz) \Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)\dd \bz}{ \int_{{\mathbb{T}}^n} \Delta^{{\mathrm{HO}}}(\bz;k_1,k_2,k_3)\dd \bz}.$$ From the three parameters $k_1,k_2,k_3$, we define new parameters $$\tilde{k}_1 = k_1/k_3, \qquad \tilde{k}_2=(k_2+1)/k_3-1, \qquad \tilde{k}_3=1/k_3.$$ Put $$\Psi^{{\mathrm{BC}}}(\bz;x)= \prod_{j=1}^n(1+x z_j)(1+x z_j^{-1}).$$ \[Thm:MainTheorem\] The following relation holds $$\label{eq:MainEq} \left\langle \Psi^{{\mathrm{BC}}}(\bz;x_1)\Psi^{{\mathrm{BC}}}(\bz;x_2) \cdots \Psi^{{\mathrm{BC}}}(\bz;x_m) \right\rangle_{n}^{k_1,k_2,k_3} = (x_1 \cdots x_m)^n P^{{\mathrm{HO}}}_{(n^m)}(x_1,\dots,x_m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3).$$ In order to prove this, we need the following dual Cauchy identity obtained by Mimachi [@Mimachi]. \[Thm:Mimachi\] Let $\bx=(x_1,\dots, x_n)$ and $\by=(y_1,\dots, y_m)$ be sequences of indeterminates. Jacobi polynomials $P^{{\mathrm{HO}}}_{\lambda}$ satisfy the equality $$\prod_{i=1}^n \prod_{j=1}^m (x_i+x_i^{-1} - y_j - y_j^{-1}) = \sum_{\lambda \subset (m^n)} (-1)^{|\tilde{\lambda}|} P^{{\mathrm{HO}}}_{\lambda}(\bx;k_1,k_2,k_3) P^{{\mathrm{HO}}}_{\tilde{\lambda}}(\by;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3),$$ where $\tilde{\lambda}=(n-\lambda_m', n-\lambda_{m-1}', \dots, n-\lambda_1')$. 
We see that $$\Psi^{{\mathrm{BC}}}(\bz;x_1) \Psi^{{\mathrm{BC}}}(\bz;x_2) \cdots \Psi^{{\mathrm{BC}}}(\bz;x_m) = (x_1 \cdots x_m)^n \prod_{i=1}^m \prod_{j=1}^n (x_i + x_i^{-1} + z_j + z_j^{-1}).$$ Using Proposition \[Thm:Mimachi\] we have $$\begin{aligned} & \left\langle \Psi^{{\mathrm{BC}}}(\bz;x_1)\Psi^{{\mathrm{BC}}}(\bz;x_2) \cdots \Psi^{{\mathrm{BC}}}(\bz;x_m) \right\rangle_{n}^{k_1,k_2,k_3} \\ = & (x_1 \cdots x_m)^n \sum_{\lambda \subset (m^n)} P^{{\mathrm{HO}}}_{\tilde{\lambda}}(x_1,\dots, x_m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3) \langle P^{{\mathrm{HO}}}_{\lambda}(\bz;k_1,k_2,k_3) \rangle_{n}^{k_1,k_2,k_3}.\end{aligned}$$ By the orthogonality relation for Jacobi polynomials, we have $$\langle P^{{\mathrm{HO}}}_{\lambda}(\bz;k_1,k_2,k_3) \rangle_{n}^{k_1,k_2,k_3} = \begin{cases} 1, & \text{if $\lambda = (0)$}, \\ 0, & \text{otherwise}, \end{cases}$$ and we thus obtain the theorem. Using Theorem 2.1 in [@Mimachi], we derive a more general form of equation including a Macdonald-Koornwinder polynomial. \[Thm:Main2\] Let $${\mathcal{F}}(m;k_1,k_2,k_3)= \prod_{j=0}^{m-1} \frac{\sqrt{\pi}}{2^{k_1 +2 k_2+j k_3-1} \Gamma(k_1+k_2+\frac{1}{2}+j k_3)}.$$ The $m$-th moment of $\Psi^{{\mathrm{BC}}}(\bz;1)$ is given by $$\left\langle \Psi^{{\mathrm{BC}}}(\bz;1)^m \right\rangle_{n}^{k_1,k_2,k_3} = {\mathcal{F}}(m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3) \cdot \prod_{j=0}^{m-1} \frac{\Gamma(n+ \tilde{k}_1+2\tilde{k}_2+j \tilde{k}_3 ) \Gamma(n+ \tilde{k}_1+\tilde{k}_2+\frac{1}{2}+j \tilde{k}_3 )} {\Gamma(n+ \frac{\tilde{k}_1}{2}+\tilde{k}_2+\frac{j \tilde{k}_3}{2} ) \Gamma(n+ \frac{\tilde{k}_1}{2}+\tilde{k}_2+\frac{1+j \tilde{k}_3}{2} )}.$$ By Theorem \[Thm:MainTheorem\] we have $$\label{eq:MTspecial} \left\langle \Psi^{{\mathrm{BC}}}(\bz;1)^m \right\rangle_{n}^{k_1,k_2,k_3} = P^{{\mathrm{HO}}}_{(n^m)}(1^m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3).$$ The special case $P_{\lambda}^{{\mathrm{HO}}}(1,1,\dots,1;k_1,k_2,k_3)$ is known and is given as follows (see e.g. 
[@Diejen] [^2]): for a partition $\lambda$ of length $\le m$, $$\begin{aligned} P^{{\mathrm{HO}}}_{\lambda}(\underbrace{1, \dots, 1}_m;k_1,k_2,k_3) =& 2^{2|\lambda|} \prod_{1 \le i <j \le m} \frac{(\rho_i+ \rho_j+k_3)_{\lambda_i+\lambda_j} (\rho_i- \rho_j+k_3)_{\lambda_i-\lambda_j}} {(\rho_i+ \rho_j)_{\lambda_i+\lambda_j} (\rho_i- \rho_j)_{\lambda_i-\lambda_j}} \\ & \quad \times \prod_{j=1}^m \frac{(\frac{k_1}{2} +k_2 + \rho_j)_{\lambda_j} (\frac{k_1+1}{2} + \rho_j)_{\lambda_j}} {(2 \rho_j)_{2 \lambda_j}} \end{aligned}$$ with $\rho_j= (m-j)k_3 + \frac{k_1}{2}+k_2$. Here $(a)_n = \Gamma(a+n) / \Gamma(a)$ is the Pochhammer symbol. Substituting $(n^m)$ for $\lambda$, we have $$\begin{aligned} & P^{{\mathrm{HO}}}_{(n^m)} (1^m; k_1,k_2,k_3) \notag \\ =& \prod_{1 \le i <j \le m} \frac{(k_1+2 k_2+(2m-i-j+1)k_3)_{2n}} {(k_1+2 k_2+(2m-i-j)k_3)_{2n}} \cdot \prod_{j=0}^{m-1} \frac{2^{2n} (k_1+2 k_2+j k_3)_n (k_1+k_2+\frac{1}{2}+j k_3)_n} {(k_1 +2 k_2+2 j k_3)_{2n}}. \label{eq:moment_product1}\end{aligned}$$ A simple algebraic manipulation of the first product on the right-hand side of yields $$\prod_{1 \le i <j \le m} \frac{(k_1+2 k_2+(2m-i-j+1)k_3)_{2n}} {(k_1+2 k_2+(2m-i-j)k_3)_{2n}} = \prod_{j=0}^{m-1} \frac{(k_1+2k_2+ 2jk_3)_{2n}}{(k_1+2k_2+j k_3)_{2n}}$$ and therefore we obtain $$P_{(n^m)}^{{\mathrm{HO}}} (1^m; k_1,k_2,k_3) = \prod_{j=0}^{m-1} \frac{2^{2n} (k_1+k_2+\frac{1}{2}+j k_3)_n}{(n+k_1+2k_2+jk_3)_{n}}.$$ Combining the above result with equation , we have $$\label{eq:Main2} \left\langle \Psi^{{\mathrm{BC}}}(\bz;1)^m \right\rangle_{n}^{k_1,k_2,k_3} = \prod_{j=0}^{m-1} \frac{2^{2n} \Gamma(n+ \tilde{k}_1+2\tilde{k}_2+j \tilde{k}_3 ) \Gamma(n+ \tilde{k}_1+\tilde{k}_2+\frac{1}{2}+j \tilde{k}_3 )} {\Gamma( \tilde{k}_1+\tilde{k}_2+\frac{1}{2} +j \tilde{k}_3) \Gamma(2n+ \tilde{k}_1+2\tilde{k}_2+j \tilde{k}_3 )}.$$ Finally, we apply the formula $$\Gamma(2a) = \frac{2^{2a-1}}{\sqrt{\pi}} \Gamma(a) \Gamma(a+\frac{1}{2})$$ to $\Gamma(2n+\tilde{k}_1+2\tilde{k}_2+j 
\tilde{k}_3)$ in equation and we then have the theorem. \[cor:Main\] It holds that $$\left\langle \Psi^{{\mathrm{BC}}}(\bz;1)^m \right\rangle_{n}^{k_1,k_2,k_3} \sim {\mathcal{F}}(m;\tilde{k}_1,\tilde{k}_2,\tilde{k}_3) \cdot n^{m(\tilde{k}_1+\tilde{k}_2)+\frac{1}{2}m(m-1)\tilde{k}_3},$$ as $n \to \infty$. The claim follows from the previous theorem and the asymptotics of the gamma function (cf.): $\Gamma(n+a) \sim \Gamma(n) n^a$ for a constant $a$. Random matrix ensembles associated with compact symmetric spaces ================================================================ Finally, we apply the theorems obtained above to compact symmetric spaces as classified by Cartan. These symmetric spaces are labeled A I, BD I, C II, and so on, see e.g. Table 1 in [@CM]. Let $G/K$ be such a compact symmetric space. Here $G$ is a compact subgroup of $GL(N,{\mathbb{C}})$ for some positive integer $N$, and $K$ is a closed subgroup of $G$. Then the space $G/K$ is realized as the subset $S$ of $G$: $S \simeq G/K$ and the probability measure $\dd M$ on $S$ is induced from the quotient space $G/K$. We consider $S$ as a probability space with the measure $\dd M$ and call it the random matrix ensemble associated with $G/K$. See [@Duenez] for details. The random matrix ensembles considered in §\[subsectionA\], §\[subsectionAI\], and §\[subsectionAII\] are called Dyson’s circular $\beta$-ensembles, see [@Dyson; @Mehta]. The identities in these subsections follow immediately from expressions and (see also Example \[ExFq\]). Similarly, the identities after §\[subsectionB\] follow from Theorem \[Thm:MainTheorem\], Theorem \[Thm:Main2\], and Corollary \[cor:Main\]. Note that the results in §\[subsectionA\], §\[subsectionB\], §\[subsectionC\], and §\[subsectionD\] are results for compact Lie groups (which are not proper symmetric spaces) previously presented in [@BG]. $U(n)$ – type A {#subsectionA} --------------- Consider the unitary group $U(n)$ with the normalized Haar measure.
This space has a simple root system of type A. The corresponding p.d.f. for eigenvalues $z_1,\dots,z_n$ of $M \in U(n)$ is proportional to $\Delta^{{\mathrm{Jack}}}(\bz;1)$. This random matrix ensemble is called the circular unitary ensemble (CUE). For complex numbers $\eta_1,\dots,\eta_L, \eta_{L+1},\dots, \eta_{L+K}$, it follows from equation that $$\begin{aligned} & \left\langle \prod_{i=1}^L \det(I+\eta_i^{-1} M^{-1}) \cdot \prod_{i=1}^K \det(I+\eta_{L+i} M) \right\rangle_{U(n)} \\ =& \left\langle \prod_{i=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_i^{-1}) \cdot \prod_{i=1}^K \Psi^{{\mathrm{A}}}(\bz;\eta_{L+i}) \right\rangle_{n,2} = \prod_{i=1}^L \eta_i^{-n} \cdot s_{(n^L)} (\eta_1,\dots,\eta_{L+K}).\end{aligned}$$ In addition, from equation we obtain $$\left\langle |\det(I+ \xi M)|^{2m} \right\rangle_{U(n)} = \prod_{j=0}^{m-1}\frac{j! (n+j+m)!}{(j+m)! (n+j)!} \sim \prod_{j=0}^{m-1}\frac{j!}{(j+m)!} \cdot n^{m^2}$$ for any $\xi \in {\mathbb{T}}$. $U(n)/O(n)$ – type A I {#subsectionAI} ---------------------- Consider the ensemble $S(n)$ associated with the symmetric space $U(n)/O(n)$. The space $S(n)$ is the set of all symmetric matrices in $U(n)$. The corresponding p.d.f. for eigenvalues $z_1,\dots,z_n$ is proportional to $\Delta^{{\mathrm{Jack}}}(\bz;2) = \prod_{1 \le i<j \le n} |z_i-z_j|$. This random matrix ensemble is called the circular orthogonal ensemble (COE). We have $$\begin{aligned} &\left\langle \prod_{i=1}^L \det(I+\eta_i^{-1} M^{-1}) \cdot \prod_{i=1}^K \det(I+\eta_{L+i} M) \right\rangle_{S(n)} \\ =& \left\langle \prod_{i=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_i^{-1}) \cdot \prod_{i=1}^K \Psi^{{\mathrm{A}}}(\bz;\eta_{L+i}) \right\rangle_{n,1} = \prod_{i=1}^L \eta_i^{-n} \cdot P_{(n^L)}^{{\mathrm{Jack}}} (\eta_1,\dots,\eta_{L+K};1/2).\end{aligned}$$ For $\xi \in {\mathbb{T}}$, we obtain $$\left\langle |\det(I+ \xi M)|^{2m} \right\rangle_{S(n)} = \prod_{j=0}^{m-1} \frac{(2j+1)! (n+2m+2j+1)!}{(2m+2j+1)! 
(n+2j+1)!} \sim \prod_{j=0}^{m-1}\frac{(2j+1)!}{(2m+2j+1)!} \cdot n^{2m^2}.$$ $U(2n)/Sp(2n)$ – type A II {#subsectionAII} -------------------------- Consider the ensemble $S(n)$ associated with the symmetric space $U(2n)/Sp(2n)$. The space $S(n)$ is the set of all self-dual matrices in $U(2n)$, i.e., $M \in S(n)$ is a unitary matrix satisfying $M=J \trans{M} \trans{J}$ with $J=\(\begin{smallmatrix} 0 & I_n \\ -I_n & 0 \end{smallmatrix} \)$. This random matrix ensemble is called the circular symplectic ensemble (CSE). The eigenvalues of $M \in S(n)$ are of the form $z_1,z_1,z_2,z_2,\dots,z_n,z_n$ and so the characteristic polynomial is given as $\det(I+xM)= \prod_{j=1}^n (1+x z_j)^2$. The corresponding p.d.f. for $z_1,\dots,z_n$ is proportional to $\Delta^{{\mathrm{Jack}}}(\bz;1/2)=\prod_{1 \le i<j \le n} |z_i-z_j|^4$. We have $$\begin{aligned} &\left\langle \prod_{i=1}^L \det(I+\eta_i^{-1} M^{-1})^{1/2} \cdot \prod_{i=1}^K \det(I+\eta_{L+i} M)^{1/2} \right\rangle_{S(n)} \\ =& \left\langle \prod_{i=1}^L \Psi^{{\mathrm{A}}}(\bz^{-1};\eta_i^{-1}) \cdot \prod_{i=1}^K \Psi^{{\mathrm{A}}}(\bz;\eta_{L+i}) \right\rangle_{n,4} = \prod_{i=1}^L \eta_i^{-n} \cdot P_{(n^L)}^{{\mathrm{Jack}}} (\eta_1,\dots,\eta_{L+K};2).\end{aligned}$$ For $\xi \in {\mathbb{T}}$, we obtain $$\left\langle |\det(I+ \xi M)|^{2m} \right\rangle_{S(n)} = \prod_{j=0}^{2m-1} \frac{\Gamma(\frac{j+1}{2}) \Gamma(n+m+\frac{j+1}{2})} {\Gamma(m+\frac{j+1}{2}) \Gamma(n+\frac{j+1}{2})} \sim \frac{2^m}{(2m-1)!! \ \prod_{j=1}^{2m-1}(2j-1)!!} \cdot n^{2m^2}.$$ $SO(2n+1)$ – type B {#subsectionB} ------------------- Consider the special orthogonal group $SO(2n+1)$. An element $M$ in $SO(2n+1)$ is an orthogonal matrix in $SL(2n+1,{\mathbb{R}})$, with eigenvalues given by $z_1,z_1^{-1},\cdots, z_n,z_n^{-1},1$. From Weyl’s integral formula, the corresponding p.d.f.
of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;1,0,1)$, and therefore it follows from Theorem \[Thm:MainTheorem\] that $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{SO(2n+1)} = \prod_{i=1}^m (1+x_i) \cdot \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{1,0,1} = \prod_{i=1}^m x_i^n (1+x_i) \cdot P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;1,0,1).$$ Here $P_\lambda^{{\mathrm{HO}}}(x_1,\dots,x_m;1,0,1)$ is just the irreducible character of $SO(2m+1)$ associated with the partition $\lambda$. Theorem \[Thm:Main2\], Corollary \[cor:Main\], and a simple calculation lead to $$\left\langle \det(I+ M)^m \right\rangle_{SO(2n+1)} = 2^{m} \prod_{j=0}^{m-1} \frac{ \Gamma(2n+ 2j+2 ) } {2^{j} (2j+1)!! \ \Gamma(2n+ j+1)} \sim \frac{2^{2m}}{\prod_{j=1}^{m} (2j-1)!!} n^{m^2/2+m/2}$$ in the limit as $n \to \infty$. $Sp(2n)$ – type C {#subsectionC} ----------------- Consider the symplectic group $Sp(2n)$, i.e., a matrix $M \in Sp(2n)$ belongs to $U(2n)$ and satisfies $M J \trans{M}=J$, where $J=\(\begin{smallmatrix} O_n & I_n \\ -I_n & O_n \end{smallmatrix} \)$. The eigenvalues are given by $z_1,z_1^{-1},\cdots, z_n,z_n^{-1}$. The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;0,1,1)$ and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{Sp(2n)} = \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{0,1,1} = \prod_{i=1}^m x_i^n \cdot P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;0,1,1).$$ Here $P_\lambda^{{\mathrm{HO}}}(x_1,\dots,x_m;0,1,1)$ is just the irreducible character of $Sp(2m)$ associated with the partition $\lambda$. We obtain $$\left\langle \det(I+ M)^m \right\rangle_{Sp(2n)} = \prod_{j=0}^{m-1} \frac{\Gamma(2n+2j+3) }{2^{j+1} \cdot (2j+1)!! \ \Gamma(2n+j+2)} \sim \frac{1}{ \prod_{j=1}^{m} (2j-1)!!} \cdot n^{m^2/2+m/2}.$$ $SO(2n)$ – type D {#subsectionD} ----------------- Consider the special orthogonal group $SO(2n)$. 
The eigenvalues of a matrix $M \in SO(2n)$ are of the form $z_1,z_1^{-1},\cdots, z_n,z_n^{-1}$. The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;0,0,1)$, and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{SO(2n)} = \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{0,0,1} = \prod_{i=1}^m x_i^n \cdot P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;0,0,1).$$ Here $P_\lambda^{{\mathrm{HO}}}(x_1,\dots,x_m;0,0,1)$ is just the irreducible character of $O(2m)$ (not $SO(2m)$) associated with the partition $\lambda$. We have $$\left\langle \det(I+ M)^m \right\rangle_{SO(2n)} = \prod_{j=0}^{m-1} \frac{\Gamma(2n+2j)}{2^{j-1} \, (2j-1)!! \ \Gamma(2n+j)} \sim \frac{2^m}{\prod_{j=1}^{m-1} (2j-1)!!} \cdot n^{m^2/2-m/2}.$$ $U(2n+r)/(U(n+r)\times U(n))$ – type A III ------------------------------------------ Let $r$ be a non-negative integer. Consider the random matrix ensemble $G(n,r)$ associated with $U(2n+r)/(U(n+r)\times U(n))$. The explicit expression of a matrix in $G(n,r)$ is omitted here, but may be found in [@Duenez]. The eigenvalues of a matrix $M \in G(n,r) \subset U(2n+r)$ are of the form $$\label{eq:Eigenvalues} z_1,z_1^{-1},\cdots, z_n,z_n^{-1},\underbrace{1,1,\dots, 1}_r.$$ The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;r,\frac{1}{2},1)$, and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{G(n,r)} = \prod_{i=1}^m (1+x_i)^r \cdot \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{r,\frac{1}{2},1} = \prod_{i=1}^m (1+x_i)^r x_i^n \cdot P^{{\mathrm{HO}}}_{(n^m)}(x_1,\dots,x_m;r,\frac{1}{2},1).$$ We obtain $$\begin{aligned} & \left\langle \det(I+ M)^m \right\rangle_{G(n,r)} = 2^{mr} \left\langle \Psi^{{\mathrm{BC}}}(\bz;1) \right\rangle_{n}^{r,\frac{1}{2},1} \\ =& \frac{\pi^{m/2}}{\prod_{j=0}^{m-1} 2^{j} (r+j)! 
} \prod_{j=0}^{m-1} \frac{\Gamma(n+r+j+1)^2} {\Gamma(n+\frac{r+j+1}{2}) \Gamma(n+\frac{r+j}{2}+1)} \sim \frac{\pi^{m/2}}{2^{m(m-1)/2} \prod_{j=0}^{m-1} (r+j)! } \cdot n^{m^2/2 + rm}.\end{aligned}$$ $O(2n+r)/(O(n+r) \times O(n))$ – type BD I ------------------------------------------ Let $r$ be a non-negative integer. Consider the random matrix ensemble $G(n,r)$ associated with the compact symmetric space $O(2n+r)/(O(n+r) \times O(n))$. The eigenvalues of a matrix $M \in G(n,r) \subset O(2n+r)$ are of the form . The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;\frac{r}{2},0,\frac{1}{2})$, and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{G(n,r)} = \prod_{i=1}^m (1+x_i)^r \cdot \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{\frac{r}{2},0,\frac{1}{2}} = \prod_{i=1}^m (1+x_i)^r x_i^n \cdot P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;r,1,2).$$ We obtain $$\left\langle \det(I+ M)^m \right\rangle_{G(n,r)} = 2^{mr} \prod_{j=0}^{m-1} \frac{\Gamma(2n+4j+2r+3)}{2^{2j+r+1}(4j+2r+1)!! \ \Gamma(2n+2j+r+2)} \sim \frac{2^{mr}}{\prod_{j=0}^{m-1}(4j+2r+1)!!} \cdot n^{m^2+rm}.$$ $Sp(2n)/U(n)$ – type C I ------------------------ Consider the random matrix ensemble $S(n)$ associated with the compact symmetric space $Sp(2n) /(Sp(2n)\cap SO(2n)) \simeq Sp(2n)/U(n)$. The eigenvalues of a matrix $M \in S(n) \subset Sp(2n)$ are of the form $z_1,z_1^{-1},\cdots, z_n,z_n^{-1}$. The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;0,\frac{1}{2},\frac{1}{2})$, and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M) \right\rangle_{S(n)} = \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{0,\frac{1}{2},\frac{1}{2}} = \prod_{i=1}^m x_i^n \cdot P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;0,2,2).$$ We obtain $$\left\langle \det(I+ M)^m \right\rangle_{S(n)} = \prod_{j=0}^{m-1} \frac{(n+2j+3) \Gamma(2n+4j+5)}{2^{2j+2}(4j+3)!! 
\ \Gamma(2n+2j+4)} \sim \frac{1}{2^m \prod_{j=1}^m (4j-1)!!} \cdot n^{m^2+m}.$$ $Sp(4n+2r)/(Sp(2n+2r) \times Sp(2n))$ – type C II ------------------------------------------------- Let $r$ be a non-negative integer. Consider the random matrix ensemble $G(n,r)$ associated with the compact symmetric space $Sp(4n+2r)/(Sp(2n+2r) \times Sp(2n))$. The eigenvalues of a matrix $M \in G(n,r) \subset Sp(4n+2r)$ are of the form $$z_1,z_1,z_1^{-1},z_1^{-1},\cdots, z_n,z_n, z_n^{-1},z_n^{-1}, \underbrace{1,\dots,1}_{2r}.$$ The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;2r,\frac{3}{2},2)$, and therefore we have $$\begin{aligned} \left\langle \prod_{i=1}^m \det(I+x_i M)^{1/2} \right\rangle_{G(n,r)} =&\prod_{i=1}^m (1+x_i)^r \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{2r,\frac{3}{2},2} \\ =& \prod_{i=1}^m (1+x_i)^rx_i^n \cdot P^{{\mathrm{HO}}}_{(n^m)}(x_1,\dots,x_m;r,\frac{1}{4},\frac{1}{2}).\end{aligned}$$ We obtain $$\begin{aligned} \left\langle \det(I+ M)^m \right\rangle_{G(n,r)} =& \frac{2^{4mr+m^2+m}}{\prod_{j=0}^{m-1} (4j+4r+1)!!} \cdot \frac{\prod_{p=1}^{4m} \Gamma(n+r+\frac{p+1}{4})}{\prod_{j=1}^{2m} \Gamma(n+\frac{r}{2}+\frac{j}{4}) \Gamma(n+\frac{r+1}{2}+\frac{j}{4})} \\ \sim & \frac{2^{4mr+m^2+m}}{\prod_{j=0}^{m-1} (4j+4r+1)!!} n^{m^2+2mr}.\end{aligned}$$ $SO(4n+2)/U(2n+1)$ – type D III-odd ----------------------------------- Consider the random matrix ensemble $S(n)$ associated with the compact symmetric space $SO(4n+2)/(SO(4n+2) \cap Sp(4n+2)) \simeq SO(4n+2)/U(2n+1)$. The eigenvalues of a matrix $M \in S(n) \subset SO(4n+2)$ are of the form $z_1,z_1,z_1^{-1},z_1^{-1},\cdots, z_n,z_n, z_n^{-1},z_n^{-1}, 1,1$. The corresponding p.d.f. 
of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;2,\frac{1}{2},2)$ and therefore we have $$\begin{aligned} \left\langle \prod_{i=1}^m \det(I+x_i M)^{1/2} \right\rangle_{S(n)} =&\prod_{i=1}^m (1+x_i) \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{2,\frac{1}{2},2} \\ =& \prod_{i=1}^m (1+x_i) x_i^n \cdot P^{{\mathrm{HO}}}_{(n^m)}(x_1,\dots,x_m;1,-\frac{1}{4},\frac{1}{2}).\end{aligned}$$ We obtain $$\left\langle \det(I+ M)^m \right\rangle_{S(n)} = \frac{2^{m^2+5m}}{\prod_{j=1}^{m} (4j-1)!!} \cdot \prod_{j=1}^{2m} \frac{\Gamma(n+\frac{j}{2}+\frac{3}{4}) \Gamma(n+\frac{j}{2})} {\Gamma(n+\frac{j}{4}) \Gamma(n+\frac{j}{4} +\frac{1}{2})} \sim \frac{2^{m^2+5m}}{\prod_{j=1}^{m} (4j-1)!!} \cdot n^{m^2+m}.$$ $SO(4n)/U(2n)$ – type D III-even -------------------------------- Consider the random matrix ensemble $S(n)$ associated with the compact symmetric space $SO(4n)/(SO(4n) \cap Sp(4n)) \simeq SO(4n)/U(2n)$. The eigenvalues of the matrix $M \in S(n) \subset SO(4n)$ are of the form $$z_1,z_1,z_1^{-1},z_1^{-1},\cdots, z_n,z_n, z_n^{-1},z_n^{-1}.$$ The corresponding p.d.f. of $z_1,z_2,\dots,z_n$ is proportional to $\Delta^{{\mathrm{HO}}}(\bz;0,\frac{1}{2},2)$ and therefore we have $$\left\langle \prod_{i=1}^m \det(I+x_i M)^{1/2} \right\rangle_{S(n)} = \left\langle \prod_{i=1}^m \Psi^{{\mathrm{BC}}}(\bz;x_i) \right\rangle_{n}^{0,\frac{1}{2},2} = \prod_{i=1}^m x_i^n \cdot P_{(n^m)}^{{\mathrm{HO}}}(x_1,\dots,x_m;0,-\frac{1}{4},\frac{1}{2}).$$ Hence we obtain $$\left\langle \det(I+ M)^m \right\rangle_{S(n)} = \frac{2^{m^2+m}}{\prod_{j=1}^{m-1} (4j-1)!!} \cdot \prod_{j=0}^{2m-1} \frac{\Gamma(n+\frac{j}{2}+\frac{1}{4}) \Gamma(n+\frac{j-1}{2})} {\Gamma(n+\frac{j-1}{4}) \Gamma(n+\frac{j+1}{4})} \sim \frac{2^{m^2+m}}{\prod_{j=1}^{m-1} (4j-1)!!} \cdot n^{m^2-m}.$$ Final comments ============== We have calculated the average of products of the characteristic polynomials $\langle \prod_{j=1}^m \det(I+x_j M) \rangle$.
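As a quick consistency check of these formulas, note that the type-A (CUE) moment formula of §\[subsectionA\] with $m=1$ reduces to the classical second moment $$\left\langle |\det(I+\xi M)|^{2} \right\rangle_{U(n)} = \frac{0! \, (n+1)!}{1! \, n!} = n+1.$$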
We would also like to calculate the average of the quotient $$\left\langle \frac{\prod_{j=1}^m \det(I+x_j M)}{\prod_{i=1}^l \det(I+y_i M)} \right\rangle_n^{k_1,k_2,k_3}.$$ Expressions for these quotients have been obtained for the classical groups (i.e., $(k_1,k_2,k_3)=(1,0,1), (0,1,1), (0,0,1)$ in our notation) in [@BG], but the derivation of expressions for other cases remains an open problem. <span style="font-variant:small-caps;">Acknowledgements.</span> The author would like to thank Professor Masato Wakayama for bringing to the author’s attention the paper [@Mimachi]. [GaWa]{} G. E. Andrews, R. Askey, and R. Roy, “Special Functions”, Encyclopedia Math. Appl. 71, Cambridge Univ. Press, Cambridge, 1999. D. Bump and P. Diaconis, Toeplitz minors, J. Combin. Theory Ser. A [**97**]{} (2002), 252–271. D. Bump and A. Gamburd, On the averages of characteristic polynomials from classical groups, Comm. Math. Phys. [**265**]{} (2006), 227–274. M. Caselle and U. Magnea, Random matrix theory and symmetric spaces, Physics Reports [**394**]{} (2004), 41–156. J. F. van Diejen, Properties of some families of hypergeometric orthogonal polynomials in several variables, Trans. Amer. Math. Soc. [**351**]{} (1999), 233–270. E. Dueñez, Random matrix ensembles associated to compact symmetric spaces, Comm. Math. Phys. [**244**]{} (2004), 29–61. F. J. Dyson, Statistical theory of the energy levels of complex systems. I, J. Mathematical Phys. [**3**]{} (1962), 140–156. P. J. Forrester and J. P. Keating, Singularity dominated strong fluctuations for some random matrix averages, Comm. Math. Phys. [**250**]{} (2004), 119–131. G. Heckman and H. Schlichtkrull, “Harmonic Analysis and Special Functions on Symmetric spaces”, Perspect. Math. [**16**]{}, Academic Press, San Diego, 1994. K. Johansson, On Szegö’s asymptotic formula for Toeplitz determinants and generalizations, Bull. Sci. Math. (2) [**112**]{} (1988), 257–304. 
—–, On fluctuations of eigenvalues of random Hermitian matrices, Duke Math. J. [**91**]{} (1998), 151–204. J. Kaneko, Selberg integrals and hypergeometric functions associated with Jack polynomials, SIAM J. Math. Anal. [**24**]{} (1993), 1086–1110. —–, $q$-Selberg integrals and Macdonald polynomials, Ann. scient. Éc. Norm. Sup. $4^{\text{e}}$ série [**29**]{} (1996), 583–637. J. P. Keating and N. C. Snaith, Random matrix theory and $\zeta(1/2+it)$, Comm. Math. Phys. [**214**]{} (2000), 57–89. —–, Random matrix theory and $L$-functions at $s=1/2$, Comm. Math. Phys. [**214**]{} (2000), 91–110. I. G. Macdonald, “Symmetric Functions and Hall Polynomials”, 2nd ed., Oxford University Press, Oxford, 1995. S. Matsumoto, Hyperdeterminant expressions for Jack functions of rectangular shapes, arXiv:math/0603033. M. L. Mehta, “Random Matrices”, 3rd ed., Pure and Applied Mathematics (Amsterdam) 142, Elsevier/Academic Press, Amsterdam, 2004. K. Mimachi, A duality of Macdonald-Koornwinder polynomials and its application to integral representations, Duke Math. J. [**107**]{} (2001), 265–281. <span style="font-variant:small-caps;">Sho MATSUMOTO</span>\ Faculty of Mathematics, Kyushu University.\ Hakozaki Higashi-ku, Fukuoka, 812-8581 JAPAN.\ `shom@math.kyushu-u.ac.jp`\ [^1]: Research Fellow of the Japan Society for the Promotion of Science, partially supported by Grant-in-Aid for Scientific Research (C) No. 17006193. [^2]: The connection between our notation and van Diejen’s [@Diejen] is given by $\nu_0 = k_1+k_2, \ \nu_1=k_2, \ \nu=k_3$.
--- abstract: 'We generalize the concept of quasiparticle for one-dimensional (1D) interacting electronic systems. The $\uparrow $ and $\downarrow $ quasiparticles recombine the pseudoparticle colors $c$ and $s$ (charge and spin at zero magnetic field) and are constituted by one many-pseudoparticle [*topological momenton*]{} and one or two pseudoparticles. These excitations cannot be separated. We consider the case of the Hubbard chain. We show that the low-energy electron – quasiparticle transformation has a singular character which justifies the perturbative and non-perturbative nature of the quantum problem in the pseudoparticle and electronic basis, respectively. This follows from the absence of zero-energy electron – quasiparticle overlap in 1D. The existence of Fermi-surface quasiparticles both in 1D and three-dimensional (3D) many-electron systems suggests their existence in quantum liquids in dimensions 1$<$D$<$3. However, whether the electron – quasiparticle overlap can vanish in D$>$1 or whether it becomes finite as soon as we leave 1D remains an unsolved question.' author: - 'J. M. P. Carmelo$^{1}$ and A. H. Castro Neto$^{2}$' --- Electrons, pseudoparticles, and quasiparticles in the\ one-dimensional many-electron problem $^{1}$ Department of Physics, University of Évora, Apartado 94, P-7001 Évora Codex, Portugal\ and Centro de Física das Interacções Fundamentais, I.S.T., P-1096 Lisboa Codex, Portugal $^{2}$ Department of Physics, University of California, Riverside, CA 92521 INTRODUCTION ============ The unconventional electronic properties of novel materials such as the superconducting copper oxides and synthetic quasi-one-dimensional conductors have attracted much attention to the many-electron problem in spatial dimensions 1$\leq$D$\leq$3. A good understanding of [*both*]{} the different and common properties of the 1D and 3D many-electron problems might provide useful indirect information on quantum liquids in dimensions 1$<$D$<$3.
This is important because the direct study of the many-electron problem in dimensions 1$<$D$<$3 is of great complexity. The nature of interacting electronic quantum liquids in dimensions 1$<$D$<$3, including the existence or non-existence of quasiparticles and Fermi surfaces, remains an open question of crucial importance for the clarification of the microscopic mechanisms behind the unconventional properties of the novel materials. In 3D the many-electron quantum problem can often be described in terms of a one-particle quantum problem of quasiparticles [@Pines; @Baym], which interact only weakly. This Fermi liquid of quasiparticles describes successfully the properties of most 3D metals, which are not very sensitive to the presence of electron-electron interactions. There is a one-to-one correspondence between the $\sigma $ quasiparticles and the $\sigma $ electrons of the original non-interacting problem (with $\sigma =\uparrow \, , \downarrow$). Moreover, the coherent part of the $\sigma $ one-electron Green function is quite similar to a non-interacting Green function except that the bare $\sigma $ electron spectrum is replaced by the $\sigma $ quasiparticle spectrum and for an electron renormalization factor, $Z_{\sigma }$, smaller than one and such that $0<Z_{\sigma }<1$. A central point of Fermi-liquid theory is that quasiparticle - quasihole processes describe exact low-energy and small-momentum Hamiltonian eigenstates and “adding” or “removal” of one quasiparticle connects two exact ground states of the many-electron Hamiltonian. On the other hand, in 1D many-electron systems [@Solyom; @Haldane; @Metzner], such as the Hubbard chain solvable by Bethe ansatz (BA) [@Bethe; @Yang; @Lieb; @Korepinrev], the $\sigma $ electron renormalization factor, $Z_{\sigma }$, vanishes [@Anderson; @Carmelo95a]. Therefore, the many-particle problem is not expected to be described in terms of a one-particle problem of Fermi-liquid quasiparticles.
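Schematically (a standard Fermi-liquid expression [@Pines; @Baym], recalled here only to fix notation), the coherent part of the $\sigma $ one-electron Green function reads $$G_{\sigma }(k,\omega) = \frac{Z_{\sigma }}{\omega - \epsilon_{\sigma }(k) + i0^{+}{\mathrm{sgn}}[\epsilon_{\sigma }(k)]} + G_{\sigma }^{{\mathrm{incoh}}}(k,\omega) \, ,$$ where $\epsilon_{\sigma }(k)$ is the quasiparticle spectrum measured from the Fermi level and $G_{\sigma }^{{\mathrm{incoh}}}$ denotes the incoherent part; when $Z_{\sigma }\rightarrow 0$, as happens in 1D, the coherent quasiparticle pole disappears.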
Such non-perturbative electronic problems are usually called Luttinger liquids [@Haldane]. In these systems the two-electron vertex function at the Fermi momentum diverges in the limit of vanishing excitation energy [@Carmelo95a]. In a 3D Fermi liquid this quantity is closely related to the interactions of the quasiparticles [@Pines; @Baym]. Its divergence seems to indicate that there are no quasiparticles in 1D interacting electronic systems. A second possibility is that there are quasiparticles in the 1D many-electron problem but without overlap with the electrons in the limit of vanishing excitation energy. While the different properties of 1D and 3D many-electron problems were the subject of many Luttinger-liquid studies in 1D [@Solyom; @Haldane; @Metzner], the characterization of their common properties is also of great interest because the latter are expected to be present in dimensions 1$<$D$<$3 as well. One example is the Landau-liquid character common to Fermi liquids and some Luttinger liquids which consists in the generation of the low-energy excitations in terms of different momentum-occupation configurations of anti-commuting quantum objects (quasiparticles or pseudoparticles) whose forward-scattering interactions determine the low-energy properties of the quantum liquid. This generalized Landau-liquid theory was first applied in 1D to contact-interaction soluble problems [@Carmelo90] and shortly after also to $1/r^2$-interaction integrable models [@Haldane91]. Within this picture the 1D many-electron problem can also be described in terms of weakly interacting “one-particle” objects, the pseudoparticles, which, however, have no one-to-one correspondence with the electrons, as is shown in this paper. In spite of the absence of a one-to-one correspondence between single pseudoparticles and single electrons, following the studies of Refs.
[@Carmelo90; @Carmelo91b; @Carmelo92] a generalized adiabatic principle for small-momentum pseudoparticle-pseudohole and electron-hole excitations was introduced for 1D many-electron problems in Refs. [@Carmelo92b]. The pseudoparticles of 1D many-electron systems show other similarities with the quasiparticles of a Fermi liquid, their interactions being determined by [*finite*]{} forward-scattering $f$ functions [@Carmelo91b; @Carmelo92; @Carmelo92b]. At constant values of the electron numbers this description of the quantum problem is very similar to Fermi-liquid theory, except for two main differences: (i) the $\uparrow $ and $\downarrow $ quasiparticles are replaced by the $c$ and $s$ pseudoparticles [@Carmelo93; @Carmelo94; @Carmelo94b; @Carmelo94c; @Carmelo95], and (ii) the discrete pseudoparticle momentum (pseudomomentum) is of the usual form $q_j={2\pi\over {N_a}}I_j^{\alpha}$ but the numbers $I_j^{\alpha}$ (with $\alpha =c,s$) are not always integers. They are integers or half integers depending on whether the number of particles in the system is even or odd. This plays a central role in the present quasiparticle problem. The connection of these perturbative pseudoparticles to the non-perturbative 1D electronic basis remains an open problem. By perturbative we mean here the fact that the two-pseudoparticle $f$ functions and forward-scattering amplitudes are finite [@Carmelo92b; @Carmelo94], in contrast to the two-electron vertex functions. The low-energy excitations of the Hubbard chain at constant electron numbers and in a finite magnetic field and chemical potential were shown [@Carmelo91b; @Carmelo92; @Carmelo92b; @Carmelo94; @Carmelo94b; @Carmelo94c] to be $c$ and $s$ pseudoparticle-pseudohole processes relative to the canonical-ensemble ground state. This determines the $c$ and $s$ low-energy separation [@Carmelo94c], which at zero magnetization leads to the so-called charge and spin separation.
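As a consequence of point (ii) above, the two possible pseudomomentum quantizations are $$q_j={2\pi\over {N_a}}I_j^{\alpha} \, , \qquad I_j^{\alpha}=0,\pm 1,\pm 2,\dots \quad {\rm or} \quad I_j^{\alpha}=\pm {1\over 2},\pm {3\over 2},\dots \, ,$$ so that changing the parity of the particle number shifts the whole grid of allowed pseudomomenta by $\pm \pi/N_a$.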
In this paper we find that in addition to the above pseudoparticle-pseudohole excitations there are also Fermi-surface [*quasiparticle*]{} transitions in the 1D many-electron problem. Moreover, it is the study of such quasiparticles which clarifies the complex and open problem of the low-energy electron – pseudoparticle transformation. As in 3D Fermi liquids, the quasiparticle excitation is a transition between two exact ground states of the interacting electronic problem differing in the number of electrons by one. When one electron is added to the electronic system the number of these excitations [*also*]{} increases by one. Naturally, its relation to the electron excitation will depend on the overlap between the states associated with the electron and the quasiparticle excitations, and on how close we are in energy to the initial interacting ground state. Therefore, in order to define the quasiparticle we need to understand the properties of the actual ground state of the problem as, for instance, is given by its exact solution via the BA. We find that in the 1D Hubbard model adding one $\uparrow $ or $\downarrow $ electron of lowest energy is associated with adding one $\uparrow $ or $\downarrow $ quasiparticle, as in a Fermi liquid. These are many-pseudoparticle objects which recombine the colors $c$ and $s$ giving rise to the spin projections $\uparrow $ and $\downarrow $. We find that the quasiparticle is constituted by individual pseudoparticles and by a many-pseudoparticle object of large momentum that we call topological momenton. Importantly, these excitations cannot be separated. Although one quasiparticle is basically one electron, we show that in 1D the quasiparticle – electron transformation is singular because it involves the vanishing one-electron renormalization factor. This also implies a low-energy singular electron - pseudoparticle transformation.
This singular character explains why the problem becomes perturbative in the pseudoparticle basis while it is non-perturbative in the usual electronic picture. The singular nature of the low-energy electron - quasiparticle and electron – pseudoparticle transformations reflects the fact that the one-electron density of states vanishes in the 1D electronic problem when the excitation energy $\omega\rightarrow 0$. The diagonalization of the many-electron problem is, at lowest excitation energy, associated with the singular electron – quasiparticle transformation which absorbs the vanishing electron renormalization factor and maps vanishing electronic spectral weight onto finite quasiparticle and pseudoparticle spectral weight. For instance, by absorbing the renormalization factor the electron - quasiparticle transformation renormalizes divergent two-electron vertex functions onto finite two-quasiparticle scattering parameters. These quantities fully determine the finite $f$ functions and scattering amplitudes of the pseudoparticle theory [@Carmelo92; @Carmelo92b; @Carmelo94b]. The pseudoparticle $f$ functions and amplitudes determine all the static and low-energy quantities of the 1D many-electron problem and are associated with zero-momentum two-pseudoparticle forward scattering. The paper is organized as follows: the pseudoparticle operator basis is summarized in Sec. II. In Sec. III we find the quasiparticle operational expressions in the pseudoparticle basis and characterize the corresponding $c$ and $s$ recombination into the $\uparrow $ and $\downarrow $ spin projections. The singular electron – quasiparticle (and electron – pseudoparticle) transformation is studied in Sec. IV. Finally, in Sec. V we present the concluding remarks. 
THE PERTURBATIVE PSEUDOPARTICLE OPERATOR BASIS ============================================== It is useful for the studies presented in this paper to introduce in this section some basic information on the perturbative pseudoparticle operator basis, as it is obtained directly from the BA solution [@Carmelo94; @Carmelo94b; @Carmelo94c]. We consider the Hubbard model in 1D [@Lieb; @Frahm; @Frahm91] with a finite chemical potential $\mu$ and in the presence of a magnetic field $H$ [@Carmelo92b; @Carmelo94; @Carmelo94b] $$\begin{aligned} \hat{H} = -t\sum_{j,\sigma}\left[c_{j,\sigma}^{\dag }c_{j+1,\sigma}+c_ {j+1,\sigma}^{\dag }c_{j,\sigma}\right] + U\sum_{j} [c_{j,\uparrow}^{\dag }c_{j,\uparrow} - 1/2] [c_{j,\downarrow}^{\dag }c_{j,\downarrow} - 1/2] - \mu \sum_{\sigma} \hat{N}_{\sigma } - 2\mu_0 H\hat{S}_z \, ,\end{aligned}$$ where $c_{j,\sigma}^{\dag }$ and $c_{j,\sigma}$ are the creation and annihilation operators, respectively, for electrons at site $j$ with spin projection $\sigma=\uparrow, \downarrow$. In what follows $k_{F\sigma}=\pi n_{\sigma}$ and $k_F=[k_{F\uparrow}+k_{F\downarrow}]/2=\pi n/2$, where $n_{\sigma}=N_{\sigma}/N_a$ and $n=N/N_a$, and $N_{\sigma}$ and $N_a$ are the number of $\sigma$ electrons and lattice sites, respectively ($N=\sum_{\sigma}N_{\sigma}$). We also consider the spin density, $m=n_{\uparrow}-n_{\downarrow}$. The many-electron problem $(1)$ can be diagonalized using the BA [@Yang; @Lieb]. We consider all finite values of $U$, electron densities $0<n<1$, and spin densities $0<m<n$. For this parameter space the low-energy physics is dominated by the lowest-weight states (LWS’s) of the spin and eta-spin algebras [@Korepin; @Essler] of type I [@Carmelo94; @Carmelo94b; @Carmelo95]. The LWS’s I are described by real BA rapidities, whereas all or some of the BA rapidities which describe the LWS’s II are complex and non-real. 
Both the LWS’s II and the non-LWS’s out of the BA solution [@Korepin] have energy gaps relative to each canonical-ensemble ground state [@Carmelo94; @Carmelo94b; @Carmelo95]. Fortunately, the quasiparticle description involves only LWS’s I because these quantum objects are associated with ground-state – ground-state transitions and in the present parameter space all ground states of the model are LWS’s I. On the other hand, the electronic excitation involves transitions to LWS’s I, LWS’s II, and non-LWS’s, but the electron – quasiparticle transformation involves only LWS’s I. Therefore, our results refer mainly to the Hilbert subspace spanned by the LWS’s I and are valid at energy scales smaller than the above gaps. (Note that in simpler 1D quantum problems of symmetry $U(1)$ the states I span the whole Hilbert space [@Anto].) In this Hilbert subspace the BA solution was shown to refer to an operator algebra which involves two types of [*pseudoparticle*]{} creation (annihilation) operators $b^{\dag }_{q,\alpha }$ ($b_{q,\alpha }$). These obey the usual anti-commuting algebra [@Carmelo94; @Carmelo94b; @Carmelo94c] $$\{b^{\dag }_{q,\alpha},b_{q',\alpha'}\} =\delta_{q,q'}\delta_{\alpha ,\alpha'}, \hspace{0.5cm} \{b^{\dag }_{q,\alpha},b^{\dag }_{q',\alpha'}\}=0, \hspace{0.5cm} \{b_{q,\alpha},b_{q',\alpha'}\}=0 \, .$$ Here $\alpha$ refers to the two pseudoparticle colors $c$ and $s$ [@Carmelo94; @Carmelo94b; @Carmelo94c]. The discrete pseudomomentum values are $$q_j = {2\pi\over {N_a}}I_j^{\alpha } \, ,$$ where $I_j^{\alpha }$ are [*consecutive*]{} integers or half integers. There are $N_{\alpha }^*$ values of $I_j^{\alpha }$, [*i.e.*]{} $j=1,...,N_{\alpha }^*$. A LWS I is specified by the distribution of $N_{\alpha }$ occupied values, which we call $\alpha $ pseudoparticles, over the $N_{\alpha }^*$ available values. There are $N_{\alpha }^*- N_{\alpha }$ corresponding empty values, which we call $\alpha $ pseudoholes. 
These are good quantum numbers such that $$N_c^* = N_a \, ; \hspace{0.5cm} N_c = N \, ; \hspace{0.5cm} N_s^* = N_{\uparrow} \, ; \hspace{0.5cm} N_s = N_{\downarrow} \, .$$ The numbers $I_j^c$ are integers (or half integers) for $N_s$ even (or odd), and $I_j^s$ are integers (or half integers) for $N_s^*$ odd (or even) [@Lieb]. All the states I can be generated by acting on the vacuum $|V\rangle $ (zero-electron density) with suitable combinations of pseudoparticle operators [@Carmelo94; @Carmelo94b]. The ground state $$|0;N_{\sigma }, N_{-\sigma}\rangle = \prod_{\alpha=c,s} [\prod_{q=q_{F\alpha }^{(-)}}^{q_{F\alpha }^{(+)}} b^{\dag }_{q,\alpha }] |V\rangle \, ,$$ and all LWS’s I are Slater determinants of pseudoparticle levels. In Appendix A we define the pseudo-Fermi points, $q_{F\alpha }^{(\pm )}$, of $(5)$. In that Appendix we also present other quantities of the pseudoparticle representation which are useful for the present study. In the pseudoparticle basis spanned by the LWS’s I and in normal order relative to the ground state $(5)$ the Hamiltonian $(1)$ has the following form [@Carmelo94; @Carmelo94c] $$:\hat{H}: = \sum_{i=1}^{\infty}\hat{H}^{(i)} \, ,$$ where, to second pseudoparticle scattering order $$\begin{aligned} \hat{H}^{(1)} & = & \sum_{q,\alpha} \epsilon_{\alpha}(q):\hat{N}_{\alpha}(q): \, ;\nonumber\\ \hat{H}^{(2)} & = & {1\over {N_a}}\sum_{q,\alpha} \sum_{q',\alpha'} {1\over 2}f_{\alpha\alpha'}(q,q') :\hat{N}_{\alpha}(q)::\hat{N}_{\alpha'}(q'): \, .\end{aligned}$$ Here $(7)$ are the Hamiltonian terms which are [ *relevant*]{} at low energy [@Carmelo94b]. Furthermore, at low energy and small momentum the only relevant term is the non-interacting term $\hat{H}^{(1)}$. Therefore, the $c$ and $s$ pseudoparticles are non-interacting at the small-momentum and low-energy fixed point and the spectrum is described in terms of the bands $\epsilon_{\alpha}(q)$ (studied in detail in Ref. 
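The quantization rules above are simple enough to tabulate directly. The following minimal Python sketch (the small values of $N_a$, $N_{\uparrow }$, and $N_{\downarrow }$ are hypothetical toy numbers, not values used anywhere in the text) builds the consecutive quantum numbers $I_j^{\alpha }$ of Eq. $(3)$, choosing integers or half integers according to the parities of $N_s=N_{\downarrow }$ and $N_s^*=N_{\uparrow }$:

```python
from fractions import Fraction

def quantum_numbers(n_values, half_integer):
    """Consecutive I_j of Eq. (3): half integers ..., -3/2, -1/2, 1/2, ...
    or integers centered at zero (n_values assumed odd in the integer case).
    Multiplying by 2*pi/N_a gives the pseudomomenta q_j."""
    start = Fraction(1 - n_values, 2) if half_integer else Fraction(-(n_values // 2))
    return [start + j for j in range(n_values)]

# Hypothetical small system: N_up = N_dn = 3 (both odd), N_a = 12 sites.
N_a, N_up, N_dn = 12, 3, 3
N_s, N_s_star = N_dn, N_up                # N_s = N_down, N_s* = N_up
# I_j^c are half integers for N_s odd; I_j^s are integers for N_s* odd.
I_c = quantum_numbers(N_a, half_integer=(N_s % 2 != 0))       # N_c* = N_a values
I_s = quantum_numbers(N_s_star, half_integer=(N_s_star % 2 == 0))
```

With both electron numbers odd (subspace of type (d) in the notation of Sec. III) the $c$ band carries half-integer and the $s$ band integer quantum numbers, consistent with the parity rules quoted above.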
[@Carmelo91b]) in a pseudo-Brillouin zone which goes between $q_c^{(-)}\approx -\pi$ and $q_c^{(+)}\approx \pi$ for the $c$ pseudoparticles and $q_s^{(-)}\approx -k_{F\uparrow}$ and $q_s^{(+)}\approx k_{F\uparrow}$ for the $s$ pseudoparticles. In the ground state $(5)$ these are occupied for $q_{F\alpha}^{(-)}\leq q\leq q_{F\alpha}^{(+)}$, where the pseudo-Fermi points (A1)-(A3) are such that $q_{Fc}^{(\pm)}\approx \pm 2k_F$ and $q_{Fs}^{(\pm)}\approx \pm k_{F\downarrow}$ (see Appendix A). At higher energies and (or) large momenta the pseudoparticles start to interact via zero-momentum transfer forward-scattering processes of the Hamiltonian $(6)-(7)$. As in a Fermi liquid, these are associated with $f$ functions and Landau parameters [@Carmelo92; @Carmelo94], whose expressions we present in Appendix A, where we also present the expressions for simple pseudoparticle-pseudohole operators which are useful for the studies of next sections. THE QUASIPARTICLES AND $c$ AND $s$ RECOMBINATION ================================================ In this section we introduce the 1D quasiparticle and express it in the pseudoparticle basis. In Sec. IV we find that this clarifies the low-energy transformation between the electrons and the pseudoparticles. We define the quasiparticle operator as the generator of a ground-state – ground-state transition. The study of ground states of form $(5)$ differing in the number of $\sigma $ electrons by one reveals that their relative momentum equals [*precisely*]{} the $U=0$ Fermi points, $\pm k_{F\sigma}$. 
Following our definition, the quasiparticle operator, ${\tilde{c}}^{\dag }_{k_{F\sigma },\sigma }$, which creates one quasiparticle with spin projection $\sigma$ and momentum $k_{F\sigma}$ is such that $${\tilde{c}}^{\dag }_{k_{F\sigma},\sigma} |0; N_{\sigma}, N_{-\sigma}\rangle = |0; N_{\sigma} + 1, N_{\sigma}\rangle \, .$$ The quasiparticle operator defines a one-to-one correspondence between the addition of one electron to the system and the creation of one quasiparticle: the electronic excitation, $c^{\dag }_{k_{F\sigma},\sigma}|0; N_{\sigma}, N_{-\sigma}\rangle$, defined at the Fermi momentum but arbitrary energy, contains a single quasiparticle, as we show in Sec. IV. In that section we will study this excitation as we take the energy to be zero, that is, as we approach the Fermi surface, where the problem is equivalent to Landau’s. Since we are discussing the problem of addition or removal of one particle the boundary conditions play a crucial role. As discussed in Secs. I and II, the available Hamiltonian eigenstates I depend on the discrete numbers $I_j^{\alpha}$ of Eq. $(3)$ which can be integers or half-integers depending on whether the number of particles in the system is even or odd \[the pseudomomentum is given by Eq. $(3)$\]. When we add or remove one electron to or from the many-body system we have to consider the transitions between states with integer and half-integer quantum numbers \[or equivalently, between states with an odd (even) and even (odd) number of $\sigma $ electrons\]. The transition between two ground states differing in the number of electrons by one is associated with two different processes: a backflow in the Hilbert space of the pseudoparticles with a shift of all the pseudomomenta by $\pm\frac{\pi}{N_a}$ \[associated with the change from even (odd) to odd (even) number of particles\], which we call [*topological momenton*]{}, and the creation of one or a pair of pseudoparticles at the pseudo-Fermi points. 
According to the integer or half-integer character of the $I_j^{\alpha}$ numbers we have four “topological” types of Hilbert subspaces. Since that character depends on the parities of the electron numbers, we label these subspaces by the parities of $N_{\uparrow}$ and $N_{\downarrow}$, respectively: (a) even, even; (b) even, odd; (c) odd, even; and (d) odd, odd. The ground-state total momentum expression is different for each type of Hilbert subspace in such a way that the relative momentum, $\Delta P$, of $U>0$ ground states differing in $N_{\sigma }$ by one equals the $U=0$ Fermi points, i.e. $\Delta P=\pm k_{F\sigma }$. Moreover, we find that the above quasiparticle operator $\tilde{c}^{\dag }_{k_{F\sigma },\sigma }$ involves the generator of one low-energy and large-momentum topological momenton. The $\alpha $ topological momenton is associated with the backflow of the $\alpha $ pseudoparticle pseudomomentum band and cannot occur without a second type of excitation associated with the adding or removal of pseudoparticles. The $\alpha $-topological-momenton generator, $U^{\pm 1}_{\alpha }$, is a unitary operator which controls the topological transformations of the pseudoparticle Hamiltonian $(6)-(7)$. For instance, in the $\Delta P=\pm k_{F\uparrow }$ transitions (a)$\rightarrow $(c) and (b)$\rightarrow $(d) the Hamiltonian $(6)-(7)$ transforms as $$:H: \rightarrow U^{\pm\Delta N_{\uparrow}}_s :H: U^{\mp\Delta N_{\uparrow}}_s \, ,$$ and in the $\Delta P=\pm k_{F\downarrow }$ transitions (a)$\rightarrow $(b) and (c)$\rightarrow $(d) as $$:H:\rightarrow U^{\pm \Delta N_{\downarrow}}_c:H:U^{\mp \Delta N_{\downarrow}}_c \, ,$$ where $\Delta N_{\sigma}=\pm 1$ and the expression of the generator $U^{\pm 1}_{\alpha }$ is obtained below. In order to arrive at the expressions for the quasiparticle operators and associated topological-momenton generators $U^{\pm 1}_{\alpha }$ we refer again to the ground-state pseudoparticle representation $(5)$. 
For simplicity, we consider that the initial ground state of form $(5)$ is non degenerate and has zero momentum. Following equations (A1)-(A3) this corresponds to the situation when both $N_{\uparrow }$ and $N_{\downarrow }$ are odd, i.e. the initial Hilbert subspace is of type (d). However, note that our results are independent of the choice of initial ground state. The pseudoparticle numbers of the initial state are $N_c=N_{\uparrow }+N_{\downarrow }$ and $N_s=N_{\downarrow }$ and the pseudo-Fermi points $q_{F\alpha }^{(\pm)}$ are given in Eq. (A1). We express the electronic and pseudoparticle numbers and pseudo-Fermi points of the final states in terms of the corresponding values for the initial state. We consider here the case when the final ground state has numbers $N_{\uparrow }$ and $N_{\downarrow }+1$ and momentum $k_{F\downarrow }$. The procedures for final states with these numbers and momentum $-k_{F\downarrow }$ or numbers $N_{\uparrow }+1$ and $N_{\downarrow }$ and momenta $\pm k_{F\uparrow }$ are similar and are omitted here. The above final state belongs to the Hilbert subspace (c). 
Our goal is to find the quasiparticle operator $\tilde{c}^{\dag}_{k_{F\downarrow },\downarrow}$ such that $$|0; N_{\uparrow}, N_{\downarrow}+1\rangle = \tilde{c}^{\dag}_{k_{F\downarrow },\downarrow} |0; N_{\uparrow}, N_{\downarrow}\rangle\, .$$ Taking into account the changes in the pseudoparticle quantum numbers associated with this (d)$\rightarrow $(c) transition we can write the final state as follows $$|0; N_{\uparrow}, N_{\downarrow}+1\rangle = \prod_{q=q^{(-)}_{Fc}-\frac{\pi}{N_a}}^{q^{(+)}_{Fc}+ \frac{\pi}{N_a}}\prod_{q=q^{(-)}_{Fs}}^{q^{(+)}_{Fs}+\frac{2\pi}{N_a}} b^{\dag}_{q,c} b^{\dag}_{q,s} |V\rangle \, ,$$ which can be rewritten as $$|0; N_{\uparrow}, N_{\downarrow}+1\rangle = b^{\dag}_{q^{(+)}_{Fc}+\frac{\pi}{N_a},c} b^{\dag}_{q^{(+)}_{Fs}+\frac{2\pi}{N_a},s} \prod_{q=q^{(-)}_{Fc}}^{q^{(+)}_{Fc}} b^{\dag}_{q-\frac{\pi}{N_a},c} \prod_{q=q^{(-)}_{Fs}}^{q^{(+)}_{Fs}} b^{\dag}_{q,s} |V\rangle \, ,$$ and further, as $$|0; N_{\uparrow}, N_{\downarrow}+1\rangle = b^{\dag}_{q^{(+)}_{Fc}+\frac{\pi}{N_a},c} b^{\dag}_{q^{(+)}_{Fs}+\frac{2\pi}{N_a},s} U_c^{+1}|0; N_{\uparrow}, N_{\downarrow}\rangle \, ,$$ where $U_c^{+1}$ is the generator of expression $(10)$. Both this operator and the operator $U_s^{+1}$ of Eq. $(9)$ obey the relation $$U^{\pm 1}_{\alpha }b^{\dag }_{q,\alpha }U^{\mp 1}_{\alpha }= b^{\dag }_{q\mp {\pi\over {N_a}},\alpha } \, .$$ The pseudoparticle vacuum remains invariant under the application of $U^{\pm 1}_{\alpha }$ $$U^{\pm 1}_{\alpha }|V\rangle = |V\rangle \, .$$ (The $s$-topological-momenton generator, $U_s^{+1}$, appears if we consider the corresponding expressions for the up-spin electron.) Note that the $\alpha $ topological momenton is an excitation which only changes the integer or half-integer character of the corresponding pseudoparticle quantum numbers $I_j^{\alpha }$. 
In Appendix B we derive the following expression for the generator $U^{\pm 1}_{\alpha }$ $$U^{\pm 1}_{\alpha }=U_{\alpha } \left(\pm\frac{\pi}{N_a}\right) \, ,$$ where $$U_{\alpha}(\delta q) = \exp\left\{ - i\delta q G_{\alpha}\right\} \, ,$$ and $$G_{\alpha} = -i\sum_{q} \left[{\partial\over {\partial q}} b^{\dag }_{q,\alpha }\right]b_{q,\alpha } \, ,$$ is the Hermitian generator of the $\mp {\pi\over {N_a}}$ topological $\alpha $ pseudomomentum translation. The operator $U^{\pm 1}_{\alpha }$ has the following discrete representation $$U^{\pm 1}_{\alpha } = \exp\left\{\sum_{q} b^{\dag }_{q\pm {\pi\over {N_a}},\alpha }b_{q,\alpha }\right\} \, .$$ When acting on the initial ground state of form $(5)$ the operator $U^{\pm 1}_{\alpha }$ produces a vanishing-energy $\alpha $ topological momenton of large momentum, $k=\mp N_{\alpha }{\pi\over {N_a}}\simeq q_{F\alpha}^{(\mp)}$. As noted above, the topological momenton is always combined with the addition or removal of pseudoparticles. In the two following equations we change notation and use $q_{F\alpha }^{(\pm)}$ to refer to the pseudo-Fermi points of the final state (otherwise our reference state is the initial state). Comparing equations $(11)$ and $(14)$ it follows that $$\tilde{c}^{\dag }_{\pm k_{F\downarrow },\downarrow } = b^{\dag }_{q_{Fc}^{(\pm)},c}b^{\dag }_{q_{Fs}^{(\pm)},s} U^{\pm 1}_{c } \, ,$$ and a similar procedure for the up-spin electron leads to $$\tilde{c}^{\dag }_{\pm k_{F\uparrow },\uparrow } = b^{\dag }_{q_{Fc}^{(\pm)},c} U^{\pm 1}_{s} \, .$$ According to these equations the $\sigma $ quasiparticles are constituted by one topological momenton and one or two pseudoparticles. The topological momenton cannot be separated from the pseudoparticle excitation, i.e. both these excitations are confined inside the quasiparticle. Moreover, since the generators $(17)-(20)$ have a many-pseudoparticle character, following Eqs. $(21)-(22)$ the quasiparticle is a many-pseudoparticle object. 
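As a concrete illustration of Eq. $(15)$, one can represent a band occupancy by its set of quantum numbers $I_j^{\alpha }$ (in units of $2\pi/N_a$) and implement $U^{\pm 1}_{\alpha }$ as the rigid $\mp\pi/N_a$ shift of every occupied pseudomomentum. The following toy Python sketch (all numerical values are hypothetical) checks that this shift flips the integer/half-integer character of the $I_j^{\alpha }$ and carries the large momentum $k=\mp N_{\alpha }\pi/N_a$ quoted above:

```python
import math
from fractions import Fraction

HALF = Fraction(1, 2)   # pi/N_a measured in units of 2*pi/N_a

def U(occupied, sign):
    """Action of U^{+1} (sign=+1) or U^{-1} (sign=-1) on an occupancy:
    every I_j -> I_j - sign/2, i.e. every pseudomomentum
    q -> q - sign*pi/N_a, as in Eq. (15)."""
    return tuple(I - sign * HALF for I in occupied)

def total_momentum(occupied, N_a):
    """Total momentum (2*pi/N_a) * sum_j I_j carried by the occupancy."""
    return 2 * math.pi / N_a * float(sum(occupied))

# Toy band: N_alpha = 3 pseudoparticles with integer quantum numbers.
occ = (Fraction(-1), Fraction(0), Fraction(1))
N_a = 12
shifted = U(occ, +1)
dk = total_momentum(shifted, N_a) - total_momentum(occ, N_a)
# dk = -N_alpha*pi/N_a: a momentum of order N_alpha/N_a, i.e. "large"
```

The integer quantum numbers become half integers, and the momentum transfer is $-3\pi/N_a$ for the three shifted pseudoparticles, in line with the statement that the topological momenton carries $k=\mp N_{\alpha }\pi/N_a$ at vanishing energy.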
Note also that both the $\downarrow $ and $\uparrow $ quasiparticles $(21)$ and $(22)$, respectively, are constituted by $c$ and $s$ excitations. Therefore, the $\sigma $ quasiparticle is a quantum object which recombines the pseudoparticle colors $c$ and $s$ (charge and spin in the limit $m\rightarrow 0$ [@Carmelo94]), giving rise to spin projection $\uparrow $ or $\downarrow $. It has “Fermi surface” at $\pm k_{F\sigma }$. However, two-quasiparticle objects can be of two-pseudoparticle character because the product of the two corresponding many-pseudoparticle operators is such that $U^{+ 1}_{\alpha }U^{- 1}_{\alpha }=\openone$, as for the triplet pair $\tilde{c}^{\dag }_{+k_{F\uparrow },\uparrow } \tilde{c}^{\dag }_{-k_{F\uparrow },\uparrow }= b^{\dag }_{q_{Fc}^{(+)},c}b^{\dag }_{q_{Fc}^{(-)},c}$. Such a triplet quasiparticle pair is constituted only by individual pseudoparticles because it involves the mutual annihilation of the two topological momentons of generators $U^{+ 1}_{\alpha }$ and $U^{- 1}_{\alpha }$. Therefore, relations $(21)$ and $(22)$ which connect quasiparticles and pseudoparticles have some similarities with the Jordan-Wigner transformation. Finally, we emphasize that the Hamiltonian-eigenstate generators of Eqs. $(26)$ and $(27)$ of Ref. [@Carmelo94b] are not general and refer to finite densities of added and removed electrons, respectively, corresponding to even electron numbers. The corresponding general generator expressions will be studied elsewhere and involve the topological-momenton generators $(17)-(20)$. THE ELECTRON - QUASIPARTICLE TRANSFORMATION =========================================== In this section we study the relation of the 1D quasiparticle introduced in Sec. III to the electron. 
This study brings about the question of the low-excitation-energy relation between the electronic operators $c_{k,\sigma}^{\dag }$ in momentum space at $k=\pm k_{F\sigma }$ and the pseudoparticle operators $b_{q,\alpha}^{\dag }$ at the pseudo-Fermi points. The quasiparticle operator, ${\tilde{c}}^{\dag }_{k_{F\sigma },\sigma}$, which creates one quasiparticle with spin projection $\sigma$ and momentum $k_{F\sigma}$, is defined by Eq. $(8)$. In the pseudoparticle basis the $\sigma $ quasiparticle operator has the form $(21)$ or $(22)$. However, since we do not know the relation between the electron and the pseudoparticles, Eqs. $(21)$ and $(22)$ do not provide direct information on the electron content of the $\sigma $ quasiparticle. Equation $(8)$ tells us that the quasiparticle operator defines a one-to-one correspondence between the addition of one electron to the system and the creation of one quasiparticle, exactly as we expect from the Landau theory in 3D: the electronic excitation, $c^{\dag }_{k_{F\sigma },\sigma}|0; N_{\uparrow}=N_c-N_s, N_{\downarrow}=N_s\rangle$, defined at the Fermi momentum but arbitrary energy, contains a single $\sigma $ quasiparticle, as we show below. When we add or remove one electron to or from the many-body system the excitation includes the transition to the suitable final ground state as well as transitions to excited states. The former transition is nothing but the quasiparticle excitation of Sec. III. Although our final results refer to momenta $k=\pm k_{F\sigma }$, in the following analysis we consider for simplicity only the momentum $k=k_{F\sigma }$. In order to relate the quasiparticle operators $\tilde{c}^{\dag }_{k_{F\sigma },\sigma }$ to the electronic operators $c^{\dag }_{k_{F\sigma },\sigma }$ we start by defining the Hilbert subspace where the low-energy $\omega $ projection of the state $$c^{\dag }_{k_{F\sigma},\sigma} |0; N_{\sigma}, N_{-\sigma} \rangle \, ,$$ is contained. 
Notice that the electron excitation $(23)$ is [*not*]{} an eigenstate of the interacting problem: when acting onto the initial ground state $|0;i\rangle\equiv |0; N_{\sigma}, N_{-\sigma} \rangle$ the electronic operator $c^{\dag }_{k_{F\sigma},\sigma }$ can be written as $$c^{\dag }_{k_{F\sigma},\sigma } = \left[\langle 0;f|c^{\dag }_{k_{F\sigma},\sigma }|0;i\rangle + {\hat{R}}\right] \tilde{c}^{\dag }_{k_{F\sigma},\sigma } \, ,$$ where $${\hat{R}}=\sum_{\gamma}\langle \gamma;k=0|c^{\dag }_{k_{F\sigma},\sigma }|0;i\rangle {\hat{A}}_{\gamma} \, ,$$ and $$|\gamma;k=0\rangle = {\hat{A}}_{\gamma} \tilde{c}^{\dag }_{k_{F\sigma },\sigma }|0;i\rangle = {\hat{A}}_{\gamma}|0;f\rangle \, .$$ Here $|0;f\rangle\equiv |0; N_{\sigma}+1, N_{-\sigma} \rangle$ denotes the final ground state, $\gamma$ represents the set of quantum numbers needed to specify each Hamiltonian eigenstate present in the excitation $(23)$, and ${\hat{A}}_{\gamma}$ is the corresponding generator. The first term of the rhs of Eq. $(24)$ refers to the ground state - ground state transition and the operator $\hat{R}$ generates $k=0$ transitions from $|0;f\rangle $ to states I, states II, and non LWS’s. Therefore, the electron excitation $(23)$ contains the quantum superposition of the suitable final ground state $|0;f\rangle$, of excited states I relative to that state which result from multiple pseudoparticle-pseudohole processes, and of LWS’s II and non-LWS’s. All these states have the same electron numbers as the final ground state. The transitions to LWS’s II and to non-LWS’s require a minimal finite energy which equals their gap relative to the final ground state. The set of all these Hamiltonian eigenstates spans the Hilbert subspace onto which the electronic operator $c^{\dag }_{k_{F\sigma },\sigma }$ of Eq. $(24)$ projects the initial ground state. 
In order to show that the ground-state – ground-state leading-order term of $(24)$ controls the low-energy physics, we study the low-energy sector of the above Hilbert subspace. This is spanned by low-energy states I. In the case of these states the generator ${\hat{A}}_{\gamma}$ of Eq. $(26)$ reads $${\hat{A}}_{\gamma}\equiv {\hat{A}}_{\{N_{ph}^{\alpha ,\iota}\},l} = \prod_{\alpha=c,s} {\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l) \, ,$$ where the operator ${\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l)$ is given in Eq. $(56)$ of Ref. [@Carmelo94b] and produces a number $N_{ph}^{\alpha ,\iota}$ of $\alpha ,\iota$ pseudoparticle-pseudohole processes onto the final ground state. Here $\iota =sgn (q)1=\pm 1$ defines the right ($\iota=1$) and left ($\iota=-1$) pseudoparticle movers, $\{N_{ph}^{\alpha ,\iota}\}$ is a short notation for $$\{N_{ph}^{\alpha ,\iota}\}\equiv N_{ph}^{c,+1}, N_{ph}^{c,-1}, N_{ph}^{s,+1}, N_{ph}^{s,-1} \, ,$$ and $l$ is a quantum number which distinguishes different pseudoparticle-pseudohole distributions characterized by the same values for the numbers $(28)$. In the case of the lowest-energy states I the above set of quantum numbers $\gamma $ is thus given by $\gamma\equiv \{N_{ph}^{\alpha ,\iota}\},l$. (We have introduced the argument $(l)$ in the operator $L^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l)$ which for the same value of the $N_{ph}^{\alpha\iota}$ number defines different $\alpha\iota$ pseudoparticle - pseudohole configurations associated with different choices of the pseudomomenta in the summation of expression $(56)$ of Ref. [@Carmelo94b].) 
In the particular case of the lowest-energy states expression $(26)$ reads $$|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle = {\hat{A}}_{\{N_{ph}^{\alpha ,\iota}\},l} \tilde{c}^{\dag }_{k_{F\sigma },\sigma }|0;i\rangle = {\hat{A}}_{\{N_{ph}^{\alpha ,\iota}\},l}|0;f\rangle \, .$$ The full electron – quasiparticle transformation $(24)$ involves other Hamiltonian eigenstates which are irrelevant for the quasiparticle problem studied in the present paper. Therefore, we omit here the study of the general generators ${\hat{A}}_{\gamma}$ of Eq. $(26)$. The momentum expression (relative to the final ground state) of Hamiltonian eigenstates with generators of the general form $(27)$ is [@Carmelo94b] $$k = {2\pi\over {N_a}}\sum_{\alpha ,\iota}\iota N_{ph}^{\alpha\iota} \, .$$ Since our states $|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle$ have zero momentum relative to the final ground state they have restrictions on the choice of the numbers $(28)$. For these states these numbers are such that $$\sum_{\alpha ,\iota}\iota N_{ph}^{\alpha ,\iota} = 0 \, ,$$ which implies that $$\sum_{\alpha }N_{ph}^{\alpha ,+1} = \sum_{\alpha }N_{ph}^{\alpha ,-1} = \sum_{\alpha }N_{ph}^{\alpha ,\iota} \, .$$ Since $$N_{ph}^{\alpha ,\iota}=1,2,3,\ldots \, ,$$ it follows from Eqs. $(31)-(33)$ that $$\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota} = 2,4,6,8,\ldots \, ,$$ is always an even positive integer. The vanishing chemical-potential excitation energy, $$\omega^0_{\sigma }=\mu(N_{\sigma }+1,N_{-\sigma }) -\mu(N_{\sigma },N_{-\sigma }) \, ,$$ can be evaluated by use of the Hamiltonian $(6)-(7)$ and is given by $$\omega^0_{\uparrow } = {\pi\over {2N_a}}\left[v_c + F_{cc}^1 + v_s + F_{ss}^1 - 2F_{cs}^1 + v_c + F_{cc}^0\right] \, ,$$ and $$\omega^0_{\downarrow } = {\pi\over {2N_a}}\left[v_s + F_{ss}^1 + v_c + F_{cc}^0 + v_s + F_{ss}^0 + 2F_{cs}^0\right] \, ,$$ for up and down spin, respectively, and involves the pseudoparticle velocities (A6) and Landau parameters (A8). 
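The counting rules $(31)-(34)$ are easy to verify by brute force. A short Python sketch enumerating the numbers $(28)$ up to a small cutoff confirms that zero relative momentum forces $\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota}$ to be even, and that the minimal value $2$ is realized by exactly four configurations:

```python
from itertools import product

def zero_momentum_configs(n_max):
    """All tuples (N_c+, N_c-, N_s+, N_s-) with at least one
    pseudoparticle-pseudohole process and zero net momentum, Eq. (31):
    the iota-weighted sum of the N_ph numbers vanishes."""
    configs = []
    for n_cp, n_cm, n_sp, n_sm in product(range(n_max + 1), repeat=4):
        if n_cp + n_cm + n_sp + n_sm == 0:
            continue                        # the final ground state itself
        if n_cp + n_sp == n_cm + n_sm:      # Eq. (31), equivalently Eq. (32)
            configs.append((n_cp, n_cm, n_sp, n_sm))
    return configs

configs = zero_momentum_configs(2)
minimal = {c for c in configs if sum(c) == 2}
```

The four minimal configurations are precisely the states $|1,1,0,0;k=0\rangle$, $|0,0,1,1;k=0\rangle$, $|1,0,0,1;k=0\rangle$, and $|0,1,1,0;k=0\rangle$ discussed below; the evenness of the total follows since the right- and left-mover counts must balance.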
Since we measure the chemical potential from its value at the canonical ensemble of the reference initial ground state, i.e. we consider $\mu(N_{\sigma },N_{-\sigma })=0$, $\omega^0_{\sigma }$ also measures the ground-state excitation energy $\omega^0_{\sigma }=E_0(N_{\sigma }+1,N_{-\sigma })-E_0(N_{\sigma },N_{-\sigma })$. The excitation energies $\omega (\{N_{ph}^{\alpha ,\iota}\})$ of the states $|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle$ (relative to the initial ground state) involve the energy $\omega^0_{\sigma }$ and are $l$ independent. They are given by $$\omega (\{N_{ph}^{\alpha ,\iota}\}) = \omega^0_{\sigma } + {2\pi\over {N_a}}\sum_{\alpha ,\iota} v_{\alpha} N_{ph}^{\alpha ,\iota} \, .$$ We denote by $N_{\{N_{ph}^{\alpha ,\iota}\}}$ the number of these states which obey the conditions $(31)$, $(32)$, and $(34)$ and have the same values for the numbers $(28)$. In order to study the main corrections to the (quasiparticle) ground-state – ground-state transition it is useful to consider the simplest case when $\sum_{\alpha ,\iota}N_{ph}^{\alpha ,\iota}=2$. In this case we have $N_{\{N_{ph}^{\alpha ,\iota}\}}=1$ and, therefore, we can omit the index $l$. There are four such Hamiltonian eigenstates. Using the notation of the right-hand side (rhs) of Eq. $(28)$ these states are $|1,1,0,0;k=0\rangle $, $|0,0,1,1;k=0\rangle $, $|1,0,0,1;k=0\rangle $, and $|0,1,1,0;k=0\rangle $. 
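Equation $(38)$ states that the excitation energies are additive in the pseudoparticle-pseudohole numbers and $l$ independent. A minimal numerical sketch illustrates this for the four simplest states; the velocities $v_c$, $v_s$, the value of $\omega^0_{\sigma }$, and $N_a$ below are hypothetical placeholders, not values computed from Eqs. $(36)$-$(37)$:

```python
import math

def omega(n_ph, omega0, v, N_a):
    """Eq. (38): omega = omega0 + (2*pi/N_a) * sum_{alpha,iota} v_alpha*N_ph.
    n_ph = (N_c+, N_c-, N_s+, N_s-) and v = (v_c, v_s)."""
    v_c, v_s = v
    n_cp, n_cm, n_sp, n_sm = n_ph
    return omega0 + (2 * math.pi / N_a) * (v_c * (n_cp + n_cm)
                                           + v_s * (n_sp + n_sm))

v, N_a, omega0 = (1.3, 0.7), 100, 0.05        # toy numbers only
w_cc = omega((1, 1, 0, 0), omega0, v, N_a)    # two c processes
w_cs = omega((1, 0, 0, 1), omega0, v, N_a)    # one c and one s process
w_sc = omega((0, 1, 1, 0), omega0, v, N_a)    # the "mirror" mixed state
```

The two mixed states are degenerate, since both cost $v_c+v_s$ in units of $2\pi/N_a$, which is the $l$ independence of Eq. $(38)$ at work for this simplest case.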
They involve two pseudoparticle-pseudohole processes with $\iota=1$ and $\iota=-1$, respectively, and read $$|1,1,0,0;k=0\rangle = \prod_{\iota=\pm 1} {\hat{\rho}}_{c ,\iota}(\iota {2\pi\over {N_a}}) \tilde{c}^{\dag }_{k_{F\sigma },\sigma } |0;i\rangle \, ,$$ $$|0,0,1,1;k=0\rangle = \prod_{\iota=\pm 1} {\hat{\rho}}_{s ,\iota}(\iota {2\pi\over {N_a}}) \tilde{c}^{\dag }_{k_{F\sigma },\sigma } |0;i\rangle \, ,$$ $$|1,0,0,1;k=0\rangle = {\hat{\rho}}_{c,+1}({2\pi\over {N_a}}) {\hat{\rho}}_{s,-1}(-{2\pi\over {N_a}}) \tilde{c}^{\dag }_{k_{F\sigma },\sigma } |0;i\rangle \, ,$$ $$|0,1,1,0;k=0\rangle = {\hat{\rho}}_{c,-1}(-{2\pi\over {N_a}}) {\hat{\rho}}_{s,+1}({2\pi\over {N_a}}) \tilde{c}^{\dag }_{k_{F\sigma },\sigma } |0;i\rangle \, ,$$ where ${\hat{\rho}}_{\alpha ,\iota}(k)$ is the fluctuation operator of Eq. (A12). The fluctuation operator was studied in some detail in Ref. [@Carmelo94c]. From equations $(26)$, $(27)$, and $(29)$ we can rewrite expression $(24)$ as $$\begin{aligned} c^{\dag }_{k_{F\sigma},\sigma } & = & \langle 0;f|c^{\dag }_{k_{F\sigma},\sigma }|0;i\rangle \left[1 + \sum_{\{N_{ph}^{\alpha ,\iota}\},l} {\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0| c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle \over \langle 0;f|c^{\dag }_{k_{F\sigma},\sigma }|0;i\rangle} \prod_{\alpha=c,s} {\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l)\right] \tilde{c}^{\dag }_{k_{F\sigma},\sigma } \nonumber\\ & + & \sum_{\gamma '}\langle \gamma ';k=0|c^{\dag }_{k_{F\sigma},\sigma }|0;i\rangle {\hat{A}}_{\gamma '} \tilde{c}^{\dag }_{k_{F\sigma},\sigma } \, ,\end{aligned}$$ where $\gamma '$ refers to the Hamiltonian eigenstates of form $(26)$ whose generators ${\hat{A}}_{\gamma '}$ are not of the particular form $(27)$. In Appendix C we evaluate the matrix elements of expression $(43)$ corresponding to transitions to the final ground state and excited states of form $(29)$. Following Ref. [@Carmelo94b], these states refer to the conformal-field-theory [@Frahm; @Frahm91] critical point. 
They are such that the ratio $N_{ph}^{\alpha ,\iota}/N_a$ vanishes in the thermodynamic limit, $N_a\rightarrow \infty$. Therefore, in that limit the positive excitation energies $\omega (\{N_{ph}^{\alpha ,\iota}\})$ of Eq. $(38)$ are vanishingly small. The results of that Appendix lead to $$\langle 0;f|c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle = \sqrt{Z_{\sigma }}\, ,$$ where, as in a Fermi liquid [@Nozieres], the one-electron renormalization factor $$Z_{\sigma}=\lim_{\omega\to 0}Z_{\sigma}(\omega) \, ,$$ is closely related to the $\sigma $ self-energy $\Sigma_{\sigma} (k,\omega)$. Here the function $Z_{\sigma}(\omega)$ is given by the small-$\omega $ leading-order term of $$|\varsigma_{\sigma}||1-{\partial \hbox{Re} \Sigma_{\sigma} (\pm k_{F\sigma},\omega) \over {\partial\omega}}|^{-1}\, ,$$ where $$\varsigma_{\uparrow}=-2+\sum_{\alpha} {1\over 2}[(\xi_{\alpha c}^1-\xi_{\alpha s}^1)^2 +(\xi_{\alpha c}^0)^2] \, ,$$ and $$\varsigma_{\downarrow}=-2 +\sum_{\alpha}{1\over 2}[(\xi_{\alpha s}^1)^2+(\xi_{\alpha c}^0+\xi_{\alpha s}^0)^2] \, ,$$ are $U$, $n$, and $m$ dependent exponents which for $U>0$ are negative and such that $-1<\varsigma_{\sigma}<-1/2$. In equations $(47)$ and $(48)$ $\xi_{\alpha\alpha'}^j$ are the parameters (A7). From equations $(46)$, (C11), and (C15) we find $$Z_{\sigma}(\omega)=a^{\sigma }_0 \omega^{1+\varsigma_{\sigma}} \, ,$$ where $a^{\sigma }_0$ is a real and positive constant such that $$\lim_{U\to 0}a^{\sigma }_0=1 \, .$$ Equation $(49)$ confirms that the renormalization factor $(45)$ vanishes, as expected for a 1D many-electron problem [@Anderson]. It follows from Eq. $(44)$ that in the present 1D model the electron renormalization factor can be identified with a single matrix element [@Anderson; @Metzner94]. We emphasize that in a Fermi liquid $\varsigma_{\sigma}=-1$ and Eq. $(46)$ recovers the usual Fermi-liquid relation. 
In the three different limits $U\rightarrow 0$, $m\rightarrow 0$, and $m\rightarrow n$ the exponents $\varsigma_{\uparrow}$ and $\varsigma_{\downarrow}$ are equal and given by $-1$, $-2+{1\over 2}[{\xi_0\over 2}+{1\over {\xi_0}}]^2$, and $-{1\over 2}-\eta_0[1-{\eta_0\over 2}]$, respectively. Here the $m\rightarrow 0$ parameter $\xi_0$ changes from $\xi_0=\sqrt{2}$ at $U=0$ to $\xi_0=1$ as $U\rightarrow\infty$ and $\eta_0=({2\over {\pi}})\tan^{-1}\left({4t\sin (\pi n)\over U}\right)$. The evaluation in Appendix C for the matrix elements of the rhs of expression $(43)$ refers to the thermodynamic limit and follows the study of the small-$\omega $ dependencies of the one-electron Green function $G_{\sigma} (\pm k_{F\sigma}, \omega)$ and self energy $\Sigma_{\sigma} (\pm k_{F\sigma}, \omega)$. This leads to $\omega $ dependent quantities \[as $(46)$ and $(49)$ and the function $F_{\sigma}^{\alpha ,\iota}(\omega )$ of Eq. $(51)$ below\] whose $\omega\rightarrow 0$ limits provide the expressions for these matrix elements. Although these matrix elements vanish, it is physically important to consider the associated $\omega $-dependent functions. These are matrix-element expressions only in the limit $\omega\rightarrow 0$, yet at small finite values of $\omega $ they provide relevant information on the electron - quasiparticle overlap at low energy $\omega $. 
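The limiting values of the exponents quoted above can be checked numerically (a sketch; the choices $t=1$, $n=1/2$, and the small but finite value of $U$ are illustrative):

```python
import math

# m -> 0 exponent: -2 + (1/2) * (xi0/2 + 1/xi0)**2
def exponent_m0(xi0):
    return -2 + 0.5 * (xi0 / 2 + 1 / xi0) ** 2

# U = 0, where xi0 = sqrt(2): recovers the Fermi-liquid value -1
assert abs(exponent_m0(math.sqrt(2)) - (-1.0)) < 1e-12
# U -> infinity, where xi0 = 1: gives -7/8, inside the range (-1, -1/2)
assert abs(exponent_m0(1.0) - (-0.875)) < 1e-12

# m -> n exponent: -1/2 - eta0 * (1 - eta0/2),
# with eta0 = (2/pi) * atan(4 t sin(pi n) / U)
def exponent_mn(U, n, t=1.0):
    eta0 = (2 / math.pi) * math.atan(4 * t * math.sin(math.pi * n) / U)
    return -0.5 - eta0 * (1 - eta0 / 2)

# U -> 0: eta0 -> 1 and this exponent also tends to the value -1
assert abs(exponent_mn(U=1e-12, n=0.5) - (-1.0)) < 1e-6
```

All three limits thus agree at $U\rightarrow 0$, consistent with the statement that $\varsigma_{\sigma}=-1$ is the Fermi-liquid value.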
In addition to expression $(44)$, in Appendix C we find the following expression which is valid only for matrix elements involving the excited states of form $(29)$ referring to the conformal-field-theory critical point $$\begin{aligned} \langle \{N_{ph}^{\alpha ,\iota}\},l;k=0| c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle & = & \lim_{\omega\to 0} F_{\sigma}^{\alpha ,\iota}(\omega ) = 0\, ,\nonumber\\ F_{\sigma}^{\alpha ,\iota}(\omega ) & = & e^{i\chi_{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)} \sqrt{{a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)\over a^{\sigma }_0}}\sqrt{Z_{\sigma }(\omega )}\, \omega^{\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota}} \, .\end{aligned}$$ Here $\chi_{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)$ and $a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)$ are real numbers and the function $Z_{\sigma }(\omega )$ was defined above. Notice that the function $F_{\sigma}^{\alpha ,\iota}(\omega )$ vanishes with different powers of $\omega $ for different sets of $N_{ph}^{\alpha ,\iota}$ numbers. This is because these powers reflect directly the order of the pseudoparticle-pseudohole generator relative to the final ground state of the corresponding state I. Although the renormalization factor $(45)$ and matrix elements $(51)$ vanish, Eqs. $(49)$ and $(51)$ provide relevant information concerning the ratios of the different matrix elements, which can either diverge or vanish. Moreover, in the evaluation of some $\omega $-dependent quantities we can use the function $F_{\sigma}^{\alpha ,\iota}(\omega )$ for the matrix elements $(51)$ and assume that $\omega $ is vanishingly small, which leads to correct results. This procedure is similar to replacing the renormalization factor $(45)$ by the function $(49)$. 
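The statement that $F_{\sigma}^{\alpha ,\iota}(\omega )$ vanishes with a power of $\omega$ that grows with the number of pseudoparticle-pseudohole processes follows directly from combining Eqs. $(49)$ and $(51)$, and can be checked symbolically (a sketch; the exponent value $\varsigma_{\sigma}=-3/4$ is illustrative and the amplitude and phase factors are dropped):

```python
import sympy as sp

omega = sp.Symbol('omega', positive=True)
varsigma = sp.Rational(-3, 4)  # illustrative value in the range (-1, -1/2)

# sqrt(Z_sigma(omega)) from Eq. (49), with the amplitude a_0 dropped
sqrtZ = sp.sqrt(omega**(1 + varsigma))

# Eq. (51): F ~ sqrt(Z(omega)) * omega**n_ph, where n_ph is the total
# number of pseudoparticle-pseudohole processes
for n_ph in range(4):
    F = sp.powsimp(sqrtZ * omega**n_ph, force=True)
    base, power = F.as_base_exp()
    # the total power of omega is (1 + varsigma)/2 + n_ph ...
    assert power == (1 + varsigma) / sp.Integer(2) + n_ph
    # ... which is positive, so F vanishes in the limit omega -> 0
    assert sp.limit(F, omega, 0) == 0
```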
While the renormalization factor is zero because in the limit of vanishing excitation energy there is no overlap between the electron and the quasiparticle, the function $(49)$ is associated with the small electron - quasiparticle overlap which occurs at low excitation energy $\omega $. Obviously, if we introduced in the rhs of Eq. $(43)$ zero for the matrix elements $(44)$ and $(51)$ we would lose all information on the associated low-energy singular electron - quasiparticle transformation (described by Eq. $(58)$ below). The vanishing of the matrix elements $(44)$ and $(51)$ just reflects the fact that the one-electron density of states vanishes in the 1D many-electron problem when the excitation energy $\omega\rightarrow 0$. This justifies the lack of electron - quasiparticle overlap in the limit of zero excitation energy. However, the diagonalization of that problem absorbs the renormalization factor $(45)$ and maps vanishing electronic spectral weight onto finite quasiparticle and pseudoparticle spectral weight. This process can only be suitably described if we keep either ${1\over {N_a}}$ corrections in the case of the large finite system or small virtual $\omega $ corrections in the case of the infinite system. (The analysis of Appendix C has considered the thermodynamic limit and, therefore, we consider in this section the case of the infinite system.) In spite of the vanishing of the matrix elements $(44)$ and $(51)$, following the above discussion we introduce Eqs. $(44)$ and $(51)$ in Eq. 
$(43)$ with the result $$\begin{aligned} c^{\dag }_{\pm k_{F\sigma},\sigma } & = & \lim_{\omega\to 0} \sqrt{Z_{\sigma }(\omega )} \left[1 + \sum_{\{N_{ph}^{\alpha ,\iota}\},l} e^{i\chi_{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)} \sqrt{{a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)\over a^{\sigma }_0}}\omega^{\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota }}\prod_{\alpha=c,s} {\hat{L}}^{\alpha\iota}_{-N_{ph}^{\alpha\iota}}(l)\right] \tilde{c}^{\dag }_{\pm k_{F\sigma},\sigma } \nonumber\\ & + & \sum_{\gamma '}\langle \gamma ';k=0|c^{\dag }_{\pm k_{F\sigma},\sigma }|0;i\rangle {\hat{A}}_{\gamma '} \tilde{c}^{\dag }_{\pm k_{F\sigma},\sigma } \, .\end{aligned}$$ (Note that the expression is the same for momenta $k=k_{F\sigma}$ and $k=-k_{F\sigma}$.) Let us confirm the key role played by the “bare” quasiparticle ground-state – ground-state transition in the low-energy physics. Since the $k=0$ higher-energy LWS’s I and finite-energy LWS’s II and non-LWS’s represented in Eq. $(52)$ by $|\gamma ';k=0\rangle$ are irrelevant for the low-energy physics, we focus our attention on the lowest-energy states of form $(29)$. Let us look at the leading-order terms of the first term of the rhs of Eq. $(52)$. These correspond to the ground-state – ground-state transition and to the first-order pseudoparticle-pseudohole corrections. These corrections are determined by the excited states $(39)-(42)$. The use of Eqs. 
$(34)$ and $(39)-(42)$ allows us to rewrite the leading-order terms as $$\lim_{\omega\to 0}\sqrt{Z_{\sigma }(\omega )}\left[1 + \omega^2\sum_{\alpha ,\alpha ',\iota} C_{\alpha ,\alpha '}^{\iota } \rho_{\alpha\iota } (\iota{2\pi\over {N_a}}) \rho_{\alpha '-\iota } (-\iota{2\pi\over {N_a}}) + {\cal O}(\omega^4)\right] \tilde{c}^{\dag }_{\pm k_{F\sigma},\sigma } \, ,$$ where $C_{\alpha ,\alpha '}^{\iota }$ are complex constants such that $$C_{c,c}^{1} = C_{c,c}^{-1} = e^{i\chi_{\sigma }(1,1,0,0)} \sqrt{{a^{\sigma }(1,1,0,0)\over a^{\sigma }_0}} \, ,$$ $$C_{s,s}^{1} = C_{s,s}^{-1} = e^{i\chi_{\sigma }(0,0,1,1)} \sqrt{{a^{\sigma }(0,0,1,1)\over a^{\sigma }_0}} \, ,$$ $$C_{c,s}^{1} = C_{s,c}^{-1} = e^{i\chi_{\sigma }(1,0,0,1)} \sqrt{{a^{\sigma }(1,0,0,1)\over a^{\sigma }_0}} \, ,$$ $$C_{c,s}^{-1} = C_{s,c}^{1} = e^{i\chi_{\sigma }(0,1,1,0)} \sqrt{{a^{\sigma }(0,1,1,0)\over a^{\sigma }_0}} \, ,$$ and ${\hat{\rho}}_{\alpha ,\iota } (k)=\sum_{\tilde{q}} b^{\dag}_{\tilde{q}+k,\alpha ,\iota}b_{\tilde{q},\alpha ,\iota}$ is a first-order pseudoparticle-pseudohole operator. The real constants $a^{\sigma }$ and $\chi_{\sigma }$ in the rhs of Eqs. $(54)-(57)$ are particular cases of the corresponding constants of the general expression $(51)$. Note that the $l$ independence of the states $(39)-(42)$ allowed the omission of the index $l$ in the quantities of the rhs of Eqs. $(54)-(57)$ and that we used the notation $(28)$ for the argument of the corresponding $l$-independent $a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\})$ constants and $\chi_{\sigma }(\{N_{ph}^{\alpha ,\iota}\})$ phases. The higher-order contributions to expression $(53)$ are associated with low-energy excited Hamiltonian eigenstates I orthogonal both to the initial and final ground states and whose matrix-element amplitudes are given by Eq. $(51)$. 
The corresponding functions $F_{\sigma}^{\alpha ,\iota}(\omega )$ vanish as $\omega^{(1+\varsigma_{\sigma}+4j)/2}$ when $\omega\to 0$ (with $2j$ the number of pseudoparticle-pseudohole processes relative to the final ground state and $j=1,2,...$). Therefore, the leading-order term of $(52)-(53)$ and the exponent $\varsigma_{\sigma}$ $(47)-(48)$ fully control the low-energy overlap between the $\pm k_{F\sigma}$ quasiparticles and electrons and determine the expressions of all $k=\pm k_{F\sigma }$ one-electron low-energy quantities. That leading-order term refers to the ground-state – ground-state transition which dominates the electron - quasiparticle transformation $(24)$. This transition corresponds to the “bare” quasiparticle of Eq. $(8)$. We follow the same steps as in Fermi-liquid theory and consider the low-energy non-canonical and non-complete transformation one derives from the full expression $(53)$ by only taking the corresponding leading-order term, which leads to $${\tilde{c}}^{\dag }_{\pm k_{F\sigma},\sigma } = {c^{\dag }_{\pm k_{F\sigma},\sigma }\over {\sqrt{Z_{\sigma }}}} \, .$$ This relation refers to a singular transformation. Combining Eqs. $(21)-(22)$ and $(58)$ provides the low-energy expression for the electron in the pseudoparticle basis. The singular nature of the transformation $(58)$, which maps the vanishing-renormalization-factor electron onto the one-renormalization-factor quasiparticle, explains the perturbative character of the pseudoparticle-operator basis [@Carmelo94; @Carmelo94b; @Carmelo94c]. Replacing the renormalization factor $Z_{\sigma }$ in Eq. $(58)$ by $Z_{\sigma }(\omega )$, or omitting $\lim_{\omega\to 0}$ from the rhs of Eqs. $(52)$ and $(53)$, and in both cases considering $\omega$ very small, leads to effective expressions which contain information on the low-excitation-energy electron – quasiparticle overlap. 
Since these expressions correspond to the infinite system, the small $\omega $ finite contributions contain the same information as the ${1\over {N_a}}$ corrections of the corresponding large but finite system at $\omega =0$. It is the perturbative character of the pseudoparticle basis that determines the form of expansion $(53)$, which, except for the non-classical exponent in the $\sqrt{Z_{\sigma }(\omega )} \propto \omega^{(1+\varsigma_{\sigma})/2}$ factor \[absorbed by the electron - quasiparticle transformation $(58)$\], includes only classical exponents, as in a Fermi liquid [@Nozieres]. At low energy the BA solution performs the singular transformation $(58)$, which absorbs the one-electron renormalization factor $(45)$ and maps vanishing electronic spectral weight onto finite quasiparticle and pseudoparticle spectral weight. By that process the transformation $(58)$ renormalizes divergent two-electron scattering vertex functions onto finite two-quasiparticle scattering quantities. These quantities are related to the finite $f$ functions [@Carmelo92] of the form given by Eq. (A4) and amplitudes of scattering [@Carmelo92b] of the pseudoparticle theory. It was shown in Refs. [@Carmelo92; @Carmelo92b; @Carmelo94b] that these $f$ functions and amplitudes of scattering determine all static and low-energy quantities of the 1D many-electron problem, as we discuss below and in Appendices A and D. The $f$ functions and amplitudes are associated with zero-momentum two-pseudoparticle forward scattering. These scattering processes interchange no momentum and no energy, only giving rise to two-pseudoparticle phase shifts. The corresponding pseudoparticles control all the low-energy physics. In the limit of vanishing energy the pseudoparticle spectral weight leads to finite values for the static quantities, yet it corresponds to vanishing one-electron spectral weight. 
To diagonalize the problem at lowest energy is equivalent to performing the electron - quasiparticle transformation $(58)$: it maps divergent irreducible (two-momenta) charge and spin vertices onto finite quasiparticle parameters by absorbing $Z_{\sigma }$. In a diagrammatic picture this amounts to multiplying each of these vertices appearing in the diagrams by $Z_{\sigma }$ and each one-electron Green function (propagator) by ${1\over Z_{\sigma }}$. This procedure is equivalent to renormalizing the electron quantities onto corresponding quasiparticle quantities, as in a Fermi liquid. However, in the present case the renormalization factor is zero. This also holds true for more involved four-momenta divergent two-electron vertices at the Fermi points. In this case the electron - quasiparticle transformation multiplies each of these vertices by a factor $Z_{\sigma }Z_{\sigma '}$, the factors $Z_{\sigma }$ and $Z_{\sigma '}$ corresponding to the pair of $\sigma $ and $\sigma '$ interacting electrons. The obtained finite parameters control all static quantities. Performing the transformation $(58)$ is equivalent to summing all vertex contributions and we find that this transformation is unique, i.e., it maps the divergent Fermi-surface vertices onto the same finite quantities independently of the way one chooses to approach the low-energy limit. This cannot be detected by looking only at logarithmic divergences of some diagrams [@Solyom; @Metzner]. Such non-universal contributions either cancel or are renormalized to zero by the electron - quasiparticle transformation. We have extracted all our results from the exact BA solution which takes into account all relevant contributions. We can choose the energy variables in such a way that there is only one $\omega $ dependence. 
We find that the relevant vertex function divergences are controlled by the electron - quasiparticle overlap, the vertices reading $$\Gamma_{\sigma\sigma '}^{\iota }(k_{F\sigma },\iota k_{F\sigma '};\omega) = {1\over {Z_{\sigma}(\omega)Z_{\sigma '}(\omega)}} \{\sum_{\iota '=\pm 1}(\iota ')^{{1-\iota\over 2}} [v_{\rho }^{\iota '} + (\delta_{\sigma ,\sigma '} - \delta_{\sigma ,-\sigma '})v_{\sigma_z}^{\iota '}] - \delta_{\sigma ,\sigma '}v_{F,\sigma }\} \, ,$$ where the expressions for the charge $v_{\rho}^{\iota }$ and spin $v_{\sigma_z}^{\iota }$ velocities are given in Appendix D. The divergent character of the function $(59)$ follows exclusively from the ${1\over Z_{\sigma}(\omega)Z_{\sigma '}(\omega)}$ factor, with $Z_{\sigma}(\omega)$ given by $(49)$. The transformation $(58)$ maps the divergent vertices onto the $\omega $-independent finite quantity $Z_{\sigma}(\omega) Z_{\sigma '}(\omega)\Gamma_{\sigma\sigma '}^{\iota }(k_{F\sigma }, \iota k_{F\sigma '};\omega )$. The low-energy physics is determined by the following $v_{F,\sigma }$-independent Fermi-surface two-quasiparticle parameters $$L^{\iota }_{\sigma ,\sigma'} = \lim_{\omega\to 0} \left[\delta_{\sigma ,\sigma '}v_{F,\sigma }+ Z_{\sigma}(\omega) Z_{\sigma '}(\omega)\Gamma_{\sigma\sigma '}^{\iota }(k_{F\sigma },\iota k_{F\sigma '};\omega )\right] \, .$$ From the point of view of the electron - quasiparticle transformation the divergent vertices $(59)$ give rise to the finite quasiparticle parameters $(60)$ which define the above charge and spin velocities. 
These are given by the following simple combinations of the parameters $(60)$ $$\begin{aligned} v_{\rho}^{\iota} = {1\over 4}\sum_{\iota '=\pm 1}(\iota ')^{{1-\iota\over 2}}\left[L_{\sigma ,\sigma}^{\iota '} + L_{\sigma ,-\sigma}^{\iota '}\right] \, , \nonumber\\ v_{\sigma_z}^{\iota} = {1\over 4}\sum_{\iota '=\pm 1}(\iota ')^{{1-\iota\over 2}}\left[L_{\sigma ,\sigma}^{\iota '} - L_{\sigma ,-\sigma}^{\iota '}\right] \, .\end{aligned}$$ As shown in Appendix D, the parameters $L_{\sigma ,\sigma '}^{\iota}$ can be expressed in terms of the pseudoparticle group velocities (A6) and Landau parameters (A8) as follows $$\begin{aligned} L_{\sigma ,\sigma}^{\pm 1} & = & 2\left[{(v_s + F^0_{ss})\over L^0} \pm (v_c + F^1_{cc}) - {L_{\sigma ,-\sigma}^{\pm 1}\over 2} \right] \, , \nonumber\\ L_{\sigma ,-\sigma}^{\pm 1} & = & -4\left[{(v_c + F^0_{cc} + F^0_{cs})\over L^0}\pm (v_s + F^1_{ss} - F^1_{cs})\right] \, ,\end{aligned}$$ where $L^0=(v_c+F^0_{cc})(v_s+F^0_{ss})-(F^0_{cs})^2$. Combining equations $(61)$ and $(62)$ we find the expressions given in the Table for the charge and spin velocities. These velocities were already known through the BA solution and determine the expressions for all static quantities [@Carmelo94c]. Equations $(62)$ clarify their origin, which is the singular electron - quasiparticle transformation $(58)$. It turns a non-perturbative electronic problem into a perturbative pseudoparticle problem. In Appendix D we show how the finite two-pseudoparticle forward-scattering $f$ functions and amplitudes which determine the static quantities are directly related to the two-quasiparticle finite parameters $(60)$ through the velocities $(61)$. This study confirms that it is the singular electron - quasiparticle transformation $(58)$ which justifies the [*finite character*]{} of the $f_{\alpha\alpha '}(q,q')$ functions (A4) and the associated perturbative origin of the pseudoparticle Hamiltonian $(6)-(7)$ [@Carmelo94]. 
In order to further confirm that the electron - quasiparticle transformation $(58)$ and associated electron - quasiparticle overlap function $(49)$ control the whole low-energy physics we close this section by considering the one-electron spectral function. The spectral function was studied numerically and for $U\rightarrow\infty$ in Refs. [@Muramatsu] and [@Shiba], respectively. The leading-order term of the real-part expression for the $\sigma $ Green function at $k=\pm k_{F\sigma}$ and small excitation energy $\omega $ (C10)-(C11) is given by $\hbox{Re}\,G_{\sigma} (\pm k_{F\sigma},\omega)=a^{\sigma}_0 \omega^{\varsigma_{\sigma}}$. From Kramers-Kronig relations we find $\hbox{Im}\,G_{\sigma} (\pm k_{F\sigma},\omega)= -\pi a^{\sigma}_0 (1 + \varsigma_{\sigma}) \omega^{\varsigma_{\sigma }}$ for the corresponding imaginary part. Based on these results we arrive at the following expression for the low-energy spectral function at $k=\pm k_{F\sigma}$ $$A_{\sigma}(\pm k_{F\sigma},\omega) = 2\pi a^{\sigma }_0 (1 + \varsigma_{\sigma}) \omega^{\varsigma_{\sigma}} = 2\pi {\partial Z_{\sigma}(\omega)\over\partial\omega} \, .$$ This result is a generalization of the $U\rightarrow\infty$ expression of Ref. [@Shiba]. It is valid in all of the parameter space where both the velocities $v_c$ and $v_s$ (A6) are finite. (This excludes half filling $n=1$, maximum spin density $m=n$, and $U=\infty$ when $m\neq 0$.) The use of Kramers-Kronig relations also restricts the validity of expression $(63)$ to the energy $\omega $ continuum limit. 
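The identity between the power-law form of $(63)$ and the derivative $2\pi\,\partial Z_{\sigma}(\omega)/\partial\omega$ follows directly from Eq. $(49)$ and can be verified symbolically (a minimal sketch using sympy):

```python
import sympy as sp

omega, a0 = sp.symbols('omega a0', positive=True)
varsigma = sp.Symbol('varsigma')

Z = a0 * omega**(1 + varsigma)                          # Eq. (49)
A = 2 * sp.pi * a0 * (1 + varsigma) * omega**varsigma   # Eq. (63), power-law form

# A(omega) equals 2*pi times the omega derivative of Z_sigma(omega)
assert sp.simplify(A - 2 * sp.pi * sp.diff(Z, omega)) == 0
```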
On the other hand, we can show that $(63)$ is consistent with the general expression $$\begin{aligned} A_{\sigma} (\pm k_{F\sigma},\omega) & = & \sum_{\{N_{ph}^{\alpha ,\iota}\},l} |\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0| c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 2\pi\delta (\omega - \omega (\{N_{ph}^{\alpha ,\iota}\}))\nonumber \\ & + & \sum_{\gamma '}|\langle\gamma ';k=0|c^{\dag }_{k_{F\sigma},\sigma} |0;i\rangle |^2 2\pi\delta (\omega - \omega_{\gamma '}) \, ,\end{aligned}$$ whose summations refer to the same states as the summations of expressions $(43)$ and $(52)$. The restriction of the validity of expression $(63)$ to the energy continuum limit requires the consistency to hold true only for the spectral weight of $(64)$ associated with the quasiparticle ground-state – ground-state transition. This corresponds to the first $\delta $ peak of the rhs of Eq. $(64)$. Combining equations $(44)$ and $(64)$, and considering that in the present limit of vanishing $\omega $ replacing the renormalization factor $(45)$ by the electron - quasiparticle overlap function $(49)$ leads to the correct result (as we confirm below), we arrive at $$A_{\sigma}(\pm k_{F\sigma},\omega) = a^{\sigma }_0\omega^{1+\varsigma_{\sigma}} 2\pi\delta (\omega - \omega^0_{\sigma }) = Z_{\sigma}(\omega ) 2\pi\delta (\omega - \omega^0_{\sigma }) \, .$$ Let us then show that the Kramers-Kronig continuum expression $(63)$ is an approximation consistent with the Dirac-delta function representation $(65)$. This consistency just requires that in the continuum energy domain from $\omega =0$ to the ground-state – ground-state transition energy $\omega =\omega^0_{\sigma }$ (see Eq. $(35)$) the functions $(63)$ and $(65)$ contain the same amount of spectral weight. 
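This equal-weight requirement can be checked symbolically: integrating the power law $(63)$ from $0$ to $\omega^0_{\sigma}$ reproduces the weight $2\pi a^{\sigma}_0 [\omega^0_{\sigma}]^{\varsigma_{\sigma}+1}$ carried by the single $\delta$ peak of $(65)$ (a sketch for illustrative exponents in the range $(-1,-1/2)$):

```python
import sympy as sp

omega, w0, a0 = sp.symbols('omega omega0 a0', positive=True)

# Eq. (63) integrated from 0 to omega0 for representative exponents
for v in [sp.Rational(-3, 4), sp.Rational(-2, 3)]:
    A = 2 * sp.pi * a0 * (1 + v) * omega**v
    I = sp.integrate(A, (omega, 0, w0))
    # matches the weight 2*pi*a0*omega0**(v+1) of the delta peak of Eq. (65)
    assert sp.simplify(I - 2 * sp.pi * a0 * w0**(v + 1)) == 0
```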
We find that both the $A_{\sigma}(\pm k_{F\sigma},\omega)$ representations $(63)$ and $(65)$ lead to $$\int_{0}^{\omega^0_{\sigma }}A_{\sigma}(\pm k_{F\sigma},\omega)\, d\omega =2\pi a^{\sigma }_0 [\omega^0_{\sigma }]^{\varsigma_{\sigma }+1} \, ,$$ which confirms that they contain the same spectral weight. The representation $(63)$ reveals that the spectral function diverges at $\pm k_{F\sigma}$ and small $\omega$ as a Luttinger-liquid power law. However, both the small-$\omega $ density of states and the integral $(66)$ vanish in the limit of vanishing excitation energy. Using the method of Ref. [@Carmelo93] we have also studied the spectral function $A_{\sigma}(k,\omega)$ for all values of $k$ and vanishing positive $\omega $. We find that $A_{\sigma}(k,\omega)$ \[and the Green function $\hbox{Re}\,G_{\sigma} (k,\omega)$\] vanishes when $\omega\rightarrow 0$ for all momentum values [*except*]{} at the non-interacting Fermi-points $k=\pm k_{F\sigma}$ where it diverges as the power law $(63)$. This divergence is fully controlled by the quasiparticle ground-state - ground-state transition. The transitions to the excited states $(29)$ give only vanishing contributions to the spectral function. This further confirms the dominant role of the bare quasiparticle ground-state - ground-state transition and of the associated electron - quasiparticle transformation $(58)$ which control the low-energy physics. It follows from the above behavior of the spectral function at small $\omega $ that for $\omega\rightarrow 0$ the density of states, $$D_{\sigma} (\omega)=\sum_{k}A_{\sigma}(k,\omega) \, ,$$ results exclusively from contributions of the peaks centered at $k=\pm k_{F\sigma}$ and is such that $D_{\sigma} (\omega)\propto \omega A_{\sigma}(\pm k_{F\sigma},\omega)$ [@Carmelo95a]. On the one hand, it is known from the zero-magnetic field studies of Refs. 
[@Shiba; @Schulz] that the density of states goes at small $\omega $ as $$D_{\sigma} (\omega)\propto\omega^{\nu_{\sigma}} \, ,$$ where $\nu_{\sigma}$ is the exponent of the equal-time momentum distribution expression, $$N_{\sigma}(k)\propto |k\mp k_{F\sigma}|^{\nu_{\sigma }} \, ,$$ [@Frahm91; @Ogata]. (The exponent $\nu_{\sigma }$ is defined by Eq. $(5.10)$ of Ref. [@Frahm91] for the particular case of the $\sigma $ Green function.) On the other hand, we find that the exponents $(47)-(48)$ and $\nu_{\sigma}$ are such that $$\varsigma_{\sigma}=\nu_{\sigma }-1 \, ,$$ in agreement with the above analysis. However, this simple relation does not imply that the equal-time expressions [@Frahm91; @Ogata] provide full information on the small-energy instabilities. For instance, in addition to the momentum values $k=\pm k_{F\sigma}$ and in contrast to the spectral function, $N_{\sigma}(k)$ shows singularities at $k=\pm [k_{F\sigma}+2k_{F-\sigma}]$ [@Ogata]. Therefore, only the direct low-energy study reveals all the true instabilities of the quantum liquid. Note that in some Luttinger liquids the momentum distribution is also given by $N(k)\propto |k\mp k_F|^{\nu }$ but with $\nu >1$ [@Solyom; @Medem; @Voit]. We find that in these systems the spectral function $A(\pm k_F,\omega) \propto\omega^{\nu -1}$ does not diverge. CONCLUDING REMARKS ================== One of the goals of this paper was, in spite of the differences between the Luttinger-liquid Hubbard chain and 3D Fermi liquids, to detect common features in these two limiting problems which we expect to be present in electronic quantum liquids in spatial dimensions $1<$D$<3$. As in 3D Fermi liquids, we find that there are Fermi-surface quasiparticles in the Hubbard chain which connect ground states differing in the number of electrons by one and whose low-energy overlap with electrons determines the $\omega\rightarrow 0$ divergences. 
In spite of the vanishing electron density of states and renormalization factor, the spectral function vanishes at all momentum values [*except*]{} at the Fermi surface where it diverges (as a Luttinger-liquid power law). While low-energy excitations are described by $c$ and $s$ pseudoparticle-pseudohole excitations which determine the $c$ and $s$ separation [@Carmelo94c], the quasiparticles describe ground-state – ground-state transitions and recombine $c$ and $s$ (charge and spin in the zero-magnetization limit), being labelled by the spin projection $\sigma $. They are constituted by one topological momenton and one or two pseudoparticles which cannot be separated and are confined inside the quasiparticle. Moreover, there is a close relation between the quasiparticle contents and the Hamiltonian symmetry in the different sectors of parameter space. This can be shown if we consider pseudoholes instead of pseudoparticles [@Carmelo95a] and extend the present quasiparticle study to the whole parameter space of the Hubbard chain. Importantly, we have written the low-energy electron at the Fermi surface in the pseudoparticle basis. The vanishing of the electron renormalization factor implies a singular character for the low-energy electron – quasiparticle and electron – pseudoparticle transformations. This singular process extracts quasiparticles of spectral-weight factor one from vanishing electron spectral weight. The BA diagonalization of the 1D many-electron problem is at lowest excitation energy equivalent to performing such a singular electron – quasiparticle transformation. This absorbs the vanishing one-electron renormalization factor giving rise to the finite two-pseudoparticle forward-scattering $f$ functions and amplitudes which control the expressions for all static quantities [@Carmelo92; @Carmelo92b; @Carmelo94]. It is this transformation which justifies the perturbative character of the many-electron Hamiltonian in the pseudoparticle basis [@Carmelo94]. 
From the existence of Fermi-surface quasiparticles both in the 1D and 3D limits, our results suggest their existence for quantum liquids in dimensions 1$<$D$<$3. However, the effect of increasing dimensionality on the electron – quasiparticle overlap remains an unsolved problem. The present 1D results do not provide information on whether that overlap can vanish for D$>$1 or whether it always becomes finite as soon as we leave 1D. ACKNOWLEDGMENTS =============== We thank N. M. R. Peres for many fruitful discussions and for reproducing and checking some of our calculations. We are grateful to F. Guinea and K. Maki for illuminating discussions. This research was supported in part by the Institute for Scientific Interchange Foundation under the EU Contract No. ERBCHRX - CT920020 and by the National Science Foundation under the Grant No. PHY89-04035. In this Appendix we present some quantities of the pseudoparticle picture which are useful for the present study. We start by defining the pseudo-Fermi points and limits of the pseudo-Brillouin zones. When $N_{\alpha }$ (see Eq. $(4)$) is odd (even) and the numbers $I_j^{\alpha }$ of Eq. $(3)$ are integers (half integers) the pseudo-Fermi points are symmetric and given by [@Carmelo94; @Carmelo94c] $$q_{F\alpha }^{(+)}=-q_{F\alpha }^{(-)} = {\pi\over {N_a}}[N_{\alpha}-1] \, .$$ On the other hand, when $N_{\alpha }$ is odd (even) and $I_j^{\alpha }$ are half integers (integers) we have that $$q_{F\alpha }^{(+)} = {\pi\over {N_a}}N_{\alpha } \, , \hspace{1cm} -q_{F\alpha }^{(-)} ={\pi\over {N_a}}[N_{\alpha }-2] \, ,$$ or $$q_{F\alpha }^{(+)} = {\pi\over {N_a}}[N_{\alpha }-2] \, , \hspace{1cm} -q_{F\alpha }^{(-)} = {\pi\over {N_a}}N_{\alpha } \, .$$ Similar expressions are obtained for the pseudo-Brillouin zone limits $q_{\alpha }^{(\pm)}$ if we replace in Eqs. (A1)-(A3) $N_{\alpha }$ by the numbers $N_{\alpha }^*$ of Eq. $(4)$. The $f$ functions were studied in Ref. 
[@Carmelo92] and read $$\begin{aligned} f_{\alpha\alpha'}(q,q') & = & 2\pi v_{\alpha}(q) \Phi_{\alpha\alpha'}(q,q') + 2\pi v_{\alpha'}(q') \Phi_{\alpha'\alpha}(q',q) \nonumber \\ & + & \sum_{j=\pm 1} \sum_{\alpha'' =c,s} 2\pi v_{\alpha''} \Phi_{\alpha''\alpha}(jq_{F\alpha''},q) \Phi_{\alpha''\alpha'}(jq_{F\alpha''},q') \, ,\end{aligned}$$ where the pseudoparticle group velocities are given by $$v_{\alpha}(q) = {d\epsilon_{\alpha}(q) \over {dq}} \, ,$$ and $$v_{\alpha }=\pm v_{\alpha }(q_{F\alpha}^{(\pm)}) \, ,$$ are the pseudo-Fermi points group velocities. In expression (A4) $\Phi_{\alpha\alpha '}(q,q')$ measures the phase shift of the $\alpha '$ pseudoparticle of pseudomomentum $q'$ due to the forward-scattering collision with the $\alpha $ pseudoparticle of pseudomomentum $q$. These phase shifts determine the pseudoparticle interactions and are defined in Ref. [@Carmelo92]. They control the low-energy physics. For instance, the related parameters $$\xi_{\alpha\alpha '}^j = \delta_{\alpha\alpha '}+ \Phi_{\alpha\alpha '}(q_{F\alpha}^{(+)},q_{F\alpha '}^{(+)})+ (-1)^j\Phi_{\alpha\alpha '}(q_{F\alpha}^{(+)},q_{F\alpha '}^{(-)}) \, , \hspace{2cm} j=0,1 \, ,$$ play a determining role at the critical point. ($\xi_{\alpha\alpha '}^1$ are the entries of the transpose of the dressed-charge matrix [@Frahm].) The values at the pseudo-Fermi points of the $f$ functions (A4) include the parameters (A7) and define the Landau parameters, $$F_{\alpha\alpha'}^j = {1\over {2\pi}}\sum_{\iota =\pm 1}(\iota )^j f_{\alpha\alpha'}(q_{F\alpha}^{(\pm)},\iota q_{F\alpha '}^{(\pm)}) \, , \hspace{1cm} j=0,1 \, .$$ These are also studied in Ref. [@Carmelo92]. The parameters $\delta_{\alpha ,\alpha'}v_{\alpha }+ F_{\alpha\alpha'}^j$ appear in the expressions of the low-energy quantities. We close this Appendix by introducing pseudoparticle-pseudohole operators which will appear in Sec. IV. 
Although the expressions in the pseudoparticle basis of one-electron operators remain an unsolved problem, in Ref. [@Carmelo94c] the electronic fluctuation operators $${\hat{\rho}}_{\sigma }(k)= \sum_{k'}c^{\dag }_{k'+k\sigma}c_{k'\sigma} \, ,$$ were expressed in terms of the pseudoparticle fluctuation operators $${\hat{\rho}}_{\alpha }(k)=\sum_{q}b^{\dag }_{q+k\alpha} b_{q\alpha} \, .$$ This study has revealed that $\iota =sgn (k)1=\pm 1$ electronic operators are made out of $\iota =sgn (q)1=\pm 1$ pseudoparticle operators only, $\iota $ defining the right ($\iota=1$) and left ($\iota=-1$) movers. Often it is convenient to measure the electronic momentum $k$ and pseudomomentum $q$ from the $U=0$ Fermi points $k_{F\sigma}^{(\pm)}=\pm \pi n_{\sigma}$ and pseudo-Fermi points $q_{F\alpha}^{(\pm)}$, respectively. This adds the index $\iota$ to the electronic and pseudoparticle operators. The new momentum $\tilde{k}$ and pseudomomentum $\tilde{q}$ are such that $$\tilde{k} =k-k_{F\sigma}^{(\pm)} \, , \hspace{2cm} \tilde{q}=q-q_{F\alpha}^{(\pm)} \, ,$$ respectively, for $\iota=\pm 1$. For instance, $${\hat{\rho}}_{\sigma ,\iota }(k)=\sum_{\tilde{k}} c^{\dag }_{\tilde{k}+k\sigma\iota}c_{\tilde{k}\sigma\iota} \, , \hspace{2cm} {\hat{\rho}}_{\alpha ,\iota }(k) = \sum_{\tilde{q}}b^{\dag }_{\tilde{q}+k\alpha\iota} b_{\tilde{q}\alpha\iota} \, .$$ In this Appendix we evaluate the expression for the topological-momenton generator $(17)-(19)$. In order to derive the expression for $U_c^{+1}$ we consider the Fourier transform of the pseudoparticle operator $b^{\dag}_{q,c}$ which reads $$\beta^{\dag}_{x,c} = \frac{1}{\sqrt{N_a}} \sum_{q_c^{(-)}}^{q_c^{(+)}} e^{-i q x} b^{\dag}_{q,c} \, .$$ From Eq. 
$(15)$ we arrive at $$U_c^{+1} \beta^{\dag}_{x,c} U_c^{-1} = \frac{1}{\sqrt{N_a}} \sum_{q_c^{(-)}}^{q_c^{(+)}} e^{-i q x} b^{\dag}_{q-\frac{\pi}{N_a},c} \, .$$ By performing a $\frac{\pi}{N_a}$ pseudomomentum translation we find $$U_c^{+1} \beta^{\dag}_{x,c} U_c^{-1} = e^{i\frac{\pi}{N_a} x}\beta^{\dag}_{x,c} \, ,$$ and it follows that $$\begin{aligned} U_{c}^{\pm 1} = \exp\left\{\pm i\frac{\pi}{N_a} \sum_{y}y\beta^{\dag}_{y,c}\beta_{y,c}\right\} \, .\end{aligned}$$ By inverse-Fourier transforming expression (B4) we find expression $(17)-(19)$ for this unitary operator, which can be shown to also hold true for $U_{s}^{\pm 1}$. In this Appendix we derive the expressions for the matrix elements $(44)$ and $(51)$. At energy scales smaller than the gaps for the LWS’s II and non-LWS’s referred to in this paper and in Refs. [@Carmelo94; @Carmelo94b; @Carmelo94c] the expression of the $\sigma $ one-electron Green function $G_{\sigma} (k_{F\sigma},\omega)$ is fully defined in the two Hilbert subspaces spanned by the final ground state $|0;f\rangle $ and associate $k=0$ excited states $|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle$ of form $(29)$ belonging to the $N_{\sigma }+1$ sector and by a corresponding set of states belonging to the $N_{\sigma }-1$ sector, respectively. Since $|0;f\rangle $ corresponds to zero values for all four numbers $(28)$, in this Appendix we use the notation $|0;f\rangle\equiv|\{N_{ph}^{\alpha ,\iota}=0\},l;k=0\rangle$. This allows a more compact notation for the state summations. 
The use of a Lehmann representation leads to $$G_{\sigma} (k_{F\sigma},\omega) = G_{\sigma}^{(N_{\sigma }+1)} (k_{F\sigma},\omega) + G_{\sigma}^{(N_{\sigma }-1)} (k_{F\sigma},\omega) \, ,$$ where $$G_{\sigma}^{(N_{\sigma }+1)} (k_{F\sigma},\omega) = \sum_{\{N_{ph}^{\alpha ,\iota}\},l} {|\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0| c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2\over {\omega - \omega (\{N_{ph}^{\alpha ,\iota}\}) + i\xi}} \, ,$$ has divergences for $\omega >0$ and $G_{\sigma}^{(N_{\sigma }-1)} (k_{F\sigma},\omega)$ has divergences for $\omega <0$. We emphasize that in the $\{N_{ph}^{\alpha ,\iota}\}$ summation on the rhs of Eq. (C2), $N_{ph}^{\alpha ,\iota}=0$ for all four numbers refers to the final ground state, as we mentioned above. Below we consider positive but vanishing values of $\omega $ and, therefore, we need only consider the function (C2). We note that at the conformal-field critical point [@Frahm; @Frahm91] the states which contribute to (C2) are such that the ratio $N_{ph}^{\alpha ,\iota}/N_a$ vanishes in the thermodynamic limit, $N_a\rightarrow \infty$ [@Carmelo94b]. Therefore, in that limit the positive excitation energies $\omega (\{N_{ph}^{\alpha ,\iota}\})$ of Eq. (C2), which are of the form $(38)$, are vanishingly small. Replacing the full Green function by (C2) (by considering positive values of $\omega $ only) we find $$\lim_{N_a\to\infty}\hbox{Re}G_{\sigma} (k_{F\sigma},\omega) = \sum_{\{N_{ph}^{\alpha ,\iota}\}}\left[ {\sum_{l} |\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0| c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 \over {\omega }} \right] \, .$$ We emphasize that considering the limit (C3) implies that all the corresponding expressions for the $\omega $ dependent quantities we obtain in the following are only valid in the limit of vanishing positive energy $\omega $.
Although many of these quantities are zero in that limit, their $\omega $ dependence has physical meaning because different quantities vanish as $\omega\rightarrow 0$ in different ways, as we discuss in Sec. IV. Therefore, our results allow the classification of the relative importance of the different quantities. In order to solve the present problem we have to combine a suitable generator pseudoparticle analysis [@Carmelo94b] with conformal-field theory [@Frahm; @Frahm91]. Let us derive an alternative expression for the Green function (C3). Comparison of both expressions leads to relevant information. This confirms the importance of the pseudoparticle operator basis [@Carmelo94; @Carmelo94b; @Carmelo94c] which allows an operator description of the conformal-field results for BA solvable many-electron problems [@Frahm; @Frahm91]. The asymptotic expression of the Green function in $x$ and $t$ space is given by the summation of many terms of form $(3.13)$ of Ref. [@Frahm] with dimensions of the fields suitable to that function. For small energy the Green function in $k$ and $\omega $ space is obtained by the summation of the Fourier transforms of these terms, which are of the form given by Eq. $(5.2)$ of Ref. [@Frahm91]. However, the results of Refs. [@Frahm; @Frahm91] do not provide the expression at $k=k_{F\sigma }$ and small positive $\omega $. In this case the above summation is equivalent to a summation over the final ground state and excited states of form $(29)$ obeying Eqs. $(31)$, $(32)$, and $(34)$, which correspond to different values for the dimensions of the fields. We emphasize that expression $(5.7)$ of Ref. [@Frahm91] is not valid in our case. Let us use the notation $k_0=k_{F\sigma }$ (as in Eqs. $(5.6)$ and $(5.7)$ of Ref. [@Frahm91]). While we consider $(k-k_0)=(k-k_{F\sigma })=0$, expression $(5.7)$ of Ref. [@Frahm91] is only valid when $(k-k_0)=(k-k_{F\sigma })$ is small but finite.
We have solved the following general integral $$\tilde{g}(k_0,\omega ) = \int_{0}^{\infty}dt e^{i\omega t}F(t) \, ,$$ where $$F(t) = \int_{-\infty}^{\infty}dx\prod_{\alpha ,\iota} {1\over {(x+\iota v_{\alpha }t)^{2\Delta_{\alpha }^{\iota}}}} \, ,$$ with the result $$\tilde{g}(k_0,\omega )\propto \omega^{[\sum_{\alpha ,\iota} 2\Delta_{\alpha }^{\iota}-2]} \, .$$ Comparing our expression (C6) with expression $(5.7)$ of Ref. [@Frahm91] we confirm that these expressions are different. In the present case of the final ground state and excited states of form $(29)$ obeying Eqs. $(31)$, $(32)$, and $(34)$, we find that the dimensions of the fields are such that $$\sum_{\alpha ,\iota} 2\Delta_{\alpha }^{\iota}= 2+\varsigma_{\sigma}+2\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota} \, ,$$ with $\varsigma_{\sigma}$ being the exponents $(47)$ and $(48)$. Therefore, equation (C6) can be rewritten as $$\tilde{g}(k_0,\omega )\propto \omega^{\varsigma_{\sigma}+2\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota}} \, .$$ Summing the terms of form (C8) corresponding to different states leads to an alternative expression for the function (C3) with the result $$\lim_{N_a\to\infty} \hbox{Re}G_{\sigma} (k_{F\sigma},\omega) = \sum_{\{N_{ph}^{\alpha ,\iota}\}}\left[ {a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\}) \omega^{\varsigma_{\sigma } + 1 + 2\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota} } \over {\omega }} \right] \, ,$$ or from Eq. $(34)$, $$\lim_{N_a\to\infty} \hbox{Re}G_{\sigma} (k_{F\sigma},\omega) = \sum_{j=0,1,2,...}\left[ {a^{\sigma }_j \omega^{\varsigma_{\sigma } + 1 + 4j} \over {\omega }}\right] \, ,$$ where $a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\})$ and $a^{\sigma }_j$ are complex constants.
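The power law (C6) can also be understood through a simple scaling argument (our sketch; these intermediate steps are not spelled out in the text). Rescaling $x\rightarrow ty$ in (C5) gives $$F(t) \propto t^{\,1-\sum_{\alpha ,\iota} 2\Delta_{\alpha }^{\iota}} \, ,$$ and the substitution $t\rightarrow s/\omega $ in (C4) then yields $$\tilde{g}(k_0,\omega ) \propto \omega^{-1}\,\omega^{[\sum_{\alpha ,\iota} 2\Delta_{\alpha }^{\iota}-1]} \int_{0}^{\infty}ds\, e^{i s}\, s^{\,1-\sum_{\alpha ,\iota} 2\Delta_{\alpha }^{\iota}} \propto \omega^{[\sum_{\alpha ,\iota} 2\Delta_{\alpha }^{\iota}-2]} \, ,$$ in agreement with (C6), the remaining integral being an $\omega $-independent constant.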
From equation (C10) we find $$\hbox{Re}\Sigma_{\sigma} ( k_{F\sigma},\omega ) = \omega - {1\over {\hbox{Re} G_{\sigma} (k_{F\sigma },\omega )}} = \omega [1-{\omega^{-1-\varsigma_{\sigma}}\over {a^{\sigma }_0+\sum_{j=1}^{\infty}a^{\sigma }_j\omega^{4j}}}] \, .$$ While the function $\hbox{Re}G_{\sigma} (k_{F\sigma},\omega)$ (C9)-(C10) diverges as $\omega\rightarrow 0$, following the form of the self-energy (C11), the one-electron renormalization factor $(45)$ vanishes and there is no overlap between the quasiparticle and the electron, in contrast to a Fermi liquid. (In equation (C11) $\varsigma_{\sigma}\rightarrow -1$ and $a^{\sigma }_0\rightarrow 1$ when $U\rightarrow 0$.) Comparison of the terms of expressions (C3) and (C9) with the same $\{N_{ph}^{\alpha ,\iota}\}$ values, which refer to contributions from the same set of $N_{\{N_{ph}^{\alpha ,\iota}\}}$ Hamiltonian eigenstates $|\{N_{ph}^{\alpha ,\iota}\},l;k=0\rangle$ and refer to the limit $\omega\rightarrow 0$, leads to $$\sum_{l} |\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0| c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 = \lim_{\omega\to 0} a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\})\, \omega^{\varsigma_{\sigma } + 1 + 2\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota}} = 0\, .$$ Note that the functions on the rhs of Eq. (C12) corresponding to different matrix elements go to zero with different exponents. On the other hand, as for the corresponding excitation energies $(38)$, the dependence of functions associated with the amplitudes $|\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0| c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |$ on the vanishing energy $\omega $ is $l$ independent.
Therefore, we find $$|\langle \{N_{ph}^{\alpha ,\iota}\},l;k=0| c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 = \lim_{\omega\to 0} a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)\, \omega^{\varsigma_{\sigma } + 1 + 2\sum_{\alpha ,\iota} N_{ph}^{\alpha ,\iota}} = 0\, ,$$ where the constants $a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l)$ are $l$ dependent and obey the normalization condition $$a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\}) = \sum_{l} a^{\sigma }(\{N_{ph}^{\alpha ,\iota}\},l) \, .$$ It follows that the matrix elements of Eq. (C12) have the form given in Eq. $(51)$. Moreover, following our notation for the final ground state, when the four $N_{ph}^{\alpha ,\iota}$ vanish, Eq. (C13) leads to $$|\langle 0;f|c^{\dag }_{k_{F\sigma},\sigma}|0;i\rangle |^2 = \lim_{\omega\to 0} a^{\sigma }_0\,\omega^{\varsigma_{\sigma } + 1} = \lim_{\omega\to 0} Z_{\sigma}(\omega) = Z_{\sigma} = 0 \, ,$$ where $a^{\sigma }_0=a^{\sigma }(\{N_{ph}^{\alpha ,\iota}=0\},l)$ is a positive real constant and $Z_{\sigma}(\omega)$ is the function $(49)$. Following equation (C11), the function $Z_{\sigma}(\omega)$ is given by the leading-order term of expression $(46)$. Since $a^{\sigma }_0$ is real and positive, expression $(44)$ follows from Eq. (C15). In this Appendix we confirm that the finite two-quasiparticle functions $(60)$ of form $(62)$, which are generated from the divergent two-electron vertex functions $(59)$ by the singular electron-quasiparticle transformation $(58)$, control the charge and spin static quantities of the 1D many-electron problem. On the one hand, the parameters $v_{\rho}^{\iota}$ and $v_{\sigma_z}^{\iota}$ of Eq. $(59)$ can be shown to be fully determined by the two-quasiparticle functions $(60)$. By inverting relations $(60)$ with the vertices given by Eq. $(59)$, expressions $(61)$ follow. Physically, the singular electron-quasiparticle transformation $(58)$ maps the divergent two-electron functions onto the finite parameters $(60)$ and $(61)$.
On the other hand, the “velocities” $(61)$ play a relevant role in the charge and spin conservation laws and are simple combinations of the zero-momentum two-pseudoparticle forward-scattering $f$ functions and amplitudes introduced in Refs. [@Carmelo92] and [@Carmelo92b], respectively. Here we follow Ref. [@Carmelo94c] and use the general parameter $\vartheta $ which refers to $\vartheta =\rho$ for charge and $\vartheta =\sigma_z$ for spin. The interesting quantity associated with the equation of motion for the operator ${\hat{\rho}}_{\vartheta }^{(\pm)}(k,t)$ defined in Ref. [@Carmelo94c] is the following ratio $${i\partial_t {\hat{\rho}}_{\vartheta }^{(\pm)}(k,t)\over k}|_{k=0} = {[{\hat{\rho}}_{\vartheta }^{(\pm)}(k,t), :\hat{{\cal H}}:]\over k}|_{k=0} = v_{\vartheta}^{\mp 1} {\hat{\rho}}_{\vartheta }^{(\mp)}(0,t) \, ,$$ where the functions $v_{\vartheta}^{\pm 1}$ $(61)$ are closely related to two-pseudoparticle forward-scattering quantities as follows $$\begin{aligned} v_{\vartheta}^{+1} & = & {1\over {\left[\sum_{\alpha ,\alpha'}{k_{\vartheta\alpha}k_{\vartheta\alpha'} \over {v_{\alpha}v_{\alpha '}}} \left(v_{\alpha}\delta_{\alpha ,\alpha '} - {[A_{\alpha\alpha '}^{1}+ A_{\alpha\alpha '}^{-1}] \over {2\pi}}\right)\right]}}\nonumber \\ & = & {1\over {\left[\sum_{\alpha}{1\over {v_{\alpha}}} \left(\sum_{\alpha'}k_{\vartheta\alpha '}\xi_{\alpha\alpha '}^1\right)^2\right]}} \, ,\end{aligned}$$ and $$\begin{aligned} v_{\vartheta}^{-1} & = & \sum_{\alpha ,\alpha'}k_{\vartheta\alpha}k_{\vartheta\alpha'} \left(v_{\alpha}\delta_{\alpha ,\alpha '} + {[f_{\alpha\alpha '}^{1}- f_{\alpha\alpha '}^{-1}]\over {2\pi}}\right)\nonumber \\ & = & \sum_{\alpha}v_{\alpha} \left(\sum_{\alpha'}k_{\vartheta\alpha '} \xi_{\alpha\alpha '}^1\right)^2 \, .\end{aligned}$$ Here $k_{\vartheta\alpha}$ are integers given by $k_{\rho c}=k_{\sigma_{z} c}=1$, $k_{\rho s}=0$, and $k_{\sigma_{z} s}=-2$, and the parameters $\xi_{\alpha\alpha '}^j$ are defined in Eq. (A7). In the rhs of Eqs. 
(D2) and (D3) $v_{\alpha }$ are the $\alpha $ pseudoparticle group velocities (A6), the $f$ functions are given in Eq. (A4) and $A_{\alpha\alpha'}^{1}=A_{\alpha\alpha' }(q_{F\alpha}^{(\pm)}, q_{F\alpha'}^{(\pm)})$ and $A_{\alpha\alpha'}^{-1}= A_{\alpha\alpha'}(q_{F\alpha}^{(\pm)}, q_{F\alpha'}^{(\mp)})$, where $A_{\alpha\alpha'}(q,q')$ are the scattering amplitudes given by Eqs. $(83)-(85)$ of Ref. [@Carmelo92b]. The use of relations $(61)$ and of Eqs. (A5), (A6), (A8), (D2), and (D3) shows that the parameters $(60)$ and corresponding charge and spin velocities $v_{\vartheta}^{\pm 1}$ can also be expressed in terms of the pseudoparticle group velocities (A6) and Landau parameters (A8). These expressions are given in Eq. $(62)$ and in the Table. The charge and spin velocities control all static quantities of the many-electron system. They determine, for example, the charge and spin susceptibilities, $$K^{\vartheta }={1\over {\pi v_{\vartheta}^{+1}}} \, ,$$ and the coherent part of the charge and spin conductivity spectrum, $v_{\vartheta}^{-1}\delta (\omega )$, respectively [@Carmelo92; @Carmelo92b; @Carmelo94c]. D. Pines and P. Nozières, in [*The Theory of Quantum Liquids*]{}, (Addison-Wesley, Redwood City, 1989), Vol. I. Gordon Baym and Christopher J. Pethick, in [*Landau Fermi-Liquid Theory Concepts and Applications*]{}, (John Wiley & Sons, New York, 1991). J. Sólyom, Adv. Phys. [**28**]{}, 201 (1979). F. D. M. Haldane, J. Phys. C [**14**]{}, 2585 (1981). I. E. Dzyaloshinskii and A. I. Larkin, Sov. Phys. JETP [**38**]{}, 202 (1974); Walter Metzner and Carlo Di Castro, Phys. Rev. B [**47**]{}, 16 107 (1993). This ansatz was introduced for the case of the isotropic Heisenberg chain by H. A. Bethe, Z. Phys. [**71**]{}, 205 (1931). For one of the first generalizations of the Bethe ansatz to multicomponent systems see C. N. Yang, Phys. Rev. Lett. [**19**]{}, 1312 (1967). Elliott H. Lieb and F. Y. Wu, Phys. Rev. Lett. 
[**20**]{}, 1445 (1968); For a modern and comprehensive discussion of these issues, see V. E. Korepin, N. M. Bogoliubov, and A. G. Izergin, [*Quantum Inverse Scattering Method and Correlation Functions*]{} (Cambridge University Press, 1993). P. W. Anderson, Phys. Rev. Lett. [**64**]{}, 1839 (1990); Philip W. Anderson, Phys. Rev. Lett. [**65**]{}, 2306 (1990); P. W. Anderson and Y. Ren, in [*High Temperature Superconductivity*]{}, edited by K. S. Bedell, D. E. Meltzer, D. Pines, and J. R. Schrieffer (Addison-Wesley, Reading, MA, 1990). J. M. P. Carmelo and N. M. R. Peres, Nucl. Phys. B [**458**]{} \[FS\], 579 (1996). J. Carmelo and A. A. Ovchinnikov, Cargèse lectures, unpublished (1990); J. Phys.: Condens. Matter [**3**]{}, 757 (1991). F. D. M. Haldane, Phys. Rev. Lett. [**66**]{}, 1529 (1991); E. R. Mucciolo, B. Shastry, B. D. Simons, and B. L. Altshuler, Phys. Rev. B [**49**]{}, 15 197 (1994). J. Carmelo, P. Horsch, P.-A. Bares, and A. A. Ovchinnikov, Phys. Rev. B [**44**]{}, 9967 (1991). J. M. P. Carmelo, P. Horsch, and A. A. Ovchinnikov, Phys. Rev. B [**45**]{}, 7899 (1992). J. M. P. Carmelo and P. Horsch, Phys. Rev. Lett. [**68**]{}, 871 (1992); J. M. P. Carmelo, P. Horsch, and A. A. Ovchinnikov, Phys. Rev. B [**46**]{}, 14728 (1992). J. M. P. Carmelo, P. Horsch, D. K. Campbell, and A. H. Castro Neto, Phys. Rev. B [**48**]{}, 4200 (1993). J. M. P. Carmelo and A. H. Castro Neto, Phys. Rev. Lett. [**70**]{}, 1904 (1993); J. M. P. Carmelo, A. H. Castro Neto, and D. K. Campbell, Phys. Rev. B [**50**]{}, 3667 (1994). J. M. P. Carmelo, A. H. Castro Neto, and D. K. Campbell, Phys. Rev. B [**50**]{}, 3683 (1994). J. M. P. Carmelo, A. H. Castro Neto, and D. K. Campbell, Phys. Rev. Lett. [**73**]{}, 926 (1994); (E) [*ibid.*]{} [**74**]{}, 3089 (1995). J. M. P. Carmelo and N. M. R. Peres, Phys. Rev. B [**51**]{}, 7481 (1995). Holger Frahm and V. E. Korepin, Phys. Rev. B [**42**]{}, 10 553 (1990). Holger Frahm and V. E. Korepin, Phys. Rev. B [**43**]{}, 5653 (1991).
Fabian H. L. Essler, Vladimir E. Korepin, and Kareljan Schoutens, Phys. Rev. Lett. [**67**]{}, 3848 (1991); Nucl. Phys. B [**372**]{}, 559 (1992). Fabian H. L. Essler and Vladimir E. Korepin, Phys. Rev. Lett. [**72**]{}, 908 (1994). A. H. Castro Neto, H. Q. Lin, Y.-H. Chen, and J. M. P. Carmelo, Phys. Rev. B (1994). Philippe Nozières, in [*The theory of interacting Fermi systems*]{} (W. A. Benjamin, NY, 1964), page 100. Walter Metzner and Claudio Castellani, preprint (1994). R. Preuss, A. Muramatsu, W. von der Linden, P. Dieterich, F. F. Assaad, and W. Hanke, Phys. Rev. Lett. [**73**]{}, 732 (1994). Karlo Penc, Frédéric Mila, and Hiroyuki Shiba, Phys. Rev. Lett. [**75**]{}, 894 (1995). H. J. Schulz, Phys. Rev. Lett. [**64**]{}, 2831 (1990). Masao Ogata, Tadao Sugiyama, and Hiroyuki Shiba, Phys. Rev. B [**43**]{}, 8401 (1991). V. Meden and K. Schönhammer, Phys. Rev. B [**46**]{}, 15 753 (1992). J. Voit, Phys. Rev. B [**47**]{}, 6740 (1993).

TABLE

|              | $v^{\iota}_{\rho }$    | $v^{\iota}_{\sigma_z }$                               |
|--------------|------------------------|-------------------------------------------------------|
| $\iota = -1$ | $v_c + F^1_{cc}$       | $v_c + F^1_{cc} + 4(v_s + F^1_{ss} - F^1_{cs})$       |
| $\iota = 1$  | $(v_s + F^0_{ss})/L^0$ | $(v_s + F^0_{ss} + 4[v_c + F^0_{cc} + F^0_{cs}])/L^0$ |

\[tableI\]

Table I - Alternative expressions of the parameters $v^{\iota}_{\rho }$ (D1)-(D4) and $v^{\iota}_{\sigma_z }$ (D2)-(D5) in terms of the pseudoparticle velocities $v_{\alpha}$ (A6) and Landau parameters $F^j_{\alpha\alpha '}$ (A8), where $L^0=(v_c+F^0_{cc}) (v_s+F^0_{ss})-(F^0_{cs})^2$.
--- abstract: | In this work, we apply the factorization technique to the Benjamin-Bona-Mahony like equations in order to get travelling wave solutions. We will focus on some special cases for which $m\neq n$, and we will obtain these solutions in terms of Weierstrass functions. Email: kuru@science.ankara.edu.tr title: 'Travelling wave solutions of BBM-like equations by means of factorization' --- Ş. Kuru\ [*Department of Physics, Faculty of Science, Ankara University, 06100 Ankara, Turkey*]{} Introduction ============ In this paper, we will consider the Benjamin-Bona-Mahony (BBM) [@benjamin] like equation ($B(m,n)$) with a fully nonlinear dispersive term of the form $$u_{t}+u_{x}+a\,(u^m)_x-(u^n)_{xxt}=0, \quad\quad m,\,n>1,\,\,m \neq n\, .\label{1.3}$$ This equation is similar to the nonlinear dispersive equation $K(m,n)$, $$u_{t}+(u^m)_x+(u^n)_{xxx}=0, \quad\quad m>0,\,1<n\leq3 \label{1.2}$$ which has been studied in detail by P. Rosenau and J.M. Hyman [@rosenau]. In the literature there are many studies dealing with the travelling wave solutions of the $K(m,n)$ and $B(m,n)$ equations, but in general they are restricted to the case $m=n$ [@rosenau; @rosenau1; @wazwaz; @wazwaz1; @wazwaz2; @wazwaz3; @wazwaz5; @taha; @ludu; @wazwaz4; @yadong; @wang; @kuru]. When $m\neq n$, the solutions of $K(m,n)$ were investigated in [@rosenau; @rosenau1]. Our aim here is to search for solutions of the $B(m,n)$ equations, with $m\neq n$, by means of the factorization method. We remark that this method [@pilar1; @pilar; @Perez; @pilar2], when it is applicable, allows one to obtain directly and systematically a wide set of solutions, compared with other methods used in the BBM equations. For example, the direct integral method used by C. Liu [@liu] can only be applied to the $B(2,1)$ equation.
However, the factorization technique can be applied to more equations than the direct integral method and also, in some cases, it gives rise to more general solutions than the sine-cosine and the tanh methods [@wazwaz4; @yadong; @wazwaz6]. This factorization approach to find travelling wave solutions of nonlinear equations has been extended to third order nonlinear ordinary differential equations (ODE’s) by D-S. Wang and H. Li [@li]. When we look for the travelling wave solutions of Eq. (\[1.3\]), first we reduce the $B(m,n)$ equation to a second order nonlinear ODE, and then we can immediately apply the factorization technique. Here, we will assume $m \neq n$, since the case $m = n$ has already been examined in a previous article following this method [@kuru]. This paper is organized as follows. In section 2 we introduce the factorization technique for a special type of second order nonlinear ODE. Then, we apply the factorization directly to the related second order nonlinear ODE to get travelling wave solutions of the $B(m,n)$ equation in section 3. We obtain the solutions for these nonlinear ODE’s and the $B(m,n)$ equation in terms of Weierstrass functions in section 4. Finally, in section 5 we will add some remarks. Factorization of nonlinear second order ODE’s ============================================= Let us consider the following nonlinear second order ODE $$\label{9} \frac{d^2 W}{d \theta^2}-\beta \frac{d W}{d \theta}+F(W)=0$$ where $\beta$ is a constant and $F(W)$ is an arbitrary function of $W$. The factorized form of this equation can be written as $$\label{10} \left[\frac{d}{d \theta}-f_2(W,\theta)\right]\left[\frac{d}{d \theta}-f_1(W,\theta)\right] W(\theta)=0\,.$$ Here, $f_1$ and $f_2$ are unknown functions that may depend explicitly on $W$ and $\theta$.
Expanding (\[10\]) and comparing with (\[9\]), we obtain the following consistency conditions $$\label{12} f_1f_2=\frac{F(W)}{W}+\frac{\partial f_1}{\partial \theta}, \qquad f_2+\frac{\partial(W f_1)}{\partial W}=\beta.$$ If we solve (\[12\]) for $f_{1}$ or $f_{2}$, we can write a compatible first order ODE $$\label{14} \left[\frac{d}{d \theta}-f_1(W,\theta)\right] W(\theta)=0$$ that provides a solution for the nonlinear ODE (\[9\]) [@pilar1; @pilar; @Perez; @pilar2]. In the applications of this paper $f_{1}$ and $f_{2}$ will depend only on $W$. Factorization of the BBM-like equations ======================================== If Eq. (\[1.3\]) has travelling wave solutions of the form $$\label{15} u(x,t)=\phi(\xi),\quad\quad \xi=hx+wt$$ where $h$ and $w$ are real constants, then substituting (\[15\]) into (\[1.3\]) and integrating once, we reduce Eq. (\[1.3\]) to the second order nonlinear ODE $$\label{16} (\phi^n)_{\xi\xi}-A\,\phi-B\,\phi^m+D=0\,.$$ Notice that the constants in Eq. (\[16\]) are $$\label{17} A=\frac{h+w}{h^2\,w},\quad\quad B=\frac{a}{h\,w},\quad\quad D=\frac{R}{h^2\,w}$$ and $R$ is an integration constant. Now, if we introduce the following natural transformation of the dependent variable $$\label{18} \phi^n(\xi)=W(\theta),\quad\quad\xi=\theta$$ Eq. (\[16\]) becomes $$\label{19} \frac{d^2 W}{d \theta^2}-A\,W^{\frac{1}{n}}-B\,W^{\frac{m}{n}}+D=0.$$ Now, we can apply the factorization technique to Eq. (\[19\]). Comparing Eq. (\[9\]) and Eq. (\[19\]), we have $\beta=0$ and $$\label{20} F(W)=-(A\,W^{\frac{1}{n}}+B\,W^{\frac{m}{n}}-D)\,.$$ Then, from (\[12\]) we get only one consistency condition $$\label{23} f_1^2+f_1\,W\frac{df_1}{dW}-A\,W^{\frac{1-n}{n}}-B\, W^{\frac{m-n}{n}}+D\,W^{-1}=0\,$$ whose solutions are $$\label{24} f_1(W)=\pm\frac{1}{W}\sqrt{\frac{2\,n\,A}{n+1}\,W^{\frac{n+1}{n}}+ \frac{2\,n\,B}{m+n}\,W^{\frac{m+n}{n}}-2\, D\,W+C}\,$$ where $C$ is an integration constant.
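As a quick symbolic check of (\[23\])-(\[24\]) (our own sketch, not part of the original text), one can verify with sympy that $f_1$ of (\[24\]) satisfies the consistency condition (\[23\]); for concreteness we take the sample exponents $n=2$, $m=3$, although the cancellation works the same way for any $m\neq n$:

```python
# Symbolic check (a sketch, not from the paper) that f1 of Eq. (24)
# satisfies the consistency condition (23), for sample exponents n=2, m=3.
import sympy as sp

W, A, B, C, D = sp.symbols('W A B C D', positive=True)
n, m = 2, 3  # sample values; the algebraic cancellation holds for any m, n

# The radicand of Eq. (24)
h = (2*n*A/(n + 1))*W**sp.Rational(n + 1, n) \
    + (2*n*B/(m + n))*W**sp.Rational(m + n, n) - 2*D*W + C
f1 = sp.sqrt(h)/W  # Eq. (24), + sign

# Left-hand side of the consistency condition (23)
lhs = f1**2 + f1*W*sp.diff(f1, W) \
    - A*W**sp.Rational(1 - n, n) - B*W**sp.Rational(m - n, n) + D/W

print(sp.simplify(lhs))  # expected: 0
```

The square root cancels identically, so the residual vanishes for arbitrary symbolic $A$, $B$, $C$, $D$.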
Thus, the first order ODE (\[14\]) takes the form$$\label{25} \frac{dW}{d \theta}\mp\sqrt{\frac{2\,n\,A}{n+1}\,W^{\frac{n+1}{n}}+\frac{2\,n\,B}{m+n}\,W^{\frac{m+n}{n}}-2\, D\,W+C}=0\,.$$ In order to solve this equation for $W$ in a more general way, let us take $W$ in the form $W=\varphi^p,\,p\neq0,1$, then, the first order ODE (\[25\]) is rewritten in terms of $\varphi$ as $$\label{26} (\frac{d\varphi}{d \theta})^2= \frac{2\,n\,A}{p^2\,(n+1)}\,\varphi^{p(\frac{1-n}{n})+2}+ \frac{2\,n\,B}{p^2\,(m+n)}\,\varphi^{p(\frac{m-n}{n})+2}-\frac{2\,D}{p^2}\,\varphi^{2-p} +\frac{C}{p^2}\,\varphi^{2-2\,p}\,.$$ If we want to guarantee the integrability of (\[26\]), the powers of $\varphi$ have to be integer numbers between $0$ and $4$ [@ince]. Having in mind the conditions on $n, m$ ($n\neq m >1$) and $p$ ($p\neq0$), we have the following possible cases: - If $C=0,\,\,D=0$, we can choose $p$ and $m$ in the following way $$\label{27a} p=\pm \frac{2n}{1-n} \quad {\rm{with}} \quad m=\frac{n+1}{2},\frac{3\,n-1}{2}, 2\,n-1$$ and $$\label{27b} p=\pm \frac{n}{1-n}\quad {\rm{with}} \quad m= 2\,n-1,3\,n-2\,.$$ It can be checked that the two choices of sign in (\[27a\]) and (\[27b\]) give rise to the same solutions for Eq. (\[1.3\]). Therefore, we will consider only one of them. Then, taking $p=- \frac{2n}{1-n}$, Eq. 
(\[26\]) becomes $$\label{29} (\frac{d\varphi}{d \theta})^2=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(3\,n+1)}\,\varphi, \quad m=\frac{n+1}{2}$$ $$\label{30} (\frac{d\varphi}{d \theta})^2=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(5\,n-1)}\,\varphi^3, \quad m=\frac{3\,n-1}{2}$$ $$\label{31} (\frac{d\varphi}{d \theta})^2=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{2\,n\,(3\,n-1)}\,\varphi^4, \quad m=2\,n-1$$ and for $p=- \frac{n}{1-n}\,$, $$\label{32} (\frac{d\varphi}{d \theta})^2=\frac{2\,A\,(n-1)^2}{n\,(n+1)}\,\varphi+\frac{2\,B\,(n-1)^2}{n\,(3\,n-1)}\,\varphi^3, \quad m=2\,n-1$$ $$\label{33} (\frac{d\varphi}{d \theta})^2=\frac{2\,A\,(n-1)^2}{n\,(n+1)}\,\varphi+\frac{B\,(n-1)^2}{n\,(2\,n-1)}\,\varphi^4, \quad m=3\,n-2\,.$$ - If $C=0$, we have the special cases, $p=\pm 2$, $n=2$ with $m=3,4$. For the same reason as in the above case, we will consider only $p=2$. Then, Eq. (\[26\]) takes the form: $$\label{e23} (\frac{d\varphi}{d \theta})^2=-\frac{D}{2}+\frac{A}{3}\,\varphi+\frac{B}{5}\,\varphi^3, \quad m=3$$ $$\label{e24} (\frac{d\varphi}{d \theta})^2=-\frac{D}{2}+\frac{A}{3}\,\varphi+\frac{B}{6}\,\varphi^4, \quad m=4\,.$$ - If $A=C=0$, we have $p=\pm 2$ with $m=\displaystyle \frac{n}{2},\frac{3\,n}{2},2\,n$. In this case, for $p=-2$, Eq. (\[26\]) has the following form: $$\label{e2n2} (\frac{d\varphi}{d \theta})^2=-\frac{D}{2}\,\varphi^4+\frac{B}{3}\,\varphi^3, \quad m=\frac{n}{2}$$ $$\label{e3n2} (\frac{d\varphi}{d \theta})^2=-\frac{D}{2}\,\varphi^4+\frac{B}{5}\,\varphi, \quad m=\frac{3\,n}{2}$$ $$\label{e22n} (\frac{d\varphi}{d \theta})^2=-\frac{D}{2}\,\varphi^4+\frac{B}{6}, \quad m=2\,n\,.$$ - If $A=0$, we have $p=\pm 1$ with $m=2\,n,3\,n$.
Here we will also take only the case $p=1$; then we have the equations: $$\label{e2n} (\frac{d\varphi}{d \theta})^2=-2\,D\,\varphi+\frac{2}{3}\,B\,\varphi^3+C, \quad m=2n$$ $$\label{e3n} (\frac{d\varphi}{d \theta})^2=-2\,D\,\varphi+\frac{B}{2}\,\varphi^4+C, \quad m=3n\,.$$ - If $A=D=0$, we have $p=\displaystyle \pm \frac{1}{2}$ with $m=3\,n,5\,n$. Thus, for $p=\displaystyle \frac{1}{2}$, Eq. (\[26\]) becomes: $$\label{e3n1} (\frac{d\varphi}{d \theta})^2=2\,B\,\varphi^3+4\,C\varphi, \quad m=3n$$ $$\label{e3nn} (\frac{d\varphi}{d \theta})^2=\frac{4}{3}\,B\,\varphi^4+4\,C\,\varphi, \quad m=5n\,.$$ Travelling wave solutions for BBM-like equations ================================================ In this section, we will obtain the solutions of the differential equations (\[29\])-(\[33\]) in terms of the Weierstrass function, $\wp(\theta;g_{2},g_{3})$, which allow us to get the travelling wave solutions of the $B(m,n)$ equation (\[1.3\]). The rest of the equations (\[e23\])-(\[e3nn\]) can be dealt with in a similar way, but they will not be worked out here for the sake of brevity. First, we will give some properties of the $\wp$ function which will be useful in the following [@Bateman; @watson]. Relevant properties of the $\wp$ function ----------------------------------------- Let us consider a differential equation with a quartic polynomial $$\label{ef} \big(\frac{d\varphi}{d\theta}\big)^2 =P(\varphi) = a_{0}\,\varphi^4+4\,a_{1}\,\varphi^3+6\,a_{2}\,\varphi^2+4\,a_{3}\,\varphi+a_{4}\,.$$ The solution of this equation can be written in terms of the Weierstrass function, where the invariants $g_2$ and $g_3$ of (\[ef\]) are $$\label{gg} g_{2}= a_{0}\,a_{4}-4\,a_{1}\,a_{3}+3\,a_{2}^2,\ \ g_{3}= a_{0}\,a_{2}\,a_{4}+2\,a_{1}\,a_{2}\,a_{3}-a_{2}^{3}-a_{0}\,a_{3}^2-a_{1}^{2}\,a_{4}$$ and the discriminant is given by $\Delta=g_2^3-27\,g_3^2$.
Then, the solution $\varphi$ can be found as $$\label{x} \varphi(\theta)=\varphi_0+\frac{1}{4}P_\varphi(\varphi_0)\left(\wp(\theta;g_{2},g_{3})- \frac{1}{24}P_{\varphi\varphi}(\varphi_0)\right)^{-1}$$ where the subindex in $P_{\varphi}(\varphi_0)$ denotes the derivative with respect to $\varphi$, and $\varphi_0$ is one of the roots of the polynomial $P(\varphi)$ (\[ef\]). Depending on the selected root $\varphi_0$, we will have a solution with a different behavior [@kuru]. Here we also want to recall some other properties of the Weierstrass functions [@stegun]: i\) The case $g_2=1$ and $g_3=0$ is called the lemniscatic case $$\label{lc} \wp(\theta;g_{2},0)=g_2^{1/2}\,\wp(\theta\,g_{2}^{1/4};1,0),\qquad g_2>0\,$$ ii\) The case $g_2=-1$ and $g_3=0$ is called the pseudo-lemniscatic case $$\label{plc} \wp(\theta;g_{2},0)=|g_2|^{1/2}\,\wp(\theta\,|g_{2}|^{1/4};-1,0),\qquad g_2<0\,$$ iii\) The case $g_2=0$ and $g_3=1$ is called the equianharmonic case $$\label{ec} \wp(\theta;0,g_{3})=g_3^{1/3}\,\wp(\theta\,g_{3}^{1/6};0,1),\qquad g_3>0\,.$$ Once the solution $W(\theta)$ is obtained, taking into account (\[15\]), (\[18\]) and $W=\varphi^{p}$, the solution of Eq. (\[1.3\]) is obtained as $$\label{uxt} u(x,t)=\phi(\xi)=W^{\frac{1}{n}}(\theta)=\varphi^{\frac{p}{n}}(\theta),\quad\quad\theta=\xi=h\,x+w\,t.$$ The case $C=0,\,D=0$, $\displaystyle p=-\frac{2n}{1-n}$ ------------------------------------------------------- - $m=\displaystyle \frac{n+1}{2}$ Equation (\[29\]) can be expressed as $$(\frac{d\varphi}{d\theta})^2=P(\varphi)=\frac{A\,(n-1)^2} {2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(3\,n+1)}\,\varphi$$ and from $P(\varphi)=0$, we get the root of this polynomial $$\label{f01} \varphi_0=-\frac{A\,(3\,n+1)}{2\,B\,(n+1)}\,.$$ The invariants (\[gg\]) are: $g_{2}=g_{3}=0$, and $\Delta=0$.
Therefore, having in mind $\wp(\theta;0,0)=\displaystyle \frac{1}{\theta^2}$, we can find the solution of (\[29\]) from (\[x\]) for $\varphi_0$, given by (\[f01\]), $$\label{35} \varphi(\theta)= \frac{B^2\,(n-1)^2\,(n+1)\,\theta^2-2\,A\,n\,(3\,n+1)^2}{4\,B\,n\,(n+1)\,(3\,n+1)}\,.$$ Now, the solution of Eq. (\[1.3\]) reads from (\[uxt\]) $$\label{u1} u(x,t)=\left[\frac{B^2\,(n-1)^2\,(n+1)\,(h\,x+w\,t)^2-2\,A\,n\,(3\,n+1)^2} {4\,B\,n\,(n+1)\,(3\,n+1)}\right]^{\frac{2}{n-1}}\,.$$ - $m=\displaystyle \frac{3\,n-1}{2}$ In this case, our equation to solve is (\[30\]) and the polynomial has the form $$P(\varphi)=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{n\,(5\,n-1)}\,\varphi^3$$ with one real root: $\varphi_0=\left(\frac{-A\,(5\,n-1)}{2\,B\,(n+1)}\right)^{1/3}$. Here, the discriminant is different from zero with the invariants $$g_2=0,\qquad g_3=\frac{-A\,B^2\,(n-1)^6}{32\,n^3\,(n+1)\,(5\,n-1)^2}\,.$$ Then, the solution of (\[30\]) is obtained by (\[x\]) for $\varphi_0$, $$\label{phi2} \varphi=\varphi_0\,\left[\frac{4\,n\,(5\,n-1)\,\wp(\theta;0,g_3)+2\,B\,(n-1)^2\,\varphi_0 }{4\,n\,(5\,n-1)\,\wp(\theta;0,g_3)-B\,(n-1)^2\,\varphi_0}\right]$$ and we get the solution of Eq. (\[1.3\]) from (\[uxt\]) as $$\label{u2} u(x,t)=\left[\varphi_0^2\,\left(\frac{4\,n\,(5\,n-1)\,\wp(h\,x+w\,t;0,g_3)+2\,B\,(n-1)^2\,\varphi_0 }{4\,n\,(5\,n-1)\,\wp(h\,x+w\,t;0,g_3)-B\,(n-1)^2\,\varphi_0}\right)^2\right]^{\frac{1}{n-1}}$$ with the conditions: $A<0,g_3>0$, for $\varphi_0=\left(\frac{-A\,(5\,n-1)}{2\,B\,(n+1)}\right)^{1/3}$. Using the relation (\[ec\]), we can write the solution (\[u2\]) in terms of equianharmonic case of the Weierstrass function: $$\label{u21} u(x,t)=\left[\left(\frac{-A\,(5\,n-1)}{2\,B\,(n+1)}\right)^{2/3}\, \left(\frac{2^{2/3}\,\wp((h\,x+w\,t)\,g_3^{1/6};0,1)+2 }{2^{2/3}\,\wp((h\,x+w\,t)\,g_3^{1/6};0,1)-1}\right)^2\right]^{\frac{1}{n-1}}\,.$$ - $m=2\,n-1$ In Eq. 
(\[31\]), the quartic polynomial is $$P(\varphi)=\frac{A\,(n-1)^2}{2\,n\,(n+1)}+\frac{B\,(n-1)^2}{2\,n\,(3\,n-1)}\,\varphi^4$$ and has two real roots: $\varphi_0=\pm\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/4}$ for $A<0,\,B>0$ or $A>0,\,B<0$. In this case, the invariants are $$g_2=\frac{A\,B\,(n-1)^4}{4\,n^2\,(n+1)\,(3\,n-1)},\qquad g_3=0\,.$$ Here, also the discriminant is different from zero, $\Delta\neq0$. We obtain the solution of (\[31\]) from (\[x\]) for $\varphi_0$, $$\label{phi3} \varphi=\varphi_0\,\left[\frac{4\,n\,(n+1)\,\varphi_0^2\,\wp(\theta;g_2,0)-A\,(n-1)^2 }{4\,n\,(n+1)\,\varphi_0^2\,\wp(\theta;g_2,0)+A\,(n-1)^2}\right]$$ and we get the solution of Eq. (\[1.3\]) from (\[uxt\]) as $$\label{u3} u(x,t)=\left[\varphi_0^2\,\left(\frac{4\,n\,(n+1)\,\varphi_0^2\,\wp(h\,x+w\,t;g_2,0)-A\,(n-1)^2 }{4\,n\,(n+1)\,\varphi_0^2\,\wp(h\,x+w\,t;g_2,0)+A\,(n-1)^2}\right)^2\right]^{\frac{1}{n-1}}$$ with the conditions for real solutions: $A<0,\,B>0,\,g_2<0$ or $A>0,\,B<0,\,g_2<0$. Having in mind the relation (\[plc\]), the solution (\[u3\]) can be expressed in terms of the pseudo-lemniscatic case of the Weierstrass function: $$\label{u32} u(x,t)=\left[\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}\, \left(\frac{2\,\wp((h\,x+w\,t)|g_2|^{1/4};-1,0)+1 }{2\,\wp((h\,x+w\,t)|g_2|^{1/4};-1,0)-1}\right)^2\right]^{\frac{1}{n-1}}$$ for $A<0,\,B>0,\,g_2<0$ and $$\label{u31} u(x,t)=\left[\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}\, \left(\frac{2\,\wp((h\,x+w\,t)|g_2|^{1/4};-1,0)-1 }{2\,\wp((h\,x+w\,t)|g_2|^{1/4};-1,0)+1}\right)^2\right]^{\frac{1}{n-1}}$$ for $A>0,\,B<0,\,g_2<0$. 
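Solutions of this kind can be tested directly; the following sketch (our own check, not part of the paper) verifies with sympy that the parabolic profile (\[35\]) indeed solves Eq. (\[29\]) for general $n$:

```python
# Symbolic check (a sketch) that the profile (35) solves Eq. (29), m=(n+1)/2.
import sympy as sp

theta, A, B, n = sp.symbols('theta A B n', positive=True)

# phi(theta) from Eq. (35)
phi = (B**2*(n - 1)**2*(n + 1)*theta**2 - 2*A*n*(3*n + 1)**2) \
      / (4*B*n*(n + 1)*(3*n + 1))

# Right-hand side of Eq. (29)
rhs = A*(n - 1)**2/(2*n*(n + 1)) + B*(n - 1)**2/(n*(3*n + 1))*phi

residual = sp.simplify(sp.diff(phi, theta)**2 - rhs)
print(residual)  # expected: 0
```

The same kind of check can be run on the Weierstrass-type solutions by using the defining ODE $(\wp')^2=4\wp^3-g_2\wp-g_3$ instead of an explicit $\wp$.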
The case $C=0,\,D=0$, $\displaystyle p=- \frac{n}{1-n}$ ------------------------------------------------------- - $m=2\,n-1$ Now, the polynomial is cubic $$P(\varphi)=\frac{2\,A\,(n-1)^2}{n\,(n+1)}\,\varphi+\frac{2\,B\,(n-1)^2}{n\,(3\,n-1)}\,\varphi^3$$ and has three distinct real roots: $\varphi_0=0$ and $\varphi_0=\pm\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}$ for $A<0,\,B>0$ or $A>0,\,B<0$. Now, the invariants are $$g_2=\frac{-A\,B\,(n-1)^4}{n^2\,(n+1)\,(3\,n-1)},\qquad g_3=0$$ and $\Delta\neq0$. The solution of (\[32\]) is obtained from (\[x\]) for $\varphi_0$, $$\label{phi4} \varphi=\varphi_0\,\left[\frac{2\,n\,(n+1)\,\varphi_0\,\wp(\theta;g_2,0)-A\,(n-1)^2 }{2\,n\,(n+1)\,\varphi_0\,\wp(\theta;g_2,0)+A\,(n-1)^2}\right]$$ and substituting (\[phi4\]) in (\[uxt\]), we get the solution of Eq. (\[1.3\]) as $$\label{u4} u(x,t)=\left[\varphi_0\,\left(\frac{2\,n\,(n+1)\,\varphi_0\,\wp(h\,x+w\,t;g_2,0)-A\,(n-1)^2 }{2\,n\,(n+1)\,\varphi_0\,\wp(h\,x+w\,t;g_2,0)+A\,(n-1)^2}\right)\right]^{\frac{1}{n-1}}$$ with the conditions: $A<0,\,B>0,\,g_2>0$ and $A>0,\,B<0,\,g_2>0$ for $\varphi_0=\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}$. While the root $\varphi_0=0$ leads to the trivial solution, $u(x,t)=0$, the other root $\varphi_0=-\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}$ gives rise to imaginary solutions. Now, we can rewrite the solution (\[u4\]) in terms of the lemniscatic case of the Weierstrass function using the relation (\[lc\]) in (\[u4\]): $$\label{u41} u(x,t)=\left[\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}\, \left(\frac{2\,\wp((h\,x+w\,t)\,g_2^{1/4};1,0)+1 }{2\,\wp((h\,x+w\,t)\,g_2^{1/4};1,0)-1}\right)\right]^{\frac{1}{n-1}}$$ for $A<0,\,B>0,\,g_2>0$ and $$\label{u42} u(x,t)=\left[\left(\frac{-A\,(3\,n-1)}{B\,(n+1)}\right)^{1/2}\, \left(\frac{2\,\wp((h\,x+w\,t)\,g_2^{1/4};1,0)-1 }{2\,\wp((h\,x+w\,t)\,g_2^{1/4};1,0)+1}\right)\right]^{\frac{1}{n-1}}$$ for $A>0,\,B<0,\,g_2>0$. 
- $m=3\,n-2$ In this case, we also have a quartic polynomial $$P(\varphi)=\frac{2\,A\,(n-1)^2}{n\,(n+1)}\,\varphi+\frac{B\,(n-1)^2}{n\,(2\,n-1)}\,\varphi^4 \, .$$ It has two real roots: $\varphi_0=0$ and $\varphi_0=\left(-\frac{2\,A\,(2\,n-1)}{B\,(n+1)}\right)^{1/3}$. For the equation (\[33\]), the invariants are $$g_2=0,\qquad g_3=\frac{-A^2\,B\,(n-1)^6}{4\,n^3\,(n+1)^2\,(2\,n-1)}$$ and $\Delta\neq0$. Now, the solution of (\[33\]) reads from (\[x\]) for $\varphi_0$, $$\label{phi5} \varphi=\varphi_0\,\left[\frac{2\,n\,(n+1)\,\varphi_0\,\wp(\theta;0,g_3)-A\,(n-1)^2 }{2\,n\,(n+1)\,\varphi_0\,\wp(\theta;0,g_3)+2\,A\,(n-1)^2}\right]\,.$$ Then, the solution of Eq. (\[1.3\]) is obtained from (\[uxt\]) as $$\label{u5} u(x,t)=\left[\varphi_0\,\left(\frac{2\,n\,(n+1)\,\varphi_0\,\wp(h\,x+w\,t;0,g_3)-A\,(n-1)^2 }{2\,n\,(n+1)\,\varphi_0\,\wp(h\,x+w\,t;0,g_3)+2\,A\,(n-1)^2}\right)\right]^{\frac{1}{n-1}}$$ with the conditions: $B<0,\,g_3>0$. Taking into account the relation (\[ec\]), this solution can also be expressed in terms of the equianharmonic case of the Weierstrass function: $$\label{u51} u(x,t)=\left[\left(-\frac{2\,A\,(2\,n-1)}{B\,(n+1)}\right)^{1/3}\, \left(\frac{2^{2/3}\,\wp((h\,x+w\,t)\,g_3^{1/6};0,1)-1 }{2^{2/3}\,\wp((h\,x+w\,t)\,g_3^{1/6};0,1)+2}\right)\right]^{\frac{1}{n-1}}\,.$$ We have also plotted these solutions for some special values in Figs. (\[figuras1\])-(\[figuras333\]). We can see that, except for the parabolic case (\[35\]), the considered solutions consist of periodic waves, some singular and others regular. Their amplitude is governed by the non-vanishing constants $A,B$ and their formulas are given in terms of the special forms (\[lc\])-(\[ec\]) of the $\wp$ function.
![The left figure corresponds to the solution (\[u31\]) for $h=-2$, $w=1$, $a=-1$, $n=3$, $m=5$ and the right one corresponds to the solution (\[u32\]) for $h=1$, $w=1$, $a=-1$, $n=3$, $m=5$.[]{data-label="figuras1"}](235b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u31\]) for $h=-2$, $w=1$, $a=-1$, $n=3$, $m=5$ and the right one corresponds to the solution (\[u32\]) for $h=1$, $w=1$, $a=-1$, $n=3$, $m=5$.[]{data-label="figuras1"}](235a.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u31\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=3$ and the right one corresponds to the solution (\[u32\]) for $h=1$, $w=1$, $a=-1$, $n=2$, $m=3$.[]{data-label="figuras111"}](223b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u31\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=3$ and the right one corresponds to the solution (\[u32\]) for $h=1$, $w=1$, $a=-1$, $n=2$, $m=3$.[]{data-label="figuras111"}](223a.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u41\]) for $h=-2$, $w=1$, $a=-1$, $n=3$, $m=5$ and the right one corresponds to the solution (\[u42\]) for $h=1$, $w=1$, $a=-1$, $n=3$, $m=5$. []{data-label="figuras2"}](135b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u41\]) for $h=-2$, $w=1$, $a=-1$, $n=3$, $m=5$ and the right one corresponds to the solution (\[u42\]) for $h=1$, $w=1$, $a=-1$, $n=3$, $m=5$. []{data-label="figuras2"}](135a.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u41\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=3$ and the right one corresponds to the solution (\[u42\]) for $h=1$, $w=1$, $a=-1$, $n=2$, $m=3$. []{data-label="figuras222"}](123b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u41\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=3$ and the right one corresponds to the solution (\[u42\]) for $h=1$, $w=1$, $a=-1$, $n=2$, $m=3$. 
[]{data-label="figuras222"}](123a.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u21\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=5/2$ and the right one corresponds to the solution (\[u51\]) for $h=1$, $w=1$, $a=-1$, $n=3/2$, $m=5/2$.[]{data-label="figuras333"}](2252b.eps "fig:"){width="40.00000%"} ![The left figure corresponds to the solution (\[u21\]) for $h=-2$, $w=1$, $a=-1$, $n=2$, $m=5/2$ and the right one corresponds to the solution (\[u51\]) for $h=1$, $w=1$, $a=-1$, $n=3/2$, $m=5/2$.[]{data-label="figuras333"}](13252a.eps "fig:"){width="40.00000%"} Lagrangian and Hamiltonian ========================== Since Eq. (\[19\]) has the form of an equation of motion, we can write the corresponding Lagrangian $$\label{lag} L_W=\frac{1}{2}\,W_{\theta}^2+\frac{A\,n}{n+1}\,W^\frac{n+1}{n}+\frac{B\,n}{m+n}\,W^\frac{m+n}{n}-D\,W\,$$ and the Hamiltonian $H_W=W_{\theta} P_W -L_W$ reads $$H_W(W,P_W,\theta)=\frac{1}{2}\left[P_W^2-\left(\frac{2\,A\,n}{n+1}\,W^\frac{n+1}{n}+ \frac{2\,B\,n}{m+n}\,W^\frac{m+n}{n}-2\,D\,W\right)\right] \label{hamil}$$ where the canonical momentum is $$P_W=\frac{\partial L_W}{\partial W_\theta}=W_\theta.\label{mo}$$ The independent variable $\theta$ does not appear explicitly in (\[hamil\]), so $H_W$ is a constant of motion, $H_W=E$, with $$E=\frac{1}{2} \left[\left(\frac{dW}{d\theta}\right)^2-\left(\frac{2\,A\,n}{n+1}\,W^\frac{n+1}{n}+ \frac{2\,B\,n}{m+n}\,W^\frac{m+n}{n}-2\,D\,W\right)\right].\label{ee}$$ Note that this equation also leads to the first-order ODE (\[25\]) with the identification $C=2\,E$.
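The conservation of $H_W$ can be checked symbolically. The following sketch (symbol names are ours) builds $L_W$ and $H_W$ as above and verifies that $dH_W/d\theta=0$ along solutions of the Euler–Lagrange equation $W_{\theta\theta}=A\,W^{1/n}+B\,W^{m/n}-D$:

```python
import sympy as sp

# Symbolic check that H_W is conserved along solutions of the motion equation.
# Symbol names are ours; the exponents follow the Lagrangian (lag).
theta, A, B, D, n, m = sp.symbols('theta A B D n m', positive=True)
W = sp.Function('W', positive=True)(theta)
Wt = W.diff(theta)

V = A*n/(n + 1)*W**((n + 1)/n) + B*n/(m + n)*W**((m + n)/n) - D*W
L = sp.Rational(1, 2)*Wt**2 + V        # Lagrangian (lag)
P = sp.diff(L, Wt)                     # canonical momentum (mo): P = W_theta
H = Wt*P - L                           # Hamiltonian (hamil): H = W_theta^2/2 - V

eom = sp.diff(V, W)                    # Euler-Lagrange: W'' = A W^{1/n} + B W^{m/n} - D
dH = sp.diff(H, theta).subs(W.diff(theta, 2), eom)
print(sp.simplify(dH))                 # 0: H_W is a constant of motion
```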
Now, the energy $E$ can be expressed as a product of two independent constants of motion $$E=\frac{1}{2}\, I_+\,I_- \label{5.1}$$ where $$I_{\pm}(z)=\left(W_\theta\mp \sqrt{\frac{2\,A\,n}{n+1}\,W^\frac{n+1}{n}+ \frac{2\,B\,n}{m+n}\,W^\frac{m+n}{n}-2\,D\,W}\,\right) \,e^{\pm S(\theta)} \label{const}$$ and the phase $S(\theta)$ is chosen in such a way that $I_\pm(\theta)$ are constants of motion ($dI_\pm(\theta)/d\theta=0 $) $$S(\theta)=\int \frac{A\,W^{\frac{1}{n}}+B\,W^{\frac{m}{n}}-D}{\sqrt{\frac{2\,A\,n}{n+1}\,W^\frac{n+1}{n}+ \frac{2\,B\,n}{m+n}\,W^\frac{m+n}{n}-2\,D\,W}}\,d\theta.$$ Conclusions =========== In this paper, we have applied the factorization technique to the $B(m,n)$ equations in order to obtain travelling wave solutions. We have considered some representative cases of the $B(m,n)$ equation for $m\neq n$. By using this method, we obtained the travelling wave solutions in a very compact form, where the constants modulate the amplitude, in terms of some special forms of the Weierstrass elliptic function: lemniscatic, pseudo-lemniscatic and equianharmonic. Furthermore, these solutions are valid not only for integer $m$ and $n$ but also for non-integer $m$ and $n$. The case $m=n$ for the $B(m,n)$ equations has been examined by means of the factorization technique in a previous paper [@kuru], where compactons and kink-like solutions recovering all the solutions previously reported were constructed. Here, for $m\neq n$, solutions with compact support can also be obtained following a similar procedure. We note that this method is systematic and gives rise to a variety of solutions for nonlinear equations. We have also built the Lagrangian and Hamiltonian for the second-order nonlinear ODE corresponding to the travelling wave reduction of the $B(m,n)$ equation. Since the Hamiltonian is a constant of motion, we have expressed the energy as a product of two independent constants of motion.
We have then seen that these factors are related to first-order ODEs that allow us to obtain the solutions of the nonlinear second-order ODE. We remark that the Lagrangian underlying the nonlinear system also makes it possible to obtain solutions of the system. There are some interesting papers in the literature where, starting from the Lagrangian, it is shown how to obtain compactons or kink-like travelling wave solutions of some nonlinear equations [@arodz; @adam; @gaeta1; @gaeta2; @gaeta3]. Acknowledgments {#acknowledgments .unnumbered} =============== Partial financial support from Junta de Castilla y León (Spain), Project GR224, is acknowledged. The author thanks Dr. Javier Negro for useful discussions. [99]{} T.B. Benjamin, J.L. Bona and J.J. Mahony, [Philos. Trans. R. Soc., Ser. A]{} 272 (1972) 47. P. Rosenau, J.M. Hyman, [Phys. Rev. Lett.]{} 70 (1993) 564. P. Rosenau, [Phys. Lett. A]{} 275 (2000) 193. A.-M. Wazwaz, T. Taha, [Math. Comput. Simul.]{} 62 (2003) 171. A.-M. Wazwaz, [Appl. Math. Comput.]{} 133 (2002) 229. A.-M. Wazwaz, [Math. Comput. Simul.]{} 63 (2003) 35. A.-M. Wazwaz, [Appl. Math. Comput.]{} 139 (2003) 37. A.-M. Wazwaz, [Chaos, Solitons and Fractals]{} 28 (2006) 454. M.S. Ismail, T.R. Taha, [Math. Comput. Simul.]{} 47 (1998) 519. A. Ludu, J.P. Draayer, [Physica D]{} 123 (1998) 82. A.-M. Wazwaz, M.A. Helal, [Chaos, Solitons and Fractals]{} 26 (2005) 767. S. Yadong, [Chaos, Solitons and Fractals]{} 25 (2005) 1083. L. Wang, J. Zhou, L. Ren, [Int. J. Nonlinear Science]{} 1 (2006) 58. Ş. Kuru, (2008) arXiv:0810.4166. P.G. Estévez, Ş. Kuru, J. Negro and L.M. Nieto, to appear in [Chaos, Solitons and Fractals]{} (2007) arXiv:0707.0760. P.G. Estévez, Ş. Kuru, J. Negro and L.M. Nieto, [J. Phys. A: Math. Gen.]{} 39 (2006) 11441. O. Cornejo-Pérez, J. Negro, L.M. Nieto and H.C. Rosu, [Found. Phys.]{} 36 (2006) 1587. P.G. Estévez, Ş. Kuru, J. Negro and L.M. Nieto, [J. Phys. A: Math. Theor.]{} 40 (2007) 9819. C. Liu, (2006) arXiv:nlin/0609058. A.-M. Wazwaz, [Phys. Lett.
A]{} 355 (2006) 358. D.-S. Wang and H. Li, [J. Math. Anal. Appl.]{} 243 (2008) 273. M.A. Helal, [Chaos, Solitons and Fractals]{} 13 (2002) 1917. Ji-H. He, Xu-H. Wu, [Chaos, Solitons and Fractals]{} 29 (2006) 108. E.L. Ince, [Ordinary Differential Equations]{}, Dover, New York, 1956. A. Erdelyi et al, [The Bateman Manuscript Project. Higher Transcendental Functions]{},FL: Krieger Publishing Co., Malabar, 1981. E.T. Whittaker and G. Watson, [A Course of Modern Analysis]{}, Cambridge University Press, Cambridge, 1988. M. Abramowitz and I.A. Stegun, [Handbook of Mathematical Functions]{}, Dover, New York, 1972. H. Arodź, [Acta Phys. Polon. B]{} 33 (2002) 1241. C. Adam, J. Sánchez-Guillén and A. Wereszczyński, [J. Phys. A:Math. Theor.]{} 40 (2007) 13625. M. Destrade, G. Gaeta, G. Saccomandi, [Phys. Rev. E]{} 75 (2007) 047601. G. Gaeta, T. Gramchev and S. Walcher, [J. Phys. A: Math. Theor.]{} 40 (2007) 4493. G. Gaeta, [Europhys. Lett.]{} 79 (2007) 20003.
--- author: - 'Dimitrios A. Gouliermis' - Stefan Schmeja - Volker Ossenkopf - 'Ralf S. Klessen' - 'Andrew E. Dolphin' title: Hierarchically Clustered Star Formation in the Magellanic Clouds --- Method: The identification of stellar clusters {#sec:1} ============================================== For the investigation of the clustering behavior of stars it is necessary to thoroughly characterize distinct concentrations of stars, which can only be achieved by the accurate identification of individual stellar clusters. Considering the importance of this process, different identification methods were developed, which can be classified into two families. The first, represented by [*friend-of-friend*]{} algorithms and [*cluster analysis*]{} techniques, e.g., [@battinelli96], is designed for limited samples of observed stars, and thus is based on linking individual stars into coherent stellar groups. These methods have recently been superseded by [*minimum spanning trees*]{}, e.g., [@bastian09]. The second family of identification codes, represented by [*nearest-neighbors*]{} and [*star-counts*]{}, makes use of surface stellar density maps constructed from rich observed stellar samples. Distinct stellar systems are identified as statistically significant over-densities with respect to the average stellar density in the observed regions, e.g., [@gouliermis10]. Tests on artificial clusters of various density gradients and shapes showed that the latter (density) techniques are more robust in detecting real stellar concentrations, provided that rich stellar samples are available [@schmeja11]. A schematic representation of stellar density maps constructed with star-counts is shown in Fig. \[fig:1\]. ![Schematic of the star-count process. (a) The chart of an observed stellar sample.
(b) The corresponding stellar density map, after counting stars in a quadrilateral grid of elements (pixels) of size $1.8\arcsec$ each, and after filtering the map with a Gaussian of FWHM$\simeq2.8$px ($\sim 5\arcsec$). (c) The corresponding isodensity contour map. Isopleths at levels $\gtrsim 3\sigma$ are indicated with white lines. \[fig:1\]](fig1.ps) ![Isodensity contour map from star-counts of the young bright main-sequence and faint PMS populations identified with HST/ACS in the region of NGC 346 in the SMC. Lines represent isopleths of significance $\gtrsim 1\sigma$. Apart from the dominating central large stellar aggregate, there are peripheral young sub-clusters, revealed as statistically important stellar concentrations. The central aggregate, denoted by the 1$\sigma$ isopleth, encompasses various distinct sub-groups, which appear at higher density thresholds. NGC 346 itself appears at $\gtrsim 3\sigma$ significance. \[fig:2\]](fig2.ps) Data: Stellar clustering in the region NGC 346/N66 {#sec:2} ================================================== One of the most prominent bright stellar systems in the Small Magellanic Cloud (SMC) is the stellar association NGC 346, related to the H II region LHA 115-N66 [@henize56], the brightest in this galaxy. This system appears in partially-resolved observations from the ground as a single stellar concentration, but recent imaging with the [*Advanced Camera for Surveys*]{} onboard the Hubble Space Telescope (HST) allowed the detection of smaller sub-clusters within the boundaries of the H II nebula. The images were collected within the HST GO Program 10248 and were retrieved from the HST Data Archive. Their photometry demonstrated that the faint young stellar populations in the region are still in their pre–main-sequence (PMS) phase, and revealed a plethora of sub-solar PMS stars [@gouliermis06].
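The star-count procedure of Fig. \[fig:1\] is easy to reproduce on synthetic data. The sketch below (NumPy/SciPy; all numbers are illustrative, not the NGC 346 values) bins an artificial stellar sample on a grid, smooths it with a Gaussian kernel, and flags pixels above the $3\sigma$ significance level:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# toy "chart": a compact cluster embedded in a uniform stellar field
field = rng.uniform(0.0, 100.0, size=(800, 2))
cluster = rng.normal(loc=(50.0, 50.0), scale=3.0, size=(200, 2))
stars = np.vstack([field, cluster])

pix = 2.0                                    # pixel size (arbitrary units)
edges = np.arange(0.0, 100.0 + pix, pix)
counts, _, _ = np.histogram2d(stars[:, 0], stars[:, 1], bins=[edges, edges])

# smooth with a Gaussian kernel; sigma = FWHM / 2.355, here FWHM ~ 2.8 px
density = gaussian_filter(counts, sigma=2.8 / 2.355)

# significance map: over-densities in units of the map's standard deviation
sig = (density - density.mean()) / density.std()
print(sig.max() > 3)                         # the injected cluster exceeds 3 sigma
```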
Our [*nearest-neighbor*]{} cluster analysis of the observed young stellar populations, i.e., the bright main-sequence (down to $m_{555} \lesssim 21$) and the faint PMS stars, revealed a significant number of smaller, previously unresolved, young stellar sub-clusters [@schmeja09]. This clustering behavior of young stars in NGC 346 is further demonstrated here by the stellar density contour map of Fig. \[fig:2\], constructed with star-counts. Results: Hierarchical clustering of young stars {#sec:3} =============================================== The map of Fig. \[fig:2\] shows significant sub-structure, in particular within the 1$\sigma$ boundaries of the central dominant stellar aggregate. This structuring behavior indicates hierarchy. The minimum spanning tree (MST) of the young stars in the whole region makes it possible to determine the statistical $\mathcal{Q}$ parameter introduced by [@cw04]. This parameter is a measure of the fractal dimension $D$ of a stellar group, allowing one to distinguish between centrally concentrated clusters and hierarchical clusters with fractal substructure. The application of the MST to our data shows that the region NGC 346/N66 is highly hierarchical with a $\mathcal{Q}$ value that corresponds to a fractal dimension $D \simeq 2.5$. Constructing surface stellar density maps allows us to further characterize the clustering behavior of stars with the application of tools originally designed for the study of the structuring of the interstellar medium (ISM), as observed at far-infrared or longer wavelengths. The so-called [*dendrograms*]{} are used for the visualization of hierarchy through structural trees [@rosolowsky08]. The dendrogram of the stellar density map of NGC 346 demonstrates that the observed hierarchy is mostly due to the substructure in the dominant stellar aggregate.
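A minimal implementation of the $\mathcal{Q}$ parameter of [@cw04] is sketched below, assuming the usual normalizations (mean MST edge length normalized by $\sqrt{N A}/(N-1)$, mean pairwise separation normalized by the cluster radius); the radius/area convention may differ in detail from the original paper:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def q_parameter(points):
    """Q = mbar/sbar for a 2-D stellar sample (Cartwright & Whitworth 2004)."""
    n = len(points)
    mst = minimum_spanning_tree(squareform(pdist(points)))
    mean_edge = mst.data.mean()          # mean of the n-1 MST edge lengths
    r = np.linalg.norm(points - points.mean(axis=0), axis=1).max()
    area = np.pi * r**2                  # circular cluster area
    mbar = mean_edge / (np.sqrt(n * area) / (n - 1))
    sbar = pdist(points).mean() / r      # normalized mean separation
    return mbar / sbar

rng = np.random.default_rng(1)
uniform = rng.uniform(-1.0, 1.0, size=(500, 2))
print(q_parameter(uniform))   # near the Q ~ 0.8 boundary: no strong structure
```

Centrally concentrated clusters push $\mathcal{Q}$ above this boundary, while fractal substructure pushes it below.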
The $\Delta$-variance analysis [@stutzki98; @ossenkopf08] is a robust structure analysis method that measures the amount of structure on a given scale $l$. In principle the $\Delta$-variance is directly related to the power spectrum of the map, and thus for a power law spectrum of index $-\beta$, the $\Delta$-variance also follows a power law, $\displaystyle \sigma_\Delta^2 \propto l^\alpha$, with $\alpha={\beta-2}$. The application of the $\Delta$-variance analysis to the surface stellar density map of NGC 346 verifies that the clustering of the young stars in the region is indeed self-similar (Fig. \[fig:3\]), with a spectral index $\beta \simeq 2.8$, corresponding to a fractal dimension $D=2.6$ of the corresponding fractional Brownian motion structure [@stutzki98], similar to that previously derived for Galactic molecular clouds. Self-similarity appears to break, i.e., we find different hierarchical properties for the short-range scaling and for the behavior at the overall scale of the region, at length-scales $l \geq 25$ px, corresponding to angular scales of $\sim 40\arcsec$ ($\sim 11$ pc at the distance of the SMC). ![The $\Delta$-variance spectrum of the surface stellar density map of the entire region of NGC 346/N66. This analysis shows that the young stellar populations in this region are hierarchically structured up to length-scales of $\sim 40\arcsec$. The spectral index $\beta$ is determined from the fit of the spectrum for data between lags $4\arcsec$ and $13\arcsec$ (indicated by the gray shaded area). The dashed line indicates the virtual beam size used ($5\arcsec$). \[fig:3\]](fig3.ps) D.A.G., S.S. and V.O. kindly acknowledge support from the German Research Foundation (DFG) through grants GO 1659/3-1, SFB 881 and OS 177/2-1, respectively. Based on observations made with the NASA/ESA [*Hubble Space Telescope*]{}, obtained from the data archive at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
[99.]{} Bastian, N., et al. 2009. Mon. Not. R. Astron. Soc. 392, 868 Battinelli, P., Efremov, Y., & Magnier, E. A. 1996. Astron. Astrophys. 314, 51 Cartwright, A., & Whitworth, A. P. 2004. Mon. Not. R. Astron. Soc. 348, 589 Gouliermis, D. A., et al. 2006. Astroph. J. Suppl. Ser. 166, 549 Gouliermis, D. A., et al. 2010. Astroph. J. 725, 1717 Henize, K. G. 1956. Astroph. J. Suppl. Ser. 2, 315 Ossenkopf, V., Krips, M., & Stutzki, J. 2008. Astron. Astrophys. 485, 917 Rosolowsky, E. W., et al. 2008. Astroph. J. 679, 1338 Schmeja, S., Gouliermis, D. A., & Klessen, R. S. 2009. Astroph. J. 694, 367 Schmeja, S. 2011, Astronomische Nachrichten, 332, 172 Stutzki J., et al. 1998. Astron. Astrophys. 336, 697
--- bibliography: - 'references-Forstneric.bib' --- [**Holomorphic embeddings and immersions\ of Stein manifolds: a survey**]{} [**Franc Forstnerič**]{} > [**Abstract**]{} In this paper we survey results on the existence of holomorphic embeddings and immersions of Stein manifolds into complex manifolds. Most of them pertain to proper maps into Stein manifolds. We include a new result saying that every continuous map $X\to Y$ between Stein manifolds is homotopic to a proper holomorphic embedding provided that $\dim Y>2\dim X$ and we allow a homotopic deformation of the Stein structure on $X$. > > [**Keywords**]{} Stein manifold, embedding, density property, Oka manifold > > [**MSC (2010):**]{} > > 32E10, 32F10, 32H02, 32M17, 32Q28, 58E20, 14C30 Introduction {#sec:intro} ============ In this paper we review what we know about the existence of holomorphic embeddings and immersions of Stein manifolds into other complex manifolds. The emphasis is on recent results, but we also include some classical ones for the sake of completeness and historical perspective. Recall that Stein manifolds are precisely the closed complex submanifolds of Euclidean spaces ${\mathbb{C}}^N$ (see Remmert [@Remmert1956], Bishop [@Bishop1961AJM], and Narasimhan [@Narasimhan1960AJM]; cf. Theorem \[th:classical\]). Stein manifolds of dimension $1$ are open Riemann surfaces (see Behnke and Stein [@BehnkeStein1949]). A domain in ${\mathbb{C}}^n$ is Stein if and only if it is a domain of holomorphy (see Cartan and Thullen [@CartanThullen1932]). For more information, see the monographs [@Forstneric2017E; @GrauertRemmert1979; @GunningRossi2009; @HormanderSCV]. In §\[sec:Euclidean\] we survey results on the existence of proper holomorphic immersions and embeddings of Stein manifolds into Euclidean spaces. Of special interest are the minimal embedding and immersion dimensions. 
Theorem \[th:EGS\], due to Eliashberg and Gromov [@EliashbergGromov1992AM] (1992) and Schürmann [@Schurmann1997] (1997), settles this question for Stein manifolds of dimension $>1$. It remains an open problem whether every open Riemann surface embeds holomorphically into ${\mathbb{C}}^2$; we describe its current status in §\[ss:RS\]. We also discuss the use of holomorphic automorphisms of Euclidean spaces in the construction of wild holomorphic embeddings (see §\[ss:wild\] and §\[ss:complete\]). It has recently been discovered by Andrist et al. [@AndristFRW2016; @AndristWold2014; @Forstneric-immersions] that there is a large class of Stein manifolds $Y$ which contain every Stein manifold $X$ with $2\dim X< \dim Y$ as a closed complex submanifold (see Theorem \[th:density\]). In fact, this holds for every Stein manifold $Y$ enjoying Varolin’s [*density property*]{} [@Varolin2000; @Varolin2001]: the Lie algebra of all holomorphic vector fields on $Y$ is spanned by the ${\mathbb{C}}$-complete vector fields, i.e., those whose flow is an action of the additive group $({\mathbb{C}},+)$ by holomorphic automorphisms of $Y$ (see Definition \[def:density\]). Since the domain $({\mathbb{C}}^*)^n$ enjoys the volume density property, we infer that every Stein manifold $X$ of dimension $n$ admits a proper holomorphic immersion to $({\mathbb{C}}^*)^{2n}$ and a proper pluriharmonic map into ${\mathbb{R}}^{2n}$ (see Corollary \[cor:harmonic\]). This provides a counterexample to the Schoen-Yau conjecture [@SchoenYau1997] for any Stein source manifold (see §\[ss:S-Y\]). The class of Stein manifolds (in particular, of affine algebraic manifolds) with the density property is quite large and contains most complex Lie groups and homogeneous spaces, as well as many nonhomogeneous manifolds. This class has been the focus of intensive research during the last decade; we refer the reader to the recent surveys [@KalimanKutzschebauch2015] and [@Forstneric2017E §4.10].
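The passage from the proper immersion into $({\mathbb{C}}^*)^{2n}$ to a proper pluriharmonic map into ${\mathbb{R}}^{2n}$ can be sketched as follows (our paraphrase of the standard argument, not a quotation of the proof of the corollary):

```latex
Let $f=(f_1,\dots,f_{2n})\colon X\to(\mathbb{C}^*)^{2n}$ be a proper
holomorphic immersion and set
\[
  u=(u_1,\dots,u_{2n})\colon X\to\mathbb{R}^{2n},\qquad u_j=\log|f_j| .
\]
Each $u_j$ is pluriharmonic. If $u(x_k)$ stayed bounded along a divergent
sequence $x_k$ in $X$, then all $|f_j(x_k)|$ would remain in a compact subset
of $(0,\infty)$, so $f(x_k)$ would stay in a compact subset of
$(\mathbb{C}^*)^{2n}$, contradicting the properness of $f$. Hence $u$ is a
proper pluriharmonic map.
```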
An open problem posed by Varolin [@Varolin2000; @Varolin2001] is whether every contractible Stein manifold with the density property is biholomorphic to a Euclidean space. In §\[sec:PSC\] we recall a result of Drinovec Drnovšek and the author [@DrinovecForstneric2007DMJ; @DrinovecForstneric2010AJM] to the effect that every smoothly bounded, strongly pseudoconvex Stein domain $X$ embeds properly holomorphically into an arbitrary Stein manifold $Y$ with $\dim Y>2\dim X$. More precisely, every continuous map $\overline X\to Y$ which is holomorphic on $X$ is homotopic to a proper holomorphic embedding $X{\hookrightarrow}Y$ (see Theorem \[th:BDF2010\]). The analogous result holds for immersions if $\dim Y\ge 2\dim X$, and also for every $q$-complete manifold $Y$ with $q\in \{1,\ldots,\dim Y-2\dim X+1\}$, where the Stein case corresponds to $q=1$. This summarizes a long line of previous results. In §\[ss:Hodge\] we mention a recent application of these techniques to the [*Hodge conjecture*]{} for the highest dimensional a priori nontrivial cohomology group of a $q$-complete manifold [@FSS2016]. In §\[ss:complete\] we survey recent results on the existence of [*complete*]{} proper holomorphic embeddings and immersions of strongly pseudoconvex domains into balls. Recall that a submanifold of ${\mathbb{C}}^N$ is said to be [*complete*]{} if every divergent curve in it has infinite Euclidean length. In §\[sec:soft\] we show how the combination of the techniques from [@DrinovecForstneric2007DMJ; @DrinovecForstneric2010AJM] with those of Slapar and the author [@ForstnericSlapar2007MRL; @ForstnericSlapar2007MZ] can be used to prove that, if $X$ and $Y$ are Stein manifolds and $\dim Y>2\dim X$, then every continuous map $X\to Y$ is homotopic to a proper holomorphic embedding up to a homotopic deformation of the Stein structure on $X$ (see Theorem \[th:soft\]). 
The analogous result holds for immersions if $\dim Y\ge 2\dim X$, and for $q$-complete manifolds $Y$ with $q\le \dim Y-2\dim X+1$. A result in a similar vein, concerning proper holomorphic embeddings of open Riemann surfaces into ${\mathbb{C}}^2$ up to a deformation of their conformal structures, is due to Alarcón and L[ó]{}pez [@AlarconLopez2013] (a special case was proved in [@CerneForstneric2002]); see also Ritter [@Ritter2014] for embeddings into $({\mathbb{C}}^*)^2$. I have not included any topics from Cauchy-Riemann geometry since it would be impossible to properly discuss this major subject in the present survey of limited size and with a rather different focus. The reader may wish to consult the recent survey by Pinchuk et al. [@Pinchuk2017], the monograph by Baouendi et al. [@Baouendi1999] from 1999, and my survey [@Forstneric1993MN] from 1993. For a new direction in this field, see the papers by Bracci and Gaussier [@BracciGaussier2016X; @BracciGaussier2017X]. We shall be using the following notation and terminology. Let ${\mathbb{N}}=\{1,2,3,\ldots\}$. We denote by ${\mathbb D}=\{z\in {\mathbb{C}}:|z|<1\}$ the unit disc in ${\mathbb{C}}$, by ${\mathbb D}^n\subset{\mathbb{C}}^n$ the Cartesian product of $n$ copies of ${\mathbb D}$ (the unit polydisc in ${\mathbb{C}}^n$), and by ${\mathbb{B}}^n=\{z=(z_1,\ldots,z_n)\in{\mathbb{C}}^n : |z|^2 = |z_1|^2+\cdots +|z_n|^2<1\}$ the unit ball in ${\mathbb{C}}^n$. By ${\mathcal{O}}(X)$ we denote the algebra of all holomorphic functions on a complex manifold $X$, and by ${\mathcal{O}}(X,Y)$ the space of all holomorphic maps $X\to Y$ between a pair of complex manifolds; thus ${\mathcal{O}}(X)={\mathcal{O}}(X,{\mathbb{C}})$. These spaces carry the compact-open topology. This topology can be defined by a complete metric which renders them Baire spaces; in particular, ${\mathcal{O}}(X)$ is a Fréchet algebra. (See [@Forstneric2017E p. 5] for more details.) 
A compact set $K$ in a complex manifold $X$ is said to be [*${\mathcal{O}}(X)$-convex*]{} if $K={\widehat}K:= \{p\in X : |f(p)|\le \sup_K |f|\ \text{for every} \ f\in {\mathcal{O}}(X)\}$. Embeddings and immersions of Stein manifolds into Euclidean spaces {#sec:Euclidean} ================================================================== In this section we survey results on proper holomorphic immersions and embeddings of Stein manifolds into Euclidean spaces. Classical results {#ss:classical} ----------------- We begin by recalling the results of Remmert [@Remmert1956], Bishop [@Bishop1961AJM], and Narasimhan [@Narasimhan1960AJM] from the period 1956–1961. \[th:classical\] Assume that $X$ is a Stein manifold of dimension $n$. - If $N>2n$ then the set of proper embeddings $X{\hookrightarrow}{\mathbb{C}}^N$ is dense in ${\mathcal{O}}(X,{\mathbb{C}}^N)$. - If $N\ge 2n$ then the set of proper immersions $X{\hookrightarrow}{\mathbb{C}}^N$ is dense in ${\mathcal{O}}(X,{\mathbb{C}}^N)$. - If $N>n$ then the set of proper maps $X\to {\mathbb{C}}^N$ is dense in ${\mathcal{O}}(X,{\mathbb{C}}^N)$. - If $N\ge n$ then the set of almost proper maps $X\to {\mathbb{C}}^N$ is residual in ${\mathcal{O}}(X,{\mathbb{C}}^N)$. A proof of these results can also be found in the monograph by Gunning and Rossi [@GunningRossi2009]. Recall that a set in a Baire space (such as ${\mathcal{O}}(X,{\mathbb{C}}^N)$) is said to be [*residual*]{}, or a set [*of second category*]{}, if it is the intersection of at most countably many open everywhere dense sets. Every residual set is dense. A property of elements in a Baire space is said to be [*generic*]{} if it holds for all elements in a residual set. The density statement for embeddings and immersions is an easy consequence of the following result which follows from the jet transversality theorem for holomorphic maps. (See Forster [@Forster1970] for maps to Euclidean spaces and Kaliman and Zaidenberg [@KalimanZaidenberg1996TAMS] for the general case. 
A more complete discussion of this topic can be found in [@Forstneric2017E §8.8].) Note also that maps which are immersions or embeddings on a given compact set constitute an open set in the corresponding mapping space. \[prop:generic\] Assume that $X$ is a Stein manifold, $K$ is a compact set in $X$, and $U\Subset X$ is an open relatively compact set containing $K$. If $Y$ is a complex manifold such that $\dim Y>2\dim X$, then every holomorphic map $f\colon X\to Y$ can be approximated uniformly on $K$ by holomorphic embeddings $U{\hookrightarrow}Y$. If $2\dim X \le \dim Y$ then $f$ can be approximated by holomorphic immersions $U\to Y$. Proposition \[prop:generic\] fails in general without shrinking the domain of the map, for otherwise it would yield nonconstant holomorphic maps of ${\mathbb{C}}$ to any complex manifold of dimension $>1$ which is clearly false. On the other hand, it holds without shrinking the domain of the map if the target manifold $Y$ satisfies a suitable holomorphic flexibility property, in particular, if it is an [*Oka manifold*]{}. See [@Forstneric2017E Chap. 5] for the definition of this class of complex manifolds and [@Forstneric2017E Corollary 8.8.7] for the mentioned result. In the proof of Theorem \[th:classical\], parts (a)–(c), we exhaust $X$ by a sequence $K_1\subset K_2\subset \cdots$ of compact ${\mathcal{O}}(X)$-convex sets and approximate the holomorphic map $f_j\colon X\to{\mathbb{C}}^N$ in the inductive step, uniformly on $K_j$, by a holomorphic map $f_{j+1}\colon X\to{\mathbb{C}}^N$ whose norm $|f_{j+1}|$ is not too small on $K_{j+1}\setminus K_j$ and such that $|f_{j+1}(x)|>1+\sup_{K_j} |f_j|$ holds for all $x\in bK_{j+1}$. If the approximation is close enough at every step then the sequence $f_j$ converges to a proper holomorphic map $f=\lim_{j\to\infty} f_j\colon X\to{\mathbb{C}}^N$. 
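The limiting argument in the proof sketched above can be made quantitative. With illustrative constants (ours, not taken from the cited sources), suppose each step of the induction achieves

```latex
\[
  \sup_{K_i}|f_{i+1}-f_i|<2^{-i}
  \qquad\text{and}\qquad
  |f_{i+1}(x)|>i \ \ \text{for } x\in K_{i+1}\setminus K_i
  \qquad (i\in\mathbb{N}).
\]
Then $f=\lim_{i\to\infty}f_i$ exists uniformly on compacts, and for every
$x\in K_{j+1}\setminus K_j$ we have
\[
  |f(x)|\;\ge\;|f_{j+1}(x)|-\sum_{i\ge j+1}2^{-i}\;>\;j-1 ,
\]
so $|f(x)|\to\infty$ as $x$ leaves every compact subset of $X$, i.e., $f$ is
proper.
```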
If $N>2n$ then every map $f_j$ in the sequence can be made an embedding on $K_{j}$ (an immersion if $N\ge 2n$) by Proposition \[prop:generic\], and hence the limit map $f$ is also such. A more efficient way of constructing proper maps, immersions and embeddings of Stein manifolds into Euclidean space was introduced by Bishop [@Bishop1961AJM]. He showed that any holomorphic map $X\to{\mathbb{C}}^n$ from an $n$-dimensional Stein manifold $X$ can be approximated uniformly on compacts by [*almost proper*]{} holomorphic maps $h\colon X\to {\mathbb{C}}^n$; see Theorem \[th:classical\](d). More precisely, there is an increasing sequence $P_1\subset P_2\subset \cdots\subset X$ of relatively compact open sets exhausting $X$ such that every $P_j$ is a union of finitely many special analytic polyhedra and $h$ maps $P_j$ properly onto a polydisc $a_j {\mathbb D}^n\subset {\mathbb{C}}^n$, where $0<a_1<a_2<\ldots$ and $\lim_{j\to\infty} a_j=+\infty$. We then obtain a proper map $(h,g)\colon X\to {\mathbb{C}}^{n+1}$ by choosing $g\in {\mathcal{O}}(X)$ such that for every $j\in{\mathbb{N}}$ we have $g>j$ on the compact set $L_j=\{x\in \overline P_{j+1}\setminus P_j : |h(x)|\le a_{j-1}\}$; since $\overline P_{j-1}\cup L_j$ is ${\mathcal{O}}(X)$-convex, this is possible by inductively using the Oka-Weil theorem. One can then find proper immersions and embeddings by adding a suitable number of additional components to $(h,g)$ (any such map is clearly proper) and using Proposition \[prop:generic\] and the Oka-Weil theorem inductively.
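Properness of $(h,g)$ can be seen shell by shell (our paraphrase of the argument, writing the lower bound on $g$ as a bound on $|g|$):

```latex
For $x\in\overline P_{j+1}\setminus P_j$, either $|h(x)|>a_{j-1}$ or
$x\in L_j$, in which case $|g(x)|>j$. Hence
\[
  \max\bigl(|h(x)|,|g(x)|\bigr)\;>\;\min(a_{j-1},\,j)
  \qquad\text{on } \overline P_{j+1}\setminus P_j .
\]
Since the shell index $j$ tends to $\infty$ as $x$ leaves every compact
subset of $X$, and $a_{j-1}\to\infty$, the map $(h,g)$ is proper.
```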
\[th:ABT\] [[@Acquistapace1975 Theorem 1]]{} Assume that $X$ is an $n$-dimensional Stein manifold, $X'$ is a closed complex subvariety of $X$, and $\phi\colon X'{\hookrightarrow}{\mathbb{C}}^N$ is a proper holomorphic embedding for some $N > 2n$. Then the set of all proper holomorphic embeddings $X{\hookrightarrow}{\mathbb{C}}^N$ that extend $\phi$ is dense in the space of all holomorphic maps $X \to{\mathbb{C}}^N$ extending $\phi$. The analogous result holds for proper holomorphic immersions $X\to {\mathbb{C}}^N$ when $N\ge 2n$. This interpolation theorem fails when $N<2n$. Indeed, for every $n>1$ there exists a proper holomorphic embedding $\phi \colon {\mathbb{C}}^{n-1} {\hookrightarrow}{\mathbb{C}}^{2n-1}$ such that ${\mathbb{C}}^{2n-1}\setminus \phi({\mathbb{C}}^{n-1})$ is Eisenman $n$-hyperbolic, so $\phi$ does not extend to an injective holomorphic map $f\colon {\mathbb{C}}^n\to {\mathbb{C}}^{2n-1}$ (see [@Forstneric2017E Proposition 9.5.6]; this topic is discussed in §\[ss:wild\]). The answer to the interpolation problem for embeddings seems unknown in the borderline case $N=2n$. Embeddings and immersions into spaces of minimal dimension {#ss:minimal} ---------------------------------------------------------- After Theorem \[th:classical\] was proved in the early 1960’s, one of the main questions driving this theory during the next decades was to find the smallest number $N=N(n)$ such that every Stein manifold $X$ of dimension $n$ embeds or immerses properly holomorphically into ${\mathbb{C}}^N$. The belief that a Stein manifold of complex dimension $n$ admits proper holomorphic embeddings to Euclidean spaces of dimension smaller than $2n+1$ was based on the observation that such a manifold is homotopically equivalent to a CW complex of dimension at most $n$; this follows from Morse theory (see Milnor [@Milnor1963]) and the existence of strongly plurisubharmonic Morse exhaustion functions on $X$ (see Hamm [@Hamm1983] and [@Forstneric2017E §3.12]). 
This problem, which was investigated by Forster [@Forster1970], Eliashberg and Gromov [@EliashbergGromov1971] and others, gave rise to major new methods in Stein geometry. Except in the case $n=1$ when $X$ is an open Riemann surface, the following optimal answer was given by Eliashberg and Gromov [@EliashbergGromov1992AM] in 1992, with an improvement by one for odd values of $n$ due to Sch[ü]{}rmann [@Schurmann1997]. \[th:EGS\] [*[@EliashbergGromov1992AM; @Schurmann1997]*]{} Every Stein manifold $X$ of dimension $n$ immerses properly holomorphically into ${\mathbb{C}}^M$ with $M = \left[\frac{3n+1}{2}\right]$, and if $n>1$ then $X$ embeds properly holomorphically into ${\mathbb{C}}^N$ with $N = \left[\frac{3n}{2}\right]+ 1$. Schürmann [@Schurmann1997] also found optimal embedding dimensions for Stein spaces with singularities and with bounded embedding dimension. The key ingredient in the proof of Theorem \[th:EGS\] is a certain major extension of the Oka-Grauert theory, due to Gromov, whose 1989 paper [@Gromov1989] marks the beginning of [*modern Oka theory*]{}. (See [@ForstnericLarusson2011] for an introduction to Oka theory and [@Forstneric2017E] for a complete account.) Forster showed in [@Forster1970 Proposition 3] that the embedding dimension $N=\left[\frac{3n}{2}\right]+ 1$ is the minimal possible for every $n>1$, and the immersion dimension $M=\left[\frac{3n+1}{2}\right]$ is minimal for every even $n$, while for odd $n$ there could be two possible values. (See also [@Forstneric2017E Proposition 9.3.3].) In 2012, Ho et al. [@HoJacobowitzLandweber2012] found new examples showing that these dimensions are optimal already for Grauert tubes around compact totally real submanifolds, except perhaps for immersions with odd $n$. A more complete discussion of this topic and a self-contained proof of Theorem \[th:EGS\] can be found in [@Forstneric2017E Sects. 9.2–9.5]. Here we only give a brief outline of the main ideas used in the proof.
One begins by choosing a sufficiently generic almost proper map $h\colon X\to {\mathbb{C}}^{n}$ (see Theorem \[th:classical\](d)) and then tries to find the smallest possible number of functions $g_1,\ldots, g_q\in {\mathcal{O}}(X)$ such that the map $$\label{eq:f} f=(h,g_1,\ldots,g_q)\colon X\to {\mathbb{C}}^{n+q}$$ is a proper embedding or immersion. Starting with a large number of functions $\tilde g_1,\ldots, \tilde g_{\tilde q}\in {\mathcal{O}}(X)$ which do the job, we try to reduce their number by applying a suitable fibrewise linear projection onto a smaller dimensional subspace, where the projection depends holomorphically on the base point. Explicitly, we look for functions $$\label{eq:ajk} g_j=\sum_{k=1}^{\tilde q} a_{j,k}\tilde g_k, \qquad a_{j,k}\in {\mathcal{O}}(X),\ \ j=1,\ldots,q$$ such that the resulting map $f$ is a proper embedding or immersion. In order to separate those pairs of points in $X$ which are not separated by the base map $h\colon X\to {\mathbb{C}}^{n}$, we consider coefficient functions of the form $a_{j,k}=b_{j,k}\circ h$ where $b_{j,k}\in{\mathcal{O}}({\mathbb{C}}^n)$ and $h\colon X\to{\mathbb{C}}^n$ is the chosen base almost proper map. This outline cannot be applied directly since the base map $h\colon X\to{\mathbb{C}}^n$ may have too complicated a behavior. Instead, one proceeds by induction on strata in a suitably chosen complex analytic stratification of $X$ which is equisingular with respect to $h$. The induction steps are of two kinds. In a step of the first kind we find a map $g=(g_1,\ldots,g_q)$ which separates points on the (finite) fibres of $h$ over the next bigger stratum and matches the map from the previous step on the union of the previous strata (the latter is a closed complex subvariety of $X$). A step of the second kind amounts to removing the kernel of the differential $dh_x$ for all points in the next stratum, thereby ensuring that $df_x=dh_x\oplus dg_x$ is injective there.
Analysis of the immersion condition shows that the graph of the map $\alpha=(a_{j,k}) \colon X\to {\mathbb{C}}^{q{\tilde q}}$ over the given stratum must avoid a certain complex subvariety $\Sigma$ of $E=X\times {\mathbb{C}}^{q{\tilde q}}$ with algebraic fibres. Similarly, analysis of the point separation condition leads to the problem of finding a map $\beta=(b_{j,k})\colon {\mathbb{C}}^n\to {\mathbb{C}}^{q{\tilde q}}$ avoiding a certain complex subvariety of $E={\mathbb{C}}^n\times {\mathbb{C}}^{q{\tilde q}}$ with algebraic fibres. In both cases the projection $\pi\colon E\setminus \Sigma\to X$ is a stratified holomorphic fibre bundle all of whose fibres are Oka manifolds. More precisely, if $q\ge \left[\frac{n}{2}\right]+1$ then each fibre $\Sigma_x= \Sigma\cap\, E_x$ is either empty or a union of finitely many affine linear subspaces of $E_x$ of complex codimension $>1$. The same lower bound on $q$ guarantees the existence of a continuous section $\alpha\colon X\to E\setminus \Sigma$. Gromov’s Oka principle [@Gromov1989] then furnishes a holomorphic section $X\to E\setminus \Sigma$. A general Oka principle for sections of stratified holomorphic fibre bundles with Oka fibres is given by [@Forstneric2017E Theorem 5.4.4]. We refer the reader to the original papers or to [@Forstneric2017E §9.3–9.4] for further details. The classical constructions of proper holomorphic embeddings of Stein manifolds into Euclidean spaces are coordinate-dependent and hence do not generalize to more general target manifolds. A conceptually new method has been found recently by Ritter and the author [@ForstnericRitter2014MZ]. It is based on a method of separating certain pairs of compact polynomially convex sets in ${\mathbb{C}}^N$ by Fatou-Bieberbach domains which contain one of the sets and avoid the other one.
Another recently developed method, which also depends on holomorphic automorphisms and applies to a much bigger class of target manifolds, is discussed in §\[sec:density\]. Embedding open Riemann surfaces into ${\mathbb{C}}^2$ {#ss:RS} ----------------------------------------------------- The constructions described so far fail to embed open Riemann surfaces into ${\mathbb{C}}^2$. The problem is that the subvarieties $\Sigma$ in the proof of Theorem \[th:EGS\] may contain hypersurfaces, and hence the Oka principle for sections of $E\setminus \Sigma\to X$ fails in general due to hyperbolicity of its complement. It is still an open problem whether every open Riemann surface embeds as a smooth closed complex curve in ${\mathbb{C}}^2$. (By Theorem \[th:classical\] it embeds properly holomorphically into ${\mathbb{C}}^3$ and immerses with normal crossings into ${\mathbb{C}}^2$. Every compact Riemann surface embeds holomorphically into ${\mathbb{CP}}^3$ and immerses into ${\mathbb{CP}}^2$, but very few of them embed into ${\mathbb{CP}}^2$; see [@GriffithsHarris1994].) There are no topological obstructions to this problem — it was shown by Alarcón and L[ó]{}pez [@AlarconLopez2013] that every open orientable surface $S$ carries a complex structure $J$ such that the Riemann surface $X=(S,J)$ admits a proper holomorphic embedding into ${\mathbb{C}}^2$. There is a variety of results in the literature concerning the existence of proper holomorphic embeddings of certain special open Riemann surfaces into ${\mathbb{C}}^2$; the reader may wish to consult the survey in [@Forstneric2017E §9.10–9.11]. Here we mention only a few of the currently most general known results on the subject. The first one from 2009, due to Wold and the author, concerns bordered Riemann surfaces. \[th:FW1\] [[@ForstnericWold2009 Corollary 1.2]]{} Assume that $X$ is a compact bordered Riemann surface with boundary of class ${\mathscr{C}}^r$ for some $r>1$. 
If $f \colon X {\hookrightarrow}{\mathbb{C}}^2$ is a ${\mathscr{C}}^1$ embedding that is holomorphic in the interior $\mathring X=X\setminus bX$, then $f$ can be approximated uniformly on compacts in $\mathring X$ by proper holomorphic embeddings $\mathring X{\hookrightarrow}{\mathbb{C}}^2$. The proof relies on techniques introduced mainly by Wold [@Wold2006; @Wold2006-2; @Wold2007]. One of them concerns exposing boundary points of an embedded bordered Riemann surface in ${\mathbb{C}}^2$. This technique was improved in [@ForstnericWold2009]; see also the exposition in [@Forstneric2017E §9.9]. The second one depends on methods of Andersén-Lempert theory concerning holomorphic automorphisms of Euclidean spaces; see §\[ss:wild\]. A proper holomorphic embedding $\mathring X {\hookrightarrow}{\mathbb{C}}^2$ is obtained by first exposing a boundary point in each of the boundary curves of $f(X)\subset {\mathbb{C}}^2$, sending these points to infinity by a rational shear on ${\mathbb{C}}^2$ without other poles on $f(X)$, and then using a carefully constructed sequence of holomorphic automorphisms of ${\mathbb{C}}^2$ whose domain of convergence is a Fatou-Bieberbach domain $\Omega\subset {\mathbb{C}}^2$ which contains the embedded complex curve $f(X)\subset {\mathbb{C}}^2$, but does not contain any of its boundary points. If $\phi\colon \Omega\to {\mathbb{C}}^2$ is a Fatou-Bieberbach map then $\phi\circ f \colon X{\hookrightarrow}{\mathbb{C}}^2$ is a proper holomorphic embedding. A complete exposition of this proof can also be found in [@Forstneric2017E §9.10]. The second result due to Wold and the author [@ForstnericWold2013] (2013) concerns domains with infinitely many boundary components. A domain $X$ in the Riemann sphere $\mathbb P^1$ is a *generalized circled domain* if every connected component of ${\mathbb{P}}^1 \setminus X$ is a round disc or a point. Note that ${\mathbb{P}}^1 \setminus X$ contains at most countably many discs. 
By the uniformization theorem of He and Schramm [@HeSchramm1993; @HeSchramm1995], every domain in $\mathbb P^1$ with at most countably many complementary components is conformally equivalent to a generalized circled domain. \[th:FW2\] [[@ForstnericWold2013 Theorem 5.1]]{} Let $X$ be a generalized circled domain in ${\mathbb{P}}^1$. If all but finitely many punctures in $\mathbb P^1\setminus X$ are limit points of discs in $\mathbb P^1\setminus X$, then $X$ embeds properly holomorphically into $\mathbb C^2$. The paper [@ForstnericWold2013] contains several other more precise results on this subject. The special case of Theorem \[th:FW2\] for plane domains $X\subset {\mathbb{C}}$ bounded by finitely many Jordan curves (and without punctures) is due to Globevnik and Stens[ø]{}nes [@GlobevnikStensones1995]. Results on embedding certain Riemann surfaces with countably many boundary components into ${\mathbb{C}}^2$ were also proved by Majcen [@Majcen2009]; an exposition can be found in [@Forstneric2017E §9.11]. The proof of Theorem \[th:FW2\] relies on similar techniques as that of Theorem \[th:FW1\], but it uses a considerably more involved induction scheme for dealing with infinitely many boundary components, clustering them together into suitable subsets to which the available analytic methods can be applied. The same technique gives the analogous result for domains in tori. There are a few other recent results concerning embeddings of open Riemann surfaces into ${\mathbb{C}}\times {\mathbb{C}}^*$ and $({\mathbb{C}}^*)^2$, where ${\mathbb{C}}^*={\mathbb{C}}\setminus \{0\}$. Ritter showed in [@Ritter2013JGEA] that, for every circular domain $X\subset {\mathbb D}$ with finitely many boundary components, each homotopy class of continuous maps $X\to {\mathbb{C}}\times {\mathbb{C}}^*$ contains a proper holomorphic map. 
If ${\mathbb D}\setminus X$ contains finitely many punctures, then every continuous map $X\to {\mathbb{C}}\times {\mathbb{C}}^*$ is homotopic to a proper holomorphic immersion that identifies at most finitely many pairs of points in $X$ (L[á]{}russon and Ritter [@LarussonRitter2014]). Ritter [@Ritter2014] also gave an analogue of Theorem \[th:FW1\] for proper holomorphic embeddings of certain open Riemann surfaces into $({\mathbb{C}}^*)^2$. Automorphisms of Euclidean spaces and wild embeddings {#ss:wild} ----------------------------------------------------- There is another line of investigation that we wish to touch upon. It concerns the question of how many proper holomorphic embeddings $X{\hookrightarrow}{\mathbb{C}}^N$ of a given Stein manifold $X$ there are up to automorphisms of ${\mathbb{C}}^N$, and possibly also of $X$. This question was motivated by certain famous results from algebraic geometry, such as the one of Abhyankar and Moh [@AbhyankarMoh1975] and Suzuki [@Suzuki1974] to the effect that every polynomial embedding ${\mathbb{C}}{\hookrightarrow}{\mathbb{C}}^2$ is equivalent to the linear embedding $z\mapsto (z,0)$ by a polynomial automorphism of ${\mathbb{C}}^2$. It is a basic fact that for any $N>1$ the holomorphic automorphism group ${\mathrm{Aut}}({\mathbb{C}}^N)$ is very big and complicated. This is in stark contrast to the situation for bounded or, more generally, hyperbolic domains in ${\mathbb{C}}^N$, which have few automorphisms; see Greene et al. [@Greene2011] for a survey of the latter topic. Major early work on understanding the group ${\mathrm{Aut}}({\mathbb{C}}^N)$ was done by Rosay and Rudin [@RosayRudin1988]. This theory became very useful with the papers of Anders[é]{}n and Lempert [@AndersenLempert1992] and Rosay and the author [@ForstnericRosay1993] in 1992–93.
The central result is that every map in a smooth isotopy of biholomorphic mappings $\Phi_t\colon \Omega=\Omega_0 \to \Omega_t$ $(t\in [0,1])$ between Runge domains in ${\mathbb{C}}^N$, with $\Phi_0$ the identity on $\Omega$, can be approximated uniformly on compacts in $\Omega$ by holomorphic automorphisms of ${\mathbb{C}}^N$ (see [@ForstnericRosay1993 Theorem 1.1] or [@Forstneric2017E Theorem 4.9.2]). The analogous result holds on any Stein manifold with the density property; see §\[sec:density\]. A comprehensive survey of this subject can be found in [@Forstneric2017E Chap. 4]. By twisting a given submanifold of ${\mathbb{C}}^N$ with a sequence of holomorphic automorphisms, one can show that for any pair of integers $1\le n<N$ the set of all equivalence classes of proper holomorphic embeddings ${\mathbb{C}}^n{\hookrightarrow}{\mathbb{C}}^N$, modulo automorphisms of both spaces, is uncountable (see [@DerksenKutzschebauchWinkelmann1999]). In particular, the Abhyankar-Moh theorem fails in the holomorphic category since there exist proper holomorphic embeddings $\phi \colon {\mathbb{C}}{\hookrightarrow}{\mathbb{C}}^2$ that are nonstraightenable by automorphisms of ${\mathbb{C}}^2$ [@ForstnericGlobevnikRosay1996], as well as embeddings whose complement ${\mathbb{C}}^2\setminus \phi({\mathbb{C}})$ is Kobayashi hyperbolic [@BuzzardFornaess1996]. More generally, for any pair of integers $1\le n<N$ there exists a proper holomorphic embedding $\phi\colon {\mathbb{C}}^n{\hookrightarrow}{\mathbb{C}}^N$ such that every nondegenerate holomorphic map ${\mathbb{C}}^{N-n}\to {\mathbb{C}}^N$ intersects $\phi({\mathbb{C}}^n)$ at infinitely many points [@Forstneric1999JGA]. It is also possible to arrange that ${\mathbb{C}}^N\setminus \phi({\mathbb{C}}^n)$ is Eisenman $(N-n)$-hyperbolic [@BorellKutzschebauch2006]. A more comprehensive discussion of this subject can be found in [@Forstneric2017E §4.18]. 
By using nonlinearizable proper holomorphic embeddings ${\mathbb{C}}{\hookrightarrow}{\mathbb{C}}^2$, Derksen and Kutzschebauch gave the first known examples of nonlinearizable periodic automorphisms of ${\mathbb{C}}^n$ [@DerksenKutzschebauch1998]. For instance, there is a nonlinearizable holomorphic involution on ${\mathbb{C}}^4$. In another direction, Baader et al. [@Baaderall2010] constructed an example of a properly embedded disc in ${\mathbb{C}}^2$ whose image is topologically knotted, thereby answering a question of Kirby. It is unknown whether there exists a knotted proper holomorphic embedding ${\mathbb{C}}{\hookrightarrow}{\mathbb{C}}^2$, or an unknotted proper holomorphic embedding ${\mathbb D}{\hookrightarrow}{\mathbb{C}}^2$ of the disc. Automorphisms of ${\mathbb{C}}^2$ and ${\mathbb{C}}^*\times {\mathbb{C}}$ were used in a very clever way by Wold in his landmark construction of non-Runge Fatou-Bieberbach domains in ${\mathbb{C}}^2$ [@Wold2008] and of non-Stein long ${\mathbb{C}}^2$’s [@Wold2010]. Each of these results solved a long-standing open problem. More recently, Wold’s construction was developed further by Boc Thaler and the author [@BocThalerForstneric2016], who showed that there is a continuum of pairwise nonequivalent long ${\mathbb{C}}^n$’s for any $n>1$ which do not admit any nonconstant holomorphic or plurisubharmonic functions. (See also [@Forstneric2017E §4.21].) Embeddings into Stein manifolds with the density property {#sec:density} ========================================================= Universal Stein manifolds {#ss:universal} ------------------------- It is natural to ask which Stein manifolds, besides the Euclidean spaces, contain all Stein manifolds of suitably low dimension as closed complex submanifolds. To facilitate the discussion, we introduce the following notions. \[def:universal\] Let $Y$ be a Stein manifold. 1.
$Y$ is [*universal for proper holomorphic embeddings*]{} if every Stein manifold $X$ with $2\dim X<\dim Y$ admits a proper holomorphic embedding $X{\hookrightarrow}Y$. 2. $Y$ is [*strongly universal for proper holomorphic embeddings*]{} if, under the assumptions in (1), every continuous map $f_0\colon X\to Y$ which is holomorphic in a neighborhood of a compact ${\mathcal{O}}(X)$-convex set $K\subset X$ is homotopic to a proper holomorphic embedding $f_1\colon X{\hookrightarrow}Y$ by a homotopy $f_t\colon X\to Y$ $(t\in[0,1])$ such that $f_t$ is holomorphic and arbitrarily close to $f_0$ on $K$ for every $t\in [0,1]$. 3. $Y$ is (strongly) [*universal for proper holomorphic immersions*]{} if condition (1) (resp. (2)) holds for proper holomorphic immersions $X\to Y$ from any Stein manifold $X$ satisfying $2\dim X\le \dim Y$. In the terminology of Oka theory (cf. [@Forstneric2017E Chap. 5]), a complex manifold $Y$ is (strongly) universal for proper holomorphic embeddings if it satisfies the basic Oka property (with approximation) for proper holomorphic embeddings $X\to Y$ from Stein manifolds of dimension $2\dim X<\dim Y$. The dimension hypotheses in the above definition are justified by Proposition \[prop:generic\]. The main goal is to find good sufficient conditions for a Stein manifold to be universal. If a manifold $Y$ is Brody hyperbolic [@Brody1978] (i.e., it does not admit any nonconstant holomorphic images of ${\mathbb{C}}$) then clearly no complex manifold containing a nontrivial holomorphic image of ${\mathbb{C}}$ can be embedded into $Y$. In order to get positive results, one must therefore assume that $Y$ enjoys a suitable holomorphic flexibility (anti-hyperbolicity) property. \[prob:Oka\] Is every Stein Oka manifold (strongly) universal for proper holomorphic embeddings and immersions? Recall (see e.g. [@Forstneric2017E Theorem 5.5.1]) that every Oka manifold is strongly universal for not necessarily proper holomorphic maps, embeddings and immersions.
Indeed, the cited theorem asserts that a generic holomorphic map $X\to Y$ from a Stein manifold $X$ into an Oka manifold $Y$ is an immersion if $\dim Y\ge 2\dim X$, and is an injective immersion if $\dim Y > 2\dim X$. However, the Oka condition does not imply universality for [*proper*]{} holomorphic maps since there are examples of (compact or noncompact) Oka manifolds without any closed complex subvarieties of positive dimension (see [@Forstneric2017E Example 9.8.3]). Manifolds with the (volume) density property -------------------------------------------- The following condition was introduced in 2000 by Varolin [@Varolin2000; @Varolin2001]. \[def:density\] A complex manifold $Y$ enjoys the (holomorphic) [*density property*]{} if the Lie algebra generated by the ${\mathbb{C}}$-complete holomorphic vector fields on $Y$ is dense in the Lie algebra of all holomorphic vector fields in the compact-open topology. A complex manifold $Y$ endowed with a holomorphic volume form $\omega$ enjoys the [*volume density property*]{} if the analogous density condition holds in the Lie algebra of all holomorphic vector fields on $Y$ with vanishing $\omega$-divergence. The algebraic density and volume density properties were introduced by Kaliman and Kutzschebauch [@KalimanKutzschebauch2010IM]. The class of Stein manifolds with the (volume) density property includes most complex Lie groups and homogeneous spaces, as well as many nonhomogeneous manifolds. We refer to [@Forstneric2017E §4.10] for a more complete discussion and an up-to-date collection of references on this subject. Another recent survey is the paper by Kaliman and Kutzschebauch [@KalimanKutzschebauch2015]. Every complex manifold with the density property is an Oka manifold, and a Stein manifold with the density property is elliptic in the sense of Gromov (see [@Forstneric2017E Proposition 5.6.23]). 
It is an open problem whether a contractible Stein manifold with the density property is biholomorphic to a complex Euclidean space. The following result is due to Andrist and Wold [@AndristWold2014] in the special case when $X$ is an open Riemann surface, to Andrist et al. [@AndristFRW2016 Theorems 1.1, 1.2] for embeddings, and to the author [@Forstneric-immersions Theorem 1.1] for immersions in the double dimension. \[th:density\] [[@AndristFRW2016; @AndristWold2014; @Forstneric-immersions]]{} Every Stein manifold with the density or the volume density property is strongly universal for proper holomorphic embeddings and immersions. To prove Theorem \[th:density\], one follows the scheme of proof of the Oka principle for maps from Stein manifolds to Oka manifolds (see [@Forstneric2017E Chapter 5]), but with a crucial addition which we now briefly describe. Assume that $D\Subset X$ is a relatively compact strongly pseudoconvex domain with smooth boundary and $f\colon \overline D{\hookrightarrow}Y$ is a holomorphic embedding such that $f(bD) \subset Y\setminus L$, where $L$ is a given compact ${\mathcal{O}}(Y)$-convex set in $Y$. We wish to approximate $f$ uniformly on $\overline D$ by a holomorphic embedding $f'\colon \overline{D'}{\hookrightarrow}Y$ of a certain bigger strongly pseudoconvex domain $\overline{D'} \Subset X$ to $Y$, where $D'$ is either a union of $D$ with a small convex bump $B$ chosen such that $f(\overline {D\cap B})\subset Y\setminus L$, or a thin handlebody whose core is the union of $D$ and a suitable smoothly embedded totally real disc in $X\setminus D$. (The second case amounts to a change of topology of the domain, and it typically occurs when passing a critical point of a strongly plurisubharmonic exhaustion function on $X$.) In view of Proposition \[prop:generic\], we only need to approximate $f$ by a holomorphic map $f'\colon \overline{D'} \to Y$ since a small generic perturbation of $f'$ then yields an embedding. 
It turns out that the second case involving a handlebody easily reduces to the first one by applying a Mergelyan type approximation theorem; see [@Forstneric2017E §5.11] for this reduction. The attachment of a bump is handled by using the density property of $Y$. This property allows us to find a holomorphic map $g\colon \overline B\to Y\setminus L$ approximating $f$ as closely as desired on a neighborhood of the attaching set $\overline{B\cap D}$ and satisfying $g(\overline B)\subset Y\setminus L$. (More precisely, we use that isotopies of biholomorphic maps between pseudoconvex Runge domains in $Y$ can be approximated by holomorphic automorphisms of $Y$; see [@ForstnericRosay1993 Theorem 1.1] and also [@Forstneric2017E Theorem 4.10.5] for the version pertaining to Stein manifolds with the density property.) Assuming that $g$ is sufficiently close to $f$ on $\overline{B\cap D}$, we can glue them into a holomorphic map $f'\colon \overline{D'}\to Y$ which approximates $f$ on $\overline D$ and satisfies $f'(\overline{B})\subset Y\setminus L$. The proof is completed by an induction procedure in which every induction step is of the type described above. The inclusion $f'(\overline{B})\subset Y\setminus L$ satisfied by the next map in the induction step guarantees properness of the limit embedding $X{\hookrightarrow}Y$. Of course the sets $L\subset Y$ also increase and form an exhaustion of $Y$. The case of immersions in double dimension requires a more precise analysis. In the induction step described above, we must ensure that the immersion $f \colon \overline D\to Y$ is injective (an embedding) on the attaching set $\overline{B\cap D}$ of the bump $B$. This can be arranged by general position provided that $\overline{B\cap D}$ is very thin. 
It is shown in [@Forstneric-immersions] that it suffices to work with convex bumps such that, in suitably chosen holomorphic coordinates on a neighborhood of $\overline B$, the set $B$ is a convex polyhedron and $\overline{B\cap D}$ is a very thin neighborhood of one of its faces. This means that $\overline{B\cap D}$ is a small thickening of a $(2n-1)$-dimensional object in $X$, and hence we can easily arrange that $f$ is injective on it. The remainder of the proof proceeds exactly as before, completing our sketch of the proof of Theorem \[th:density\]. On the Schoen-Yau conjecture {#ss:S-Y} ---------------------------- The following corollary to Theorem \[th:density\] is related to a conjecture of Schoen and Yau [@SchoenYau1997] that the disc ${\mathbb D}=\{\zeta \in{\mathbb{C}}:|\zeta|<1\}$ does not admit any proper harmonic maps to ${\mathbb{R}}^2$. \[cor:harmonic\] Every Stein manifold $X$ of complex dimension $n$ admits a proper holomorphic immersion to $({\mathbb{C}}^*)^{2n}$, and a proper pluriharmonic map into ${\mathbb{R}}^{2n}$. The space $({\mathbb{C}}^*)^n$ with coordinates $z=(z_1,\ldots,z_n)$ (where $z_j\in {\mathbb{C}}^*$ for $j=1,\ldots,n$) enjoys the volume density property with respect to the volume form $$\omega= \frac{dz_1\wedge\cdots\wedge dz_n}{z_1\cdots z_n}.$$ (See Varolin [@Varolin2001] or [@Forstneric2017E Theorem 4.10.9(c)].) Hence, [@Forstneric-immersions Theorem 1.2] (the part of Theorem \[th:density\] above concerning immersions into the double dimension) furnishes a proper holomorphic immersion $f=(f_1,\ldots,f_{2n})\colon X\to ({\mathbb{C}}^*)^{2n}$. It follows that the map $$\label{eq:log} u=(u_1,\ldots,u_{2n})\colon X\to {\mathbb{R}}^{2n}\quad \text{with}\ \ u_j=\log|f_j|\ \ \text{for}\ \ j=1,\ldots, 2n$$ is a proper map of $X$ to ${\mathbb{R}}^{2n}$ whose components are pluriharmonic functions. Corollary \[cor:harmonic\] gives a counterexample to the Schoen-Yau conjecture in every dimension and for any Stein source manifold.
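For completeness, here is the standard verification of the properties of the map \eqref{eq:log} (an elementary sketch). Since each $f_j$ omits the value $0$, locally we can write $f_j=e^{g_j}$ for some holomorphic function $g_j$, so $$u_j=\log|f_j|=\operatorname{Re} g_j$$ is pluriharmonic. For properness, if $K\subset{\mathbb{R}}^{2n}$ is compact, say $K\subset[-c,c]^{2n}$, then $$u^{-1}(K)\subset f^{-1}(A), \qquad A=\bigl\{z\in({\mathbb{C}}^*)^{2n} : e^{-c}\le |z_j|\le e^{c},\ j=1,\ldots,2n\bigr\}.$$ The set $A$ is compact in $({\mathbb{C}}^*)^{2n}$, so $f^{-1}(A)$ is compact by properness of $f$, and $u^{-1}(K)$ is a closed subset of it, hence compact. Thus $u$ is proper.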
The first and very explicit counterexample was given by Bo[ž]{}in [@Bozin1999IMRN] in 1999. In 2001, Globevnik and the author [@ForstnericGlobevnik2001MRL] constructed a proper holomorphic map $f=(f_1,f_2)\colon{\mathbb D}\to{\mathbb{C}}^2$ whose image is contained in $({\mathbb{C}}^*)^2$, i.e., it avoids both coordinate axes. The associated harmonic map $u=(u_1,u_2)\colon {\mathbb D}\to{\mathbb{R}}^2$ then satisfies $\lim_{|\zeta|\to 1} \max\{u_1(\zeta),u_2(\zeta)\} = +\infty$ which implies properness. Next, Alarc[ó]{}n and L[ó]{}pez [@AlarconLopez2012JDG] showed in 2012 that every open Riemann surface $X$ admits a conformal minimal immersion $u=(u_1,u_2,u_3)\colon X\to{\mathbb{R}}^3$ with a proper (harmonic) projection $(u_1,u_2)\colon X\to {\mathbb{R}}^2$. In 2014, Andrist and Wold [@AndristWold2014 Theorem 5.6] proved Corollary \[cor:harmonic\] in the case $n=1$. Comparing Corollary \[cor:harmonic\] with the above mentioned result of Globevnik and the author [@ForstnericGlobevnik2001MRL], one is led to the following question. Let $X$ be a Stein manifold of dimension $n>1$. Does there exist a proper holomorphic immersion $f\colon X\to {\mathbb{C}}^{2n}$ such that $f(X)\subset ({\mathbb{C}}^*)^{2n}$? More generally, one can ask which type of sets in Stein manifolds can be avoided by proper holomorphic maps from Stein manifolds of sufficiently low dimension. In this direction, Drinovec Drnovšek showed in [@Drinovec2004MRL] that any closed complete pluripolar set can be avoided by proper holomorphic discs; see also Borell et al. [@Borell2008MRL] for embedded discs in ${\mathbb{C}}^n$. Note that every closed complex subvariety is a complete pluripolar set. 
Embeddings of strongly pseudoconvex Stein domains {#sec:PSC} ================================================= The Oka principle for embeddings of strongly pseudoconvex domains ----------------------------------------------------------------- What can be said about proper holomorphic embeddings and immersions of Stein manifolds $X$ into arbitrary (Stein) manifolds $Y$? If $Y$ is Brody hyperbolic [@Brody1978], then no complex manifold containing a nontrivial holomorphic image of ${\mathbb{C}}$ embeds into $Y$. However, if $\dim Y>1$ and $Y$ is Stein then $Y$ still admits proper holomorphic images of any bordered Riemann surface [@DrinovecForstneric2007DMJ; @Globevnik2000]. For domains in Euclidean spaces, this line of investigation was started in 1976 by Forn[æ]{}ss [@Fornaess1976] and continued in 1985 by L[ø]{}w [@Low1985MZ] and the author [@Forstneric1986TAMS] who proved that every bounded strongly pseudoconvex domain $X\subset {\mathbb{C}}^n$ admits a proper holomorphic embedding into a high dimensional polydisc and ball. The long line of subsequent developments culminated in the following result of Drinovec Drnovšek and the author [@DrinovecForstneric2007DMJ; @DrinovecForstneric2010AJM]. \[th:BDF2010\] [[@DrinovecForstneric2010AJM Corollary 1.2]]{} Let $X$ be a relatively compact, smoothly bounded, strongly pseudoconvex domain in a Stein manifold ${\widetilde}X$ of dimension $n$, and let $Y$ be a Stein manifold of dimension $N$. If $N>2n$ then every continuous map $f\colon \overline X \to Y$ which is holomorphic on $X$ can be approximated uniformly on compacts in $X$ by proper holomorphic embeddings $X{\hookrightarrow}Y$. If $N\ge 2n$ then the analogous result holds for immersions. The same conclusions hold if the manifold $Y$ is strongly $q$-complete for some $q\in \{1,2,\ldots, N-2n+1\}$, where the case $q=1$ corresponds to Stein manifolds. In the special case when $Y$ is a domain in a Euclidean space, this is due to Dor [@Dor1995]. 
The papers [@DrinovecForstneric2007DMJ; @DrinovecForstneric2010AJM] include several more precise results in this direction and references to numerous previous works. Note that a continuous map $f\colon \overline X \to Y$ from a compact strongly pseudoconvex domain which is holomorphic on the open domain $X$, with values in an arbitrary complex manifold $Y$, can be approximated uniformly on $\overline X$ by holomorphic maps from small open neighborhoods of $\overline X$ in the ambient manifold ${\widetilde}X$, where the neighborhood depends on the map (see [@DrinovecForstneric2008FM Theorem 1.2] or [@Forstneric2017E Theorem 8.11.4]). However, unless $Y$ is an Oka manifold, it is impossible to approximate $f$ uniformly on $\overline X$ by holomorphic maps from a fixed bigger domain $X_1\subset {\widetilde}X$ independent of the map. For this reason, it is imperative that the initial map $f$ in Theorem \[th:BDF2010\] be defined on all of $\overline X$. One of the main techniques used in the proof of Theorem \[th:BDF2010\] is the use of special holomorphic peaking functions on $X$. The second tool is the method of holomorphic sprays developed in the context of Oka theory; this is essentially a nonlinear version of the ${\overline\partial}$-method. Here is the main idea of the proof of Theorem \[th:BDF2010\]. Choose a strongly $q$-convex Morse exhaustion function $\rho\colon Y\to{\mathbb{R}}_+$. (When $q=1$, $\rho$ is strongly plurisubharmonic.) By using the mentioned tools, one can approximate any given holomorphic map $f\colon \overline X\to Y$ uniformly on compacts in $X$ by another holomorphic map $\tilde f\colon \overline X\to Y$ such that $\rho\circ \tilde f > \rho\circ f + c$ holds on $bX$ for some constant $c>0$ depending only on the geometry of $\rho$ on a given compact set $L\subset Y$ containing $f(\overline X)$. Geometrically speaking, this means that we lift the image of the boundary of $X$ in $Y$ to a higher level of the function $\rho$ by a prescribed amount. 
At the same time, we can ensure that $\rho\circ \tilde f > \rho \circ f -\delta$ on $X$ for any given $\delta>0$, and that $\tilde f$ approximates $f$ as closely as desired on a given compact ${\mathcal{O}}(X)$-convex subset $K\subset X$. By Proposition \[prop:generic\] we can ensure that our maps are embeddings. An inductive application of this technique yields a sequence of holomorphic embeddings $f_k\colon\overline X{\hookrightarrow}Y$ converging to a proper holomorphic embedding $X{\hookrightarrow}Y$. The same construction gives proper holomorphic immersions when $N\ge 2n$. On the Hodge Conjecture for $q$-complete manifolds {#ss:Hodge} -------------------------------------------------- A more precise analysis of the proof of Theorem \[th:BDF2010\] was used by Smrekar, Sukhov and the author [@FSS2016] to show the following result along the lines of the Hodge conjecture. If $Y$ is a $q$-complete complex manifold of dimension $N$ and of finite topology such that $q<N$ and the number $N+q-1=2p$ is even, then every cohomology class in $H^{N+q-1}(Y;{\mathbb{Z}})$ is Poincaré dual to an analytic cycle in $Y$ consisting of proper holomorphic images of the ball ${\mathbb{B}}^p\subset {\mathbb{C}}^p$. If the manifold $Y$ has infinite topology, the same result holds for elements of the group ${\mathscr{H}}^{N+q-1}(Y;{\mathbb{Z}}) = \lim_j H^{N+q-1}(M_j;{\mathbb{Z}})$ where $\{M_j\}_{j\in {\mathbb{N}}}$ is an exhaustion of $Y$ by compact smoothly bounded domains. Note that $H^{N+q-1}(Y;{\mathbb{Z}})$ is the highest dimensional a priori nontrivial cohomology group of a $q$-complete manifold $Y$ of dimension $N$. We do not know whether a similar result holds for lower dimensional cohomology groups of a $q$-complete manifold. In the special case when $Y$ is a Stein manifold, the situation is better understood thanks to the Oka-Grauert principle, and the reader can find appropriate references in the paper [@FSS2016]. 
Complete bounded complex submanifolds {#ss:complete} ------------------------------------- There are interesting recent constructions of properly embedded complex submanifolds $X\subset {\mathbb{B}}^N$ of the unit ball in ${\mathbb{C}}^N$ (or of pseudoconvex domains in ${\mathbb{C}}^N$) which are [*complete*]{} in the sense that every curve in $X$ terminating on the sphere $b{\mathbb{B}}^N$ has infinite length. Equivalently, the metric on $X$, induced from the Euclidean metric on ${\mathbb{C}}^N$ by the embedding $X{\hookrightarrow}{\mathbb{C}}^N$, is a complete metric. The question whether there exist complete bounded complex submanifolds in Euclidean spaces was asked by Paul Yang in 1977. The first such examples were provided by Jones [@Jones1979PAMS] in 1979. Recent results on this subject are due to Alarc[ó]{}n and the author [@AlarconForstneric2013MA], Alarc[ó]{}n and L[ópez]{} [@AlarconLopez2016], Drinovec Drnov[š]{}ek [@Drinovec2015JMAA], Globevnik [@Globevnik2015AM; @Globevnik2016JMAA; @Globevnik2016MA], and Alarc[ó]{}n et al. [@AlarconGlobevnik2017; @AlarconGlobevnikLopez2016Crelle]. In [@AlarconForstneric2013MA] it was shown that any bordered Riemann surface admits a proper complete holomorphic immersion into ${\mathbb{B}}^2$ and embedding into ${\mathbb{B}}^3$ (no change of the complex structure on the surface is necessary). In [@AlarconGlobevnik2017] the authors showed that properly embedded complete complex curves in the ball ${\mathbb{B}}^2$ can have any topology, but their method (using holomorphic automorphisms) does not allow one to control the complex structure of the examples. Drinovec Drnov[š]{}ek [@Drinovec2015JMAA] proved that every strongly pseudoconvex domain embeds as a complete complex submanifold of a high dimensional ball. 
Globevnik proved [@Globevnik2015AM; @Globevnik2016MA] that any pseudoconvex domain in ${\mathbb{C}}^N$ for $N>1$ can be foliated by complete complex hypersurfaces given as level sets of a holomorphic function, and Alarc[ó]{}n showed [@Alarcon2018] that there are nonsingular foliations of this type given as level sets of a holomorphic function without critical points. Furthermore, there is a complete proper holomorphic embedding ${\mathbb D}{\hookrightarrow}{\mathbb{B}}^2$ whose image contains any given discrete subset of ${\mathbb{B}}^2$ [@Globevnik2016JMAA], and there exist complex curves of arbitrary topology in ${\mathbb{B}}^2$ satisfying this property [@AlarconGlobevnik2017]. The constructions in these papers, except those in [@Alarcon2018; @Globevnik2015AM; @Globevnik2016MA], rely on one of the following two methods: - the Riemann-Hilbert boundary value problem (or holomorphic peaking functions in the case of higher dimensional domains considered in [@Drinovec2015JMAA]); - holomorphic automorphisms of the ambient space ${\mathbb{C}}^N$. Each of these methods can be used to increase the intrinsic boundary distance in an embedded or immersed submanifold. The first method has the advantage of preserving the complex structure, and the disadvantage of introducing self-intersections in the double dimension or below. The second method is precisely the opposite — it keeps embeddedness, but does not provide any control of the complex structure since one must cut away pieces of the image manifold to keep it suitably bounded. The first of these methods has recently been applied in the theory of minimal surfaces in ${\mathbb{R}}^n$; we refer to the papers [@AlarconDrinovecForstnericLopez2015PLMS; @AlarconDrinovecForstnericLopez2017TAMS; @AlarconForstneric2015MA] and the references therein. 
On the other hand, ambient automorphisms cannot be applied in minimal surface theory since the only self-maps of ${\mathbb{R}}^n$ $(n>2)$ mapping minimal surfaces to minimal surfaces are the rigid affine linear maps. Globevnik’s method in [@Globevnik2015AM; @Globevnik2016MA] is different from both of the above. He showed that for every integer $N>1$ there is a holomorphic function $f$ on the ball ${\mathbb{B}}^N$ whose real part $\Re f$ is unbounded on every path of finite length that ends on $b{\mathbb{B}}^N$. It follows that every level set $M_c=\{f=c\}$ is a closed complete complex hypersurface in ${\mathbb{B}}^N$, and $M_c$ is smooth for most values of $c$ in view of Sard’s lemma. The function $f$ is constructed such that its real part grows sufficiently fast on a certain labyrinth $\Lambda\subset {\mathbb{B}}^N$, consisting of pairwise disjoint closed polygonal domains in real affine hyperplanes, such that every curve in ${\mathbb{B}}^N\setminus \Lambda$ which terminates on $b{\mathbb{B}}^N$ has infinite length. The advantage of his method is that it gives an affirmative answer to Yang’s question in all dimensions and codimensions. The disadvantage is that one cannot control the topology or the complex structure of the level sets. By using instead holomorphic automorphisms in order to push a submanifold off the labyrinth $\Lambda$, Alarc[ó]{}n et al. [@AlarconGlobevnikLopez2016Crelle] succeeded in obtaining partial control of the topology of the embedded submanifold, and complete control in the case of complex curves [@AlarconGlobevnik2017]. Finally, by using the method of constructing noncritical holomorphic functions due to Forstnerič [@Forstneric2003AM], Alarc[ó]{}n [@Alarcon2018] improved Globevnik’s main result from [@Globevnik2015AM] by showing that every closed complete complex hypersurface in the ball ${\mathbb{B}}^n$ $(n>1)$ is a leaf in a nonsingular holomorphic foliation of ${\mathbb{B}}^n$ by closed complete complex hypersurfaces. 
By using the labyrinths constructed in [@AlarconGlobevnikLopez2016Crelle; @Globevnik2015AM] and methods of Andersén-Lempert theory, Alarc[ó]{}n and the author showed in [@AlarconForstneric2018PAMS] that there exists a complete injective holomorphic immersion $\mathbb{C}\to\mathbb{C}^2$ whose image is everywhere dense in $\mathbb{C}^2$ [@AlarconForstneric2018PAMS Corollary 1.2]. The analogous result holds for any closed complex submanifold $X\subsetneqq \mathbb{C}^n$ for $n>1$ (see [@AlarconForstneric2018PAMS Theorem 1.1]). Furthermore, if $X$ intersects the ball $\mathbb{B}^n$ and $K$ is a connected compact subset of $X\cap\mathbb{B}^n$, then there is a Runge domain $\Omega\subset X$ containing $K$ which admits a complete injective holomorphic immersion $\Omega\to\mathbb{B}^n$ whose image is dense in $\mathbb{B}^n$. Submanifolds with exotic boundary behaviour {#ss:exotic} ------------------------------------------- The boundary behavior of proper holomorphic maps between bounded domains with smooth boundaries in complex Euclidean spaces has been studied extensively; see the recent survey by Pinchuk et al. [@Pinchuk2017]. It is generally believed, and has been proved under a variety of additional conditions, that proper holomorphic maps between relatively compact smoothly bounded domains of the same dimension always extend smoothly up to the boundary. In dimension $1$ this is the classical theorem of Carath[é]{}odory (see [@Caratheodory1913MA] or [@Pommerenke1992 Theorem 2.7]). On the other hand, proper holomorphic maps into higher dimensional domains may have rather wild boundary behavior. For example, Globevnik [@Globevnik1987MZ] proved in 1987 that, given $n\in {\mathbb{N}}$, if $N\in{\mathbb{N}}$ is sufficiently large then there exists a continuous map $f\colon \overline {\mathbb{B}}^n \to \overline {\mathbb{B}}^N$ which is holomorphic in ${\mathbb{B}}^n$ and satisfies $f(b{\mathbb{B}}^n)=b{\mathbb{B}}^N$. 
Recently, the author [@Forstneric2017Sept] constructed a properly embedded holomorphic disc ${\mathbb D}{\hookrightarrow}{\mathbb{B}}^2$ in the $2$-ball with arbitrarily small area (hence it is the zero set of a bounded holomorphic function on $\mathbb{B}^2$ according to Berndtsson [@Berndtsson1980]) which extends holomorphically across the boundary of the disc, with the exception of one boundary point, such that its boundary curve is injectively immersed and everywhere dense in the sphere $b\mathbb{B}^2$. Examples of proper (not necessarily embedded) discs with similar behavior were found earlier by Globevnik and Stout [@GlobevnikStout1986]. The soft Oka principle for proper holomorphic embeddings {#sec:soft} ======================================================== By combining the technique in the proof of Theorem \[th:BDF2010\] with methods from the papers by Slapar and the author [@ForstnericSlapar2007MRL; @ForstnericSlapar2007MZ] one can prove the following seemingly new result. \[th:soft\] Let $(X,J)$ and $Y$ be Stein manifolds, where $J\colon TX\to TX$ denotes the complex structure operator on $X$. If $\dim Y > 2\dim X$ then for every continuous map $f\colon X\to Y$ there exists a Stein structure $J'$ on $X$, homotopic to $J$, and a proper holomorphic embedding $f'\colon (X,J'){\hookrightarrow}Y$ homotopic to $f$. If $\dim Y\ge 2\dim X$ then $f'$ can be chosen a proper holomorphic immersion having only simple double points. The same holds if the manifold $Y$ is $q$-complete for some $q\in \{1,2,\ldots, \dim Y-2\dim X+1\}$, where $q=1$ corresponds to Stein manifolds. Intuitively speaking, every Stein manifold $X$ embeds properly holomorphically into any other Stein manifold $Y$ of dimension $\dim Y >2\dim X$ up to a change of the Stein structure on $X$. The main result of [@ForstnericSlapar2007MZ] amounts to the same statement for holomorphic maps (instead of proper embeddings), but without any hypothesis on the target complex manifold $Y$. 
In order to obtain [*proper*]{} holomorphic maps $X\to Y$, we need a suitable geometric hypothesis on $Y$ in view of the examples of noncompact (even Oka) manifolds without any closed complex subvarieties (see [@Forstneric2017E Example 9.8.3]). The results from [@ForstnericSlapar2007MRL; @ForstnericSlapar2007MZ] were extended by Prezelj and Slapar [@PrezeljSlapar2011] to $1$-convex source manifolds. For Stein manifolds $X$ of complex dimension $2$, these results also stipulate a change of the underlying ${\mathscr{C}}^\infty$ structure on $X$. It was later shown by Cieliebak and Eliashberg that such change is not necessary if one begins with an integrable Stein structure; see [@CieliebakEliashberg2012 Theorem 8.43 and Remark 8.44]. For the constructions of exotic Stein structures on smooth orientable $4$-manifolds, in particular on ${\mathbb{R}}^4$, see Gompf [@Gompf1998; @Gompf2005; @Gompf2017GT]. In order to fully understand the proof, the reader should be familiar with [@ForstnericSlapar2007MZ proof of Theorem 1.1]. (Theorem 1.2 in the same paper gives an equivalent formulation where one does not change the Stein structure on $X$, but instead finds a desired holomorphic map on a Stein Runge domain $\Omega\subset X$ which is diffeotopic to $X$. An exposition is also available in [@CieliebakEliashberg2012 Theorem 8.43 and Remark 8.44] and [@Forstneric2017E §10.9].) We explain the main step in the case $\dim Y > 2\dim X$; the theorem follows by using it inductively as in [@ForstnericSlapar2007MZ]. An interested reader is invited to provide the details. 
Assume that $X_0\subset X_1$ is a pair of relatively compact, smoothly bounded, strongly pseudoconvex domains in $X$ such that there exists a strongly plurisubharmonic Morse function $\rho$ on an open set $U\supset \overline {X_1\setminus X_0}$ in $X$ satisfying $$X_0 \cap U = \{x\in U\colon \rho(x)<a\},\quad X_1 \cap U = \{x\in U \colon \rho(x)<b\},$$ for a pair of constants $a<b$ and $d\rho\ne 0$ on $bX_0\cup bX_1$. Let $L_0\subset L_1$ be a pair of compact sets in $Y$. (In the induction, $L_0$ and $L_1$ are sublevel sets of a strongly $q$-convex exhaustion function on $Y$.) Assume that $f_0\colon X\to Y$ is a continuous map whose restriction to a neighborhood of $\overline X_0$ is a $J$-holomorphic embedding satisfying $f_0(bX_0) \subset Y\setminus L_0$. The goal is to find a new Stein structure $J_1$ on $X$, homotopic to $J$ by a smooth homotopy that is fixed in a neighborhood of $\overline X_0$, such that $f_0$ can be deformed to a map $f_1\colon X\to Y$ whose restriction to a neighborhood of $\overline X_1$ is a $J_1$-holomorphic embedding which approximates $f_0$ uniformly on $\overline X_0$ as closely as desired and satisfies $$\label{eq:lifting} f_1(\overline {X_1\setminus X_0})\subset Y\setminus L_0, \qquad f_1(bX_1) \subset Y\setminus L_1.$$ An inductive application of this result proves Theorem \[th:soft\] as in [@ForstnericSlapar2007MZ]. (For the case $\dim X=2$, see [@CieliebakEliashberg2012 Theorem 8.43 and Remark 8.44].) By subdividing the problem into finitely many steps of the same kind, it suffices to consider the following two basic cases: - [*The noncritical case:*]{} $d\rho\ne 0$ on $\overline{X_1\setminus X_0}$. In this case we say that $X_1$ is a [*noncritical strongly pseudoconvex extension*]{} of $X_0$. - [*The critical case:*]{} $\rho$ has exactly one critical point $p$ in $\overline{X_1\setminus X_0}$. Let $U_0 \subset U'_0 \subset X$ be a pair of small open neighborhoods of $\overline X_0$ such that $f_0$ is an embedding on $U'_0$. 
Also, let $U_1\subset U'_1\subset X$ be small open neighborhoods of $\overline X_1$. In case (a), there exists a smooth diffeomorphism $\phi\colon X\to X$ which is diffeotopic to the identity map on $X$ by a diffeotopy which is fixed on $U_0\cup (X\setminus U'_1)$ such that $\phi(U_1)\subset U'_0$. The map $\tilde f_0=f_0\circ \phi \colon X \to Y$ is then a holomorphic embedding on the set $U_1$ with respect to the Stein structure $J_1=\phi^*(J)$ on $X$ (the pullback of $J$ by $\phi$). Applying the lifting procedure in the proof of Theorem \[th:BDF2010\] and up to shrinking $U_1$ around $\overline X_1$, we can homotopically deform $\tilde f_0$ to a continuous map $f_1\colon X\to Y$ whose restriction to $U_1$ is a $J_1$-holomorphic embedding $U_1{\hookrightarrow}Y$ satisfying conditions \[eq:lifting\]. In case (b), the change of topology of the sublevel sets of $\rho$ at the critical point $p$ is described by attaching to the strongly pseudoconvex domain $\overline X_0$ a smoothly embedded totally real disc $M\subset X_1\setminus X_0$, with $p\in M$ and $bM\subset bX_0$, whose dimension equals the Morse index of $\rho$ at $p$. As shown in [@Eliashberg1990; @CieliebakEliashberg2012; @ForstnericSlapar2007MZ], $M$ can be chosen such that $\overline X_0\cup M$ has a basis of smooth strongly pseudoconvex neighborhoods (handlebodies) $H$ which deformation retract onto $\overline X_0\cup M$ such that $X_1$ is a noncritical strongly pseudoconvex extension of $H$. Furthermore, as explained in [@ForstnericSlapar2007MZ], we can homotopically deform the map $f_0\colon X\to Y$, keeping it fixed in some neighborhood of $\overline X_0$, to a map that is holomorphic on $H$ and maps $H\setminus \overline X_0$ to $L_1\setminus L_0$. By Proposition \[prop:generic\] we can assume that the new map is a holomorphic embedding on $H$. This reduces case (b) to case (a). In the inductive construction, we alternate the application of cases (a) and (b). 
If $\dim Y\ge 2\dim X$ then the same procedure applies to immersions. A version of this construction, for embedding open Riemann surfaces into ${\mathbb{C}}^2$ or $({\mathbb{C}}^*)^2$ up to a deformation of their complex structure, can be found in the papers by Alarcón and L[ó]{}pez [@AlarconLopez2013] and Ritter [@Ritter2014]. However, they use holomorphic automorphisms in order to push the boundary curves to infinity without introducing self-intersections of the image complex curve. The technique in the proof of Theorem \[th:BDF2010\] will in general introduce self-intersections in double dimension. Acknowledgements {#acknowledgements .unnumbered} ---------------- The author is supported in part by the research program P1-0291 and grants J1-7256 and J1-9104 from ARRS, Republic of Slovenia. I wish to thank Antonio Alarc[ó]{}n and Rafael Andrist for a helpful discussion concerning Corollary \[cor:harmonic\] and the Schoen-Yau conjecture, Barbara Drinovec Drnov[š]{}ek for her remarks on the exposition, Josip Globevnik for the reference to the paper of Bo[ž]{}in [@Bozin1999IMRN], Frank Kutzschebauch for having proposed to include the material in §\[ss:wild\], and Peter Landweber for his remarks which helped me to improve the language and presentation. Franc Forstnerič Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI–1000 Ljubljana, Slovenia Institute of Mathematics, Physics and Mechanics, Jadranska 19, SI–1000 Ljubljana, Slovenia e-mail: [franc.forstneric@fmf.uni-lj.si]{}
--- abstract: 'We report the growth of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ superconducting single crystal fibers via a slow-cooling solid state reaction method. The superconducting transition temperature ($T_{c}\sim6.5$K) is confirmed from magnetization and transport measurements. A comparative study is performed for the determination of the superconducting anisotropy, $\Gamma$, via the conventional method (taking the ratio of two superconducting parameters) and the scaling approach. The scaling approach, defined within the framework of the Ginzburg-Landau theory, is applied to the angular dependent resistivity measurements to estimate the anisotropy. The value of $\Gamma$ close to $T_{c}$ from the scaling approach is found to be $\sim2.5$, slightly higher than that from the conventional approach ($\sim2.2$). Further, the variation of the anisotropy with temperature suggests that this is a multi-band superconductor.' address: '$^{\mbox{a}}$ Department of physics, Indian Institute of Technology Bombay, Mumbai-400076 India' author: - 'Anil K. Yadav$^{\mbox{a,b}}$' - 'Himanshu Sharma$^{\mbox{a}}$' - 'C. V. Tomy$^{\mbox{a}}$' - 'Ajay D. Thakur$^{\mbox{c}}$' title: 'Growth and angular dependent resistivity of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ superconducting single crystal fibers' --- introduction ============ The ternary chalcogenide Nb$_{2}$Pd$_{0.81}$Se$_{5}$, a non-superconducting compound, turns into a superconductor with superconducting transition temperature $T_{c}\sim6.5$K when Se is replaced with S [\[]{}1[\]]{}. This superconductor has attracted considerable interest from the research community because it has one of the largest upper critical fields among the known Nb-based superconductors and offers the possibility of growing long flexible superconducting fibers [\[]{}1,2[\]]{}. Structurally, this compound crystallizes in the monoclinic structure with symmetry $C2/m$ space group [\[]{}1,2,3[\]]{}. Its structure comprises laminar sheets, stacked along the b-axis, consisting of Pd, Nb and S atoms. 
Each sheet contains two unique building blocks of NbS$_{6}$ and NbS$_{7}$ units inter-linked by the Pd atoms [\[]{}1,3,4[\]]{}. Yu *et al.* have constructed the superconducting phase diagram of Nb$_{2}$Pd$_{1-x}$S$_{5\pm\delta}$ ($0.6<x<1$) single crystal fibers by varying the composition of Pd and S, and found a maximum $T_{c}\sim7.43$K for the Nb$_{2}$Pd$_{1.1}$S$_{6}$ stoichiometry [\[]{}2[\]]{}. One of the important parameters that needs to be determined precisely for this compound is the anisotropy ($\Gamma$), since the compound shows an extremely large direction-dependent upper critical field [\[]{}1[\]]{}. In the conventional approach, the anisotropy is determined as the ratio of two superconducting parameters (such as band-dependent effective masses, penetration depths, upper critical fields, etc.) in two orientations of the applied magnetic field w.r.t. the crystallographic axes [\[]{}5[\]]{}. Zhang *et al.* [\[]{}1[\]]{} have determined the temperature dependent anisotropy of this compound using the above conventional method by taking the ratio of $H_{c2}(T)$ in two orientations. However, in this case the estimation of $H_{c2}(0)$ is subject to different criteria and formalisms, which may introduce some uncertainty in the anisotropy ($\Gamma$) calculation [\[]{}6[\]]{}. Blatter *et al.* have given a simple alternative way to estimate the anisotropy of a superconductor, known as the scaling approach [\[]{}7[\]]{}. In this approach, anisotropic data can be transformed into an isotropic form by a scaling rule in which a single parameter is adjusted until all the curves collapse onto one; the adjusted parameter is the anisotropy of the superconductor. This limits the uncertainty in the determination of $\Gamma$ as compared to the conventional approach. 
Employing the scaling approach, Wen *et al.* have estimated the anisotropy of several Fe-based superconductors such as NdFeAsO$_{0.82}$F$_{0.18}$ [\[]{}8[\]]{}, Ba$_{1-x}$K$_{x}$Fe$_{2}$As$_{2}$ [\[]{}6[\]]{} and Rb$_{0.8}$Fe$_{2}$Se$_{2}$ [\[]{}9[\]]{}. Shahbazi *et al.* have also performed similar studies on Fe$_{1.04}$Se$_{0.6}$Te$_{0.4}$ [\[]{}10[\]]{} and BaFe$_{1.9}$Co$_{0.1}$As$_{2}$ [\[]{}11[\]]{} single crystals. In this paper, we report the estimation of the anisotropy of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystals near $T_{c}$ via the conventional method and the scaling approach. We also provide further evidence that the bulk superconducting anisotropy is not a universal constant, but is temperature dependent below $T_{c}$. method ====== Single crystal fibers of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ were synthesized via slow cooling of the charge in the solid state reaction method, as reported in reference [\[]{}1[\]]{}. The starting raw materials (powders) Nb (99.99%), Pd (99.99%) and S (99.999%) were taken in the stoichiometric ratio of 2:1:6 and mixed in an Ar atmosphere inside a glove box. The well-homogenized powder was sealed in a long evacuated quartz tube and heated to 800$^{\circ}$C at a rate of 10$^{\circ}$C/h. After reacting for 24 hours at this temperature, the reactants were cooled down at a rate of 2$^{\circ}$C/h to 360$^{\circ}$C, followed by cooling to room temperature by switching the furnace off. The as-grown sample looks like a mesh of small wires when viewed under an optical microscope. A part of the as-grown sample was dipped in dilute HNO$_{3}$ to remove the bulk material and to pick up a few fiber rods for further measurements. X-ray diffraction (XRD) was performed on powdered Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystal fibers for structure determination. Energy-dispersive x-ray analysis (EDAX) was used to identify the chemical elements and their composition. 
Magnetization measurements were performed using a superconducting quantum interference device - vibrating sample magnetometer (SQUID-VSM, Quantum Design Inc., USA). Angular dependent resistivity was measured using the resistivity option with a horizontal rotator in a physical property measurement system (PPMS, Quantum Design Inc., USA). Electrical connections were made in the four-probe configuration using gold wires bonded to the sample with silver epoxy. Results ======= Structure analysis ------------------ Figure \[fig1\](a) shows the scanning electron microscope (SEM) image of the Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystal fibers. It is clear from the image that the fibers grow in different shapes and lengths. Figure \[fig1\](b) shows the XRD pattern of the powdered Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystals. Rietveld refinement was performed on the powder XRD data using the $C2/m$ monoclinic crystal structure of Nb$_{2}$Pd$_{0.81}$Se$_{5}$ as reference in the FullProf suite software. The lattice parameters ($a$ = 12.154(1)A, $b$ = 3.283(7)A and $c$ = 15.09(9)A) obtained from the refinement are approximately the same as those reported earlier in references [\[]{}1,3[\]]{}, even though the intensities could not be matched perfectly. The (200) peak is found to be the one with the highest intensity even though the XRD was obtained with a bunch of fibers, indicating a preferred crystal plane orientation along the ($l$00) direction in our powdered samples. A similar preferred orientation was also reported for single crystals in reference [\[]{}2[\]]{}. This may be the reason for the discrepancy in the intensities between the observed and the fitted XRD peaks. Further, to confirm the single crystalline nature of the fibers, we have taken the selected area electron diffraction (SAED) pattern of the fibers; a typical pattern is shown in Figure \[fig1\](c). The well-ordered spot diffraction pattern confirms the single crystalline nature of the fibers. 
Figure \[fig1\](d) shows the optical image of a typical cylindrical fiber of diameter $\sim1.2\,\mbox{\ensuremath{\mu}m}$ and length $\sim1814\,\mbox{\ensuremath{\mu}m},$ which was used for the four-probe electrical resistivity measurements (Fig. \[fig1\](e) shows the gold wires and silver paste used for the electrical connections). In the EDAX analysis, all chemical elements are found to be present in the compound, with a slight variation from the starting composition. ![ \[fig1\] (Color online) (a) SEM image of a bunch of single crystal fibers of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$. (b) X-ray diffraction patterns: observed (green), calculated (red) and difference (blue). (c) SAED pattern of a single crystal fiber. (d) Optical image of a typical cylindrical wire used for the transport study. (e) Four-probe connections on a fiber.](1) Confirmation of superconducting properties ------------------------------------------ In order to confirm the occurrence of superconductivity in the prepared single crystals, magnetization measurements were performed on a bunch of fibers (a single crystal fiber alone did not give a large enough signal in magnetization). Figure \[fig2\] shows a part of the temperature dependent zero field-cooled (ZFC) and field-cooled (FC) magnetization measurements at H = 20Oe. The onset superconducting transition temperature ($T_{c}^{{\rm on;M}}$) is observed to be $\sim6.5$K, taken from the bifurcation point of the ZFC and FC curves. In order to further confirm the superconducting nature of the grown single crystal fibers, the resistivity was measured using one of the fibers removed from the ingot. We have plotted a part of the resistivity measurement (in zero applied magnetic field) in Fig. \[fig2\] along with the magnetization curve; the zero-resistivity transition temperature, $T_{c}^{{\rm zero}}$, matches well with the onset transition temperature from magnetization, $T_{c}^{{\rm on;M}}$, as well as with the $T_{c}$ reported in references [\[]{}1,2[\]]{}. 
However, the onset transition temperature from resistivity ($T_{c}^{{\rm on}}$: the temperature at which the resistivity drops to 90% of the normal state resistivity) is found to be $\sim7.8$K, which is comparable to the optimized maximum $T_{c}^{{\rm on}}$ for this compound reported by Yu *et al.* [\[]{}2[\]]{}. The narrow superconducting transition width ($\sim1.3$K) in resistivity indicates the quality of the single crystal fibers (see Fig. \[fig2\]). The residual resistivity ratio $(RRR\thickapprox\frac{R(300\,{\rm K)}}{R(8\,{\rm K)}})$, which indicates the metallicity of a material, is found to be $\sim3.4$ for our sample. This value of RRR is much smaller than the corresponding value for good conductors, which categorizes the compound as a bad metal. ![\[fig2\](Color online) Zero field-cooled (ZFC) and field-cooled (FC) magnetization curves at 20Oe (open circles) and resistivity measured at zero field (open triangles). The onset superconducting transition temperature, $T_{c}^{{\rm on;M}}$, from magnetization and the zero-resistivity transition temperature, $T_{c}^{{\rm zero}}$, from resistivity measurements confirm the $T_{c}$ of the Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ superconductor.](2) Angular dependent transport properties -------------------------------------- In order to estimate the superconducting anisotropy, we have to assign an orientation axis to the single crystal fibers. Since we cannot assign a growth direction to our cylindrical single crystal fibers from XRD (the single crystals are very fine), we have adopted the b-axis along the length of the fibers, as given in reference [\[]{}1[\]]{}, because the same synthesis method was followed to grow these single crystal fibers. Figures \[fig3\](a) and \[fig3\](c) show the resistivity plots as a function of temperature in different applied magnetic fields from zero to 90kOe for H || b-axis and H $\bot$ b-axis, respectively. 
Three transition temperatures, $T_{c}^{{\rm on}}$, $T_{c}^{{\rm mid}}$ and $T_{c}^{{\rm off}}$, are marked in the figure using the criteria 90%$\rho_{n}$, 50%$\rho_{n}$ and 10%$\rho_{n}$ (where $\rho_{n}$ is the normal state resistivity at 8K), respectively. The $T_{c}$ shifts toward lower temperatures as the field increases, at rates of 0.05K/kOe and 0.02K/kOe for H || b-axis and H $\bot$ b-axis, respectively. The H–T phase diagrams at the three transition temperatures are plotted in Figs. \[fig3\](b) and \[fig3\](d) for the two orientations. In order to find the upper critical fields ($H_{c2}(0)$), these H–T curves are fitted with the empirical formula $H_{c2}(T)=H_{c2}(0)(1-(T/T_{c})^{2})$ [\[]{}1, 2[\]]{}; the fitted curves are then extrapolated to zero temperature to extract the $H_{c2}(0)$ values, which come out to be $\sim$180kOe and $\sim$390kOe at $T_{c}^{{\rm on}}$ for H || b-axis and H $\bot$ b-axis, respectively. Conventionally, the anisotropy is estimated by taking the ratio of the $H_{c2}(0)$ values in the two orientations, which gives $\sim2.2$. In order to corroborate the $\Gamma$ value further, we have measured the angular dependent resistivity $\rho(\theta)$ at different magnetic fields at temperatures close to $T_{c}$. ![\[fig3\](Color online) Temperature dependent resistivity plots at different applied fields varying from 0kOe to 90kOe (a) for H || b-axis and (c) for H $\bot$ b-axis. (b) and (d) show the H–T phase diagrams at the $T_{c}^{{\rm on}}$, $T_{c}^{{\rm mid}}$ and $T_{c}^{{\rm off}}$ transition temperatures. Dashed curves show the fits to the empirical formula $H_{c2}(T)=H_{c2}(0)(1-(T/T_{c})^{2})$. ](3) The insets of Figs. \[fig4\](a), (b), (c) and (d) show the $\rho(\theta)$ curves at 10kOe, 30kOe, 50kOe, 70kOe and 90kOe for T = 5.0K, 5.5K, 6.0K and 6.5K, respectively. All the $\rho(\theta)$ curves show a symmetric dip at $\theta=90$$^{\circ}$ and maxima at 0$^{\circ}$ and 180$^{\circ}$. 
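The conventional estimate can be sketched numerically. The minimal Python sketch below uses synthetic (not measured) H–T points; only $T_c\approx7.8$K and the extrapolated values $\sim$180kOe and $\sim$390kOe are taken from the text, and all function names are ours. Since the empirical law is linear in the single free parameter $H_{c2}(0)$ once $T_c$ is fixed, a one-parameter least-squares fit suffices:

```python
import numpy as np

def hc2_extrapolate(T, H, Tc):
    """One-parameter least-squares fit of H_c2(T) = H_c2(0)*(1 - (T/Tc)^2);
    returns the extrapolated zero-temperature value H_c2(0)."""
    x = 1.0 - (T / Tc) ** 2
    # minimize sum (H - Hc2_0 * x)^2  =>  Hc2_0 = <x,H>/<x,x>
    return float(np.dot(x, H) / np.dot(x, x))

# synthetic H-T phase boundaries (illustrative numbers, not measured data)
Tc = 7.8                                   # K, onset Tc in zero field
T = np.linspace(4.0, 7.5, 8)               # K
H_par  = 180.0 * (1 - (T / Tc) ** 2)       # H || b-axis, kOe
H_perp = 390.0 * (1 - (T / Tc) ** 2)       # H perpendicular to b-axis, kOe

Hc2_par  = hc2_extrapolate(T, H_par, Tc)
Hc2_perp = hc2_extrapolate(T, H_perp, Tc)
Gamma = Hc2_perp / Hc2_par                 # conventional anisotropy ratio
```

On the synthetic boundaries above the fit recovers the input values exactly, and the ratio reproduces the $\Gamma\sim2.2$ quoted in the text; with real data one would also propagate the criterion dependence ($T_c^{\rm on}$ vs $T_c^{\rm mid}$ vs $T_c^{\rm off}$) into the uncertainty of $\Gamma$.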
In all the curves, the center of the dip shifts from zero to non-zero resistivity as the temperature and field increase. The main panels of Fig. \[fig4\] show the $\rho(\theta)$ curves at 10kOe, 30kOe, 50kOe, 70kOe and 90kOe rescaled at temperatures (a) 5.0K (b) 5.5K (c) 6.0K and (d) 6.5K, respectively, using the rescaling function: $$\tilde{H}=H\,\sqrt{{\rm sin^{2}}\theta+\Gamma^{2}{\rm cos^{2}}\theta}$$ where $\Gamma$ is the anisotropy and $\theta$ is the angle between the field and the crystal axis. All rescaled curves at a fixed temperature are now isotropic, i.e., they collapse onto a single curve. In this method only the anisotropy parameter $\Gamma$ is adjusted to bring the data into isotropic form; the value of $\Gamma$ that achieves this collapse is the anisotropy at that temperature. ![\[fig4\](Color online) Insets of panels (a) to (d) show resistivity ($\rho$) plots as a function of the angle $\theta$ (between the b-axis and the applied magnetic field) at fields of 10kOe, 30kOe, 50kOe, 70kOe and 90kOe for temperatures (a) 5K (b) 5.5K (c) 6.0K and (d) 6.5K; the main panels show the resistivity as a function of the scaling field $\tilde{H}$ = $H\,\sqrt{{\rm sin^{2}}\theta+\Gamma^{2}{\rm cos^{2}}\theta}$.](4) Figure \[fig5\] shows the temperature dependent anisotropy ($\Gamma(T)$) obtained from the angular resistivity data. The anisotropy decreases slowly as the temperature goes down in the superconducting state. As Zhang *et al.* [1] have explained, this temperature dependence of the anisotropy may be due to the opening of superconducting gaps of different magnitude on different Fermi surface sheets, each associated with bands of distinct electronic anisotropy.
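The scaling collapse described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' analysis code: it builds synthetic $\rho(\theta, H)$ curves from an assumed master curve with a true anisotropy of 2.5, then scans candidate $\Gamma$ values and picks the one that best collapses all curves onto a single $\rho(\tilde{H})$ curve. All function names and the synthetic master-curve shape are hypothetical.

```python
import numpy as np

def rescaled_field(H, theta_deg, gamma):
    """H~ = H * sqrt(sin^2(theta) + Gamma^2 * cos^2(theta))."""
    th = np.radians(theta_deg)
    return H * np.sqrt(np.sin(th) ** 2 + gamma ** 2 * np.cos(th) ** 2)

def collapse_spread(curves, gamma, n_grid=200):
    """Mean spread between curves after rescaling; small when they collapse.

    curves: list of (theta_deg, H, rho) tuples with H increasing."""
    h_tilde = [rescaled_field(H, th, gamma) for th, H, _ in curves]
    lo = max(h.min() for h in h_tilde)   # common overlap window in H~
    hi = min(h.max() for h in h_tilde)
    grid = np.linspace(lo, hi, n_grid)
    stacked = np.array([np.interp(grid, h, rho)
                        for h, (_, _, rho) in zip(h_tilde, curves)])
    return stacked.std(axis=0).mean()

def fit_anisotropy(curves, gammas):
    """The Gamma minimizing the spread is the anisotropy at this temperature."""
    spreads = [collapse_spread(curves, g) for g in gammas]
    return gammas[int(np.argmin(spreads))]

# Synthetic rho(theta, H) data generated from an assumed monotonic
# master curve with true anisotropy 2.5 (purely illustrative).
H = np.linspace(5.0, 95.0, 91)               # field in kOe
master = lambda ht: ht / (ht + 30.0)         # assumed rho(H~) shape
curves = [(th, H, master(rescaled_field(H, th, 2.5)))
          for th in (0.0, 30.0, 60.0, 90.0)]
gamma_est = fit_anisotropy(curves, np.linspace(1.5, 3.5, 201))
```

For real data only the measured $\rho(\theta)$ values at each field would enter `curves`; repeating the fit at each temperature yields the $\Gamma(T)$ dependence shown in Fig. \[fig5\].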
Li *et al.* have reported similar temperature dependent anisotropy behavior for Rb$_{0.76}$Fe$_{2}$Se$_{1.6}$, Rb$_{0.8}$Fe$_{1.6}$Se$_{2}$, Ba$_{0.6}$K$_{0.4}$Fe$_{2}$As$_{2}$ and Ba(Fe$_{0.92}$Co$_{0.08})_{2}$As$_{2}$ single crystals and explained that it may be due to the multiband effect or the gradual setting in of pair breaking due to the spin-paramagnetic effect [9]. Shahbazi *et al.* have reported similar results for Fe$_{1.04}$Te$_{0.6}$Se$_{0.4}$ and BaFe$_{1.9}$Co$_{0.8}$As$_{2}$ single crystals through angular dependent transport measurements [10,11]. Various theoretical models of the Fermi surface support the presence of multiband superconducting gaps in Fe-based superconductors [12,13,14]. Indeed, density functional theory (DFT) calculations have shown that the Nb$_{2}$Pd$_{0.81}$S$_{5}$ superconductor is a multi-band superconductor [1]. Compared to MgB$_{2}$ [15,16] and the cuprate superconductors [17] the anisotropy of Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ is very small; however, it is comparable with that of some of the iron based (Fe-122 type) superconductors [9]. ![\[fig5\](Color online) Anisotropy variation with temperature measured from angular dependent resistivity.](5) Conclusions =========== In conclusion, we have successfully synthesized Nb$_{2}$Pd$_{0.73}$S$_{5.7}$ single crystal fibers via a slow cooling solid state reaction method. The superconducting properties of the sample have been confirmed via magnetic and transport measurements. Upper critical fields have been estimated in the conventional way from the magneto-transport study. The angular dependence of the resistivity has been measured in magnetic fields at different temperatures in the superconducting state and then rescaled, using a scaling function, into an isotropic form that directly provides the anisotropy. The anisotropy obtained this way is $\sim2.5$ near $T_{c}$, compared with the $\sim2.2$ obtained from the conventional method.
Anisotropy decreases slowly with decreasing temperature, which is attributed to the multi-band nature of the superconductor. AKY would like to thank CSIR, India for the SRF grant. [References]{} Q. Zhang, G. Li, D. Rhodes, A. Kiswandhi, T. Besara, B. Zeng, J. Sun, T. Siegrist, M. D. Johannes, L. Balicas, Scientific Reports **3**, 1446 (2013). H. Yu, M. Zuo, L. Zhang, S. Tan, C. Zhang, Y. Zhang, J. Am. Chem. Soc. **135**, 12987 (2013). R. Jha, B. Tiwari, P. Rani, V. P. S. Awana, arXiv:1312.0425 (2013). D. A. Keszler, J. A. Ibers, M. Y. Shang and J. X. Lu, J. Solid State Chem. **57**, 68 (1985). W. E. Lawrence and S. Doniach, in Proceedings of the 12th International Conference on Low Temperature Physics, edited by E. Kanda (Keigaku, Tokyo, 1971). Z. S. Wang, H. Q. Luo, C. Ren, H. H. Wen, Phys. Rev. B **78**, 140501(R) (2008). G. Blatter, V. B. Geshkenbein, and A. I. Larkin, Phys. Rev. Lett. **68**, 875 (1992). Y. Jia, P. Cheng, L. Fang, H. Yang, C. Ren, L. Shan, C. Z. Gu, H. H. Wen, Supercond. Science and Technology **21**, 105018 (2008). C. H. Li, B. Shen, F. Han, X. Zhu, and H. H. Wen, Phys. Rev. B **83**, 184521 (2011). M. Shahbazi, X. L. Wang, S. X. Dou, H. Fang, and C. T. Lin, J. Appl. Phys. **113**, 17E115 (2013). M. Shahbazi, X. L. Wang, S. R. Ghorbani, S. X. Dou, and K. Y. Choi, Appl. Phys. Lett. **100**, 102601 (2012). Q. Han, Y. Chen and Z. D. Wang, EPL **82**, 37007 (2008). C. Ren, Z. S. Wang, H. Q. Luo, H. Yang, L. Shan, and H. H. Wen, Phys. Rev. Lett. **101**, 257006 (2008). V. Cvetkovic, Z. Tesanovic, Europhysics Letters **85**, 37002 (2009). A. Rydh, U. Welp, A. E. Koshelev, W. K. Kwok, G. W. Crabtree, R. Brusetti, L. Lyard, T. Klein, C. Marcenat, B. Kang, K. H. Kim, K. H. P. Kim, H.-S. Lee, and S.-I. Lee, Phys. Rev. B **70**, 132503 (2004). K. Takahashi, T. Atsumi, N. Yamamoto, M. Xu, H. Kitazawa, and T. Ishida, Phys. Rev. B **66**, 012501 (2002). C. P. Poole Jr., H.
A. Farach, R. J. Creswick, Superconductivity (Elsevier, 2007).
--- abstract: 'We study the effects of Supernova (SN) feedback on the formation of galaxies using hydrodynamical simulations in a $\Lambda$CDM cosmology. We use an extended version of the code GADGET-2 which includes chemical enrichment and energy feedback by Type II and Type Ia SN, metal-dependent cooling and a multiphase model for the gas component. We focus on the effects of SN feedback on the star formation process, galaxy morphology, evolution of the specific angular momentum and chemical properties. We find that SN feedback plays a fundamental role in galaxy evolution, producing a self-regulated cycle for star formation, preventing the early consumption of gas and allowing disks to form at late times. The SN feedback model is able to reproduce the expected dependence on virial mass, with less massive systems being more strongly affected.' --- Introduction ============ Supernova explosions play a fundamental role in galaxy formation and evolution. On the one hand, they are the main source of heavy elements in the Universe, and the presence of such elements substantially enhances the cooling of gas (White & Frenk 1991). On the other hand, SNe eject a significant amount of energy into the interstellar medium. It is believed that SN explosions are responsible for generating a self-regulated cycle for star formation through the heating and disruption of cold gas clouds, as well as for triggering important galactic winds such as those observed (e.g. Martin 2004). Smaller systems are more strongly affected by SN feedback, because their shallower potential wells are less efficient in retaining baryons (e.g. White & Frenk 1991). Numerical simulations have become an important tool to study galaxy formation, since they can track the joint evolution of dark matter and baryons in the context of a cosmological model.
However, this has proven to be an extremely complex task, because of the need to cover a large dynamical range and to describe, at the same time, large-scale processes such as tidal interactions and mergers and small-scale processes related to stellar evolution. One of the main problems that galaxy formation simulations have repeatedly found is the inability to reproduce the morphologies of the disk galaxies observed in the Universe. This is generally referred to as the angular momentum problem, which arises when baryons transfer most of their angular momentum to the dark matter component during interactions and mergers (Navarro & Benz 1991; Navarro & White 1994). As a result, disks are too small and concentrated with respect to real spirals. More recent simulations which include prescriptions for SN feedback have been able to produce more realistic disks (e.g. Abadi et al. 2003; Robertson et al. 2004; Governato et al. 2007). These works have pointed out the importance of SN feedback as a key process to prevent the loss of angular momentum, regulate the star formation activity and produce extended, young disk-like components. In this work, we investigate the effects of SN feedback on the formation of galaxies, focusing on the formation of disks. For this purpose, we have run simulations of a Milky-Way type galaxy using an extended version of the code [GADGET-2]{} which includes chemical enrichment and energy feedback by SN. A summary of the simulation code and the initial conditions is given in Section \[simus\]. In Section \[results\] we investigate the effects of SN feedback on galaxy morphology, star formation rates, evolution of the specific angular momentum and chemical properties. We also investigate the dependence of the results on virial mass. Finally, in Section \[conclusions\] we give our conclusions. Simulations {#simus} =========== We use the simulation code described in Scannapieco et al. (2005, 2006).
This is an extended version of the Tree-PM SPH code [GADGET-2]{} (Springel & Hernquist 2002; Springel 2005), which includes chemical enrichment and energy feedback by SN, metal-dependent cooling and a multiphase model for the gas component. Note that our star formation and feedback model is substantially different from that of Springel & Hernquist (2003), but we do include their treatment of the UV background. We focus on the study of a disk galaxy similar to the Milky Way in its cosmological context. For this purpose we simulate a system with a $z=0$ halo mass of $\sim 10^{12}$ $h^{-1}$ M$_\odot$ and a spin parameter of $\lambda\sim 0.03$, extracted from a large cosmological simulation and resimulated with improved resolution. It was selected to have no major mergers since $z=1$ in order to give time for a disk to form. The simulations adopt a $\Lambda$CDM Universe with the following cosmological parameters: $\Omega_\Lambda=0.7$, $\Omega_{\rm m}=0.3$, $\Omega_{\rm b}=0.04$, a normalization of the power spectrum of $\sigma_8=0.9$ and $H_0=100\ h$ km s$^{-1}$ Mpc$^{-1}$ with $h=0.7$. The particle mass is $1.6\times 10^7\ h^{-1}$ M$_\odot$ for dark matter and $2.4\times 10^6$ $h^{-1}$ M$_\odot$ for baryonic particles, and we use a maximum gravitational softening of $0.8\ h^{-1}$ kpc for gas, dark matter and star particles. At $z=0$ the halo of our galaxy contains $\sim 1.2\times 10^5$ dark matter and $\sim 1.5\times 10^5$ baryonic particles within the virial radius. In order to investigate the effects of SN feedback on the formation of galaxies, we compare two simulations which only differ in the inclusion of the SN energy feedback model. These simulations are part of the series analysed in Scannapieco et al. (2008), where an extensive investigation of the effects of SN feedback on galaxies and a parameter study are performed. In this work, we use the no-feedback run NF (run without the SN energy feedback model) and the feedback run E-0.7.
We refer the interested reader to Scannapieco et al. (2008) for details in the characteristics of these simulations. Results ======= In Fig. \[maps\] we show stellar surface density maps at $z=0$ for the NF and E-0.7 runs. Clearly, SN feedback has an important effect on the final morphology of the galaxy. If SN feedback is not included, as we have done in run NF, the stars define a spheroidal component with no disk. On the contrary, the inclusion of SN energy feedback allows the formation of an extended disk component. ![Edge-on stellar surface density maps for the no-feedback (NF, left-hand panel) and feedback (E-0.7, right-hand panel) simulations at $z=0$. The colors span 4 orders of magnitude in projected density, with brighter colors representing higher densities. []{data-label="maps"}](map-NF.eps "fig:"){width="60mm"} ![Edge-on stellar surface density maps for the no-feedback (NF, left-hand panel) and feedback (E-0.7, right-hand panel) simulations at $z=0$. The colors span 4 orders of magnitude in projected density, with brighter colors representing higher densities. []{data-label="maps"}](map-E-0.7.eps "fig:"){width="60mm"} ![Left: Star formation rates for the no-feedback (NF) and feedback (E-0.7) runs. Right: Mass fraction as a function of formation time for stars of the disk and spheroidal components in simulation E-0.7. []{data-label="sfr_stellarage"}](sfr-copen.ps "fig:"){width="70mm"}![Left: Star formation rates for the no-feedback (NF) and feedback (E-0.7) runs. Right: Mass fraction as a function of formation time for stars of the disk and spheroidal components in simulation E-0.7. []{data-label="sfr_stellarage"}](fig6.ps "fig:"){width="64mm"} The generation of a disk component is closely related to the star formation process. In the left-hand panel of Fig. \[sfr\_stellarage\] we show the star formation rates (SFR) for our simulations. 
In the no-feedback case (NF), the gas cools down and concentrates at the centre of the potential well very early, producing a strong starburst which feeds the galaxy spheroid. As a result of the early consumption of gas to form stars, the SFR is low at later times. On the contrary, the SFR obtained for the feedback case is lower at early times, indicating that SN feedback has contributed to self-regulate the star formation process. This is the result of the heating of gas and the generation of galactic winds. In this case, the amount of gas available for star formation is larger at recent times and consequently the SFR is higher. In the right-hand panel of Fig. \[sfr\_stellarage\] we show the mass fraction as a function of formation time for stars of the disk and spheroidal components in our feedback simulation (see Scannapieco et al. 2008 for the method used to segregate stars into disk and spheroid). From this plot it is clear that star formation at recent times ($z\lesssim 1$) significantly contributes to the formation of the disk component, while stars formed at early times contribute mainly to the spheroid. In this simulation, $\sim 50$ per cent of the mass of the disk forms since $z=1$. Note that in the no-feedback case, only a few per cent of the final stellar mass of the galaxy is formed since $z=1$. Our simulation E-0.7 has produced a galaxy with an extended disk component. By using the segregation of stars into disk and spheroid mentioned above, we can calculate the masses of the different components, as well as characteristic scales. The disk of the simulated galaxy has a mass of $3.3\times 10^{10}\ h^{-1}\ M_\odot$, a half-mass radius of $5.7\ h^{-1}$ kpc, a half-mass height of $0.5\ h^{-1}$ kpc, and a half-mass formation time of $6.3$ Gyr. The spheroid mass and half-mass formation time are $4.1\times 10^{10}\ h^{-1}\ M_\odot$ and $2.5$ Gyr, respectively. 
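Characteristic quantities like the half-mass radius and half-mass formation time quoted here can be computed directly from the star particle data. The following is a hypothetical sketch (not the actual analysis code of Scannapieco et al.), assuming arrays of particle masses and of the property in question:

```python
import numpy as np

def half_mass_value(values, masses):
    """Value of `values` below which half of the total mass lies.

    With particle radii it gives the half-mass radius; with
    formation times, the half-mass formation time."""
    order = np.argsort(values)
    cum_mass = np.cumsum(masses[order])
    idx = np.searchsorted(cum_mass, 0.5 * cum_mass[-1])
    return values[order][idx]

# Illustrative check with equal-mass particles at radii 1..100:
radii = np.arange(1.0, 101.0)
masses = np.ones_like(radii)
r_half = half_mass_value(radii, masses)   # half the mass lies within r_half
```

Applied separately to the disk and spheroid particle sets, the same helper yields the component-by-component values quoted in the text.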
It is clear that the characteristic half-mass times are very different in the two cases, the disk component being formed by younger stars. In Fig. \[j\_evolution\] we show the evolution of the specific angular momentum of the dark matter (within the virial radius) and of the cold gas plus stars (within twice the optical radius) for the no-feedback case (left-hand panel) and for the feedback case E-0.7 (right-hand panel). The evolution of the specific angular momentum of the dark matter component is similar in the two cases, growing as a result of tidal torques at early epochs and being conserved from turnaround ($z\approx 1.5$) until $z=0$. On the contrary, the cold baryonic components in the two cases differ significantly, in particular at late times. In the no-feedback case (NF), much angular momentum is lost through dynamical friction, particularly through a satellite which is accreted onto the main halo at $z\sim 1$. In E-0.7, on the other hand, the cold gas and stars lose rather little specific angular momentum between $z=1$ and $z=0$. Two main factors contribute to this difference. Firstly, in E-0.7 a significant number of young stars form between $z=1$ and $z=0$ with high specific angular momentum (these stars form from high specific angular momentum gas which becomes cold at late times); and secondly, dynamical friction affects the system much less than in NF, since satellites are less massive. At $z=0$, disk stars have a specific angular momentum comparable to that of the dark matter, while spheroid stars have a much lower specific angular momentum. ![Dashed lines show the specific angular momentum as a function of time for the dark matter that, at $z=0$, lies within the virial radius of the system for NF (left panel) and E-0.7 (right panel). We also show with dots the specific angular momentum for the baryons which end up as cold gas or stars in the central $20\ h^{-1}$ kpc at $z=0$. 
The arrows show the specific angular momentum of disk and spheroid stars. []{data-label="j_evolution"}](fig5a.ps "fig:"){width="65mm"} ![Dashed lines show the specific angular momentum as a function of time for the dark matter that, at $z=0$, lies within the virial radius of the system for NF (left panel) and E-0.7 (right panel). We also show with dots the specific angular momentum for the baryons which end up as cold gas or stars in the central $20\ h^{-1}$ kpc at $z=0$. The arrows show the specific angular momentum of disk and spheroid stars. []{data-label="j_evolution"}](fig5b.ps "fig:"){width="65mm"} In Fig. \[metal\_profiles\] we show the oxygen profiles for the no-feedback (NF) and feedback (E-0.7) runs. From this figure we can see that SN feedback strongly affects the chemical distributions. If no feedback is included, the gas is enriched only in the very central regions. Including SN feedback triggers a redistribution of mass and metals through galactic winds and fountains, giving the gas component a much higher level of enrichment out to large radii. A linear fit to this metallicity profile gives a slope of $-0.048$ dex kpc$^{-1}$ and a zero-point of $8.77$ dex, consistent with the observed values in real disk galaxies (e.g. Zaritsky et al. 1994). ![Oxygen abundance for the gas component as a function of radius projected onto the disk plane for our no-feedback simulation (NF) and for the feedback case E-0.7. The error bars correspond to the standard deviation around the mean. []{data-label="metal_profiles"}](fig9.ps){width="80mm"} Finally, we investigate the effects of SN feedback on systems of different mass. For that purpose we have scaled down our initial conditions to generate galaxies of $10^{10}\ h^{-1}\ M_\odot$ and $10^9\ h^{-1}\ M_\odot$ halo mass, and simulated their evolution including the SN feedback model (with the same parameters as E-0.7). These simulations are TE-0.7 and DE-0.7, respectively. In Fig.
\[dwarf\] we show the SFRs for these simulations, as well as for E-0.7, normalized to the scale factor ($\Gamma=1$ for E-0.7, $\Gamma=10^{-2}$ for TE-0.7 and $\Gamma=10^{-3}$ for DE-0.7). From this figure it is clear that SN feedback has a dramatic effect on small galaxies. This is because more violent winds develop and baryons are unable to condense and form stars. In the smallest galaxy, the SFR is very low at all times because most of the gas has been lost after the first starburst episode. This proves that our model is able to reproduce the expected dependence of SN feedback on virial mass, without changing the relevant physical parameters. ![SFRs for simulations DE-0.7 ($10^{9}\ h^{-1}$ M$_\odot$), TE-0.7 ($10^{10}\ h^{-1}$ M$_\odot$) and E-0.7 ($10^{12}\ h^{-1}$ M$_\odot$) run with energy feedback. To facilitate comparison, the SFRs are normalized to the scale factor $\Gamma$. []{data-label="dwarf"}](fig10b.ps){width="80mm"} Conclusions =========== We have run simulations of a Milky Way-type galaxy in its cosmological setting in order to investigate the effects of SN feedback on the formation of galaxy disks. We compare two simulations with the only difference being the inclusion of the SN energy feedback model of Scannapieco et al. (2005, 2006). Our main results can be summarized as follows: - [ SN feedback helps to establish a self-regulated cycle for star formation in galaxies, through the heating and disruption of cold gas and the generation of galactic winds. The regulation of star formation allows gas to be maintained in a hot halo which can condense at late times, becoming a reservoir for recent star formation. This contributes significantly to the formation of disk components. ]{} - [When SN feedback is included, the specific angular momentum of the baryons is conserved and disks with the correct scale-lengths are obtained.
This results from the late collapse of gas with high angular momentum, which becomes available to form stars at later times, when the system does not suffer from strong interactions. ]{} - [ The injection of SN energy into the interstellar medium generates a redistribution of chemical elements in galaxies. If energy feedback is not considered, only the very central regions where stars are formed are contaminated. On the contrary, the inclusion of feedback triggers a redistribution of metals, since gas is heated and expands, contaminating the outer regions of galaxies. In this case, metallicity profiles in agreement with observations are produced. ]{} - [ Our model is able to reproduce the expected dependence of SN feedback on virial mass: as we go to less massive systems, SN feedback has stronger effects: the star formation rates (normalized to mass) are lower, and more violent winds develop. This proves that our model is well suited for studying the cosmological growth of structure, where large systems are assembled through mergers of smaller substructures and systems form simultaneously over a wide range of scales. ]{} Abadi et al., 2003, *ApJ*, 591, 499. Governato et al., 2007, *MNRAS*, 374, 1479. Martin, 2004, *A&AS*, 205, 8901. Navarro & Benz, 1991, *ApJ*, 380, 320. Navarro & White, 1993, *MNRAS*, 265, 271. Robertson et al., 2004, *ApJ*, 606, 32. Scannapieco et al., 2005, *MNRAS*, 364, 552. Scannapieco et al., 2006, *MNRAS*, 371, 1125. Scannapieco et al., 2008, *MNRAS*, in press (astro-ph/0804.3795). Springel & Hernquist, 2002, *MNRAS*, 333, 649. Springel & Hernquist, 2003, *MNRAS*, 339, 289. Springel, 2005, *MNRAS*, 364, 1105. White & Frenk, 1991, *ApJ*, 379, 52. Zaritsky et al., 1994, *ApJ*, 420, 87.
--- abstract: | The paper proposes a method for measuring available bandwidth based on probing the network with packets of various sizes (the Variable Packet Size method, VPS). The boundaries of applicability of the model, which depend on the accuracy of packet delay measurements, have been found, and we have derived a formula for the upper limit of measurable bandwidth. A computer simulation has been performed and the relationship between the measurement error of the available bandwidth and the number of measurements has been found. Experimental verification with the RIPE Test Box measuring system has shown that the suggested method has advantages over existing measurement techniques. The *pathload* utility was chosen as an alternative measurement technique, and to ensure reliable results, statistics were taken directly from the router by an SNMP agent.\ author: - - title: Simulation technique for available bandwidth estimation --- Available bandwidth, RIPE Test Box, packet size, end-to-end delay, variable delay component. Introduction ============ Various real-time applications in the Internet, especially the transmission of audio and video information, are becoming more and more popular. The major factors defining the quality of service are the quality of the equipment (the codec and the video server) and the available bandwidth of the Internet link. ISPs should provide the required bandwidth for voice and video applications in order to guarantee the delivery of the demanded services in the global network. In this paper, a network path is defined as a sequence of links (hops) which forward packets from the sender to the receiver. There are various definitions for throughput metrics, but we will use the approaches accepted in [@s5; @s9; @s10]. Two bandwidth metrics that are commonly associated with a path are the capacity $C$ and the available bandwidth $B_{av}$ (see Fig. \[f1\]).
The [*capacity C*]{} is the maximum IP-layer throughput that the path can provide to a flow when there is no competing traffic load (cross traffic). The [*available bandwidth*]{} $B_{av}$, on the other hand, is the maximum IP-layer throughput that the path can provide to a flow, given the path’s current cross traffic load. The link with the minimum transmission rate determines the capacity of the path, while the link with the minimum unused capacity limits the available bandwidth. Moreover, measuring the available bandwidth is important for providing network applications with information on how to control their incoming and outgoing traffic and fairly share the network bandwidth. ![Illustration of throughput metrics[]{data-label="f1"}](il1){height="2.525cm"} Another related throughput metric is the [*Bulk-Transfer-Capacity*]{} (BTC). The BTC of a path in a certain time period is the throughput of a bulk TCP transfer, when the transfer is limited only by the network resources and not by limitations at the end-systems. The intuitive definition of BTC is the expected long-term average data rate (bits per second) of a single ideal TCP implementation over the path in question. In order to construct a complete picture of the global network (for monitoring and bottleneck troubleshooting) and to develop the standards describing new applications, a modern measuring infrastructure should be installed. In this paper we describe the use of the RIPE Test Box measurement system, which is widely deployed [@s7]. According to [@s7], this system does not measure the available bandwidth, but it collects numerical values characterizing key network parameters such as packet delay, jitter, routing path, etc. In this paper we attempt to provide a simple and universal model that allows us to estimate the available bandwidth based on data received from the RIPE Test Box measurement infrastructure. The method is based on the [*Variable Packet Size*]{} (VPS) method and was used in [@s6].
This method allows us to estimate the network capacity of hop $i$ by using the relation between the Round-Trip Time (RTT) and the packet size $W$. The model and its applicability =============================== The well-known expression for the throughput metric describing the relation between network delay and packet size is Little’s Law [@s13]: $$B_{av}=W/D, \label{e1}$$ where $B_{av}$ is the available bandwidth, $W$ is the size of the transmitted packet and $D$ is the network packet delay (One Way Delay). This formula is ideal for calculating the bandwidth between two points on the network that are connected without any routing devices. In the general case, the delay is caused by constant network factors such as propagation delay, transmission delay, per-packet router processing time, etc. [@s9]. According to [@s1], Little’s Law can be modified with $D^{fixed}$: $$B_{av}=W/(D-D^{fixed}), \label{e2}$$ where $D^{fixed}$ is the [*minimum fixed delay*]{} for the packet size $W$. The difference between the delays $D$ and $D^{fixed}$ is the [*variable delay component*]{} $d_{var}$. In paper [@s3] it was shown that the variable delay is exponentially distributed. Choi [@s2] and Hohn [@s12] showed that the minimum fixed delay component $D^{fixed}(W)$ for the packet size $W$ is a linear (or affine) function of the size: $$D^{fixed}(W)=W\sum_{i=1}^h 1/C_i + \sum_{i=1}^h \delta_i, \label{e3}$$ where $C_i$ is the capacity of the corresponding link and $\delta_i$ is the propagation delay. To prove this assumption, the authors experimentally found the minimum fixed delays for packets of the same size on three different routes and constructed the dependence of the delay on the packet size $W$. In order to eliminate the minimum fixed delay $D^{fixed}(W)$ from Eqn. (\[e2\]), we suggest probing the network link with packets of different sizes [@s1], with the packet sizes differing as much as possible without causing router fragmentation. Then Eqn.
(\[e2\]) can be transformed into a form suitable for measuring the available bandwidth: $$B_{av}=\frac{W_2-W_1}{D_2-D_1} \label{e4}$$ This approach eliminates the measurement limitations imposed by the variable delay component $d_{var}$. The variable delay component is the cause of the rather large measurement errors of other methods, which will be described in the last section of this paper. The proposed model is quite simple, but it is still difficult to find an accurate measuring infrastructure. The first problem concerns the applicability of the model, i.e., what range of throughput metrics can be measured with this method. The second issue is the number of measurements (groups of packets) needed to achieve a given accuracy. The first problem can be addressed via the measurement error (based on the delay measurement accuracy): $$\eta=\frac{\Delta B}{B}=\frac{2\Delta D}{D_2-D_1}, \label{e5}$$ where $\eta$ is the relative error of the available bandwidth measurement, $\Delta B$ is the absolute error of the available bandwidth measurement and $\Delta D$ is the precision of measuring the packet delay. With this expression we can easily find an upper bound $\bar{B}$ for the available bandwidth: $$\bar{B}=\frac{W_2-W_1}{2\Delta D}\eta \label{e6}$$ Thus, with the RIPE Test Box, which measures the delay with a precision of $\Delta D=2\cdot10^{-6}$ seconds, we can measure the available bandwidth up to the upper bound $\ensuremath{\bar{B}}=300$ [*Mbps*]{} with relative error $\eta=10\%$. Moreover, if we used the standard *ping* utility, with a relative error $\eta=25\%$ and a precision of $\Delta D=10^{-3}$ seconds, we could measure the available network bandwidth only up to $1.5$ [*Mbps*]{}. Experimental comparison of different methods ============================================ In this part of the paper we compare different methods of measuring the available bandwidth using the results of our experiments.
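Eqns. (\[e4\])–(\[e6\]) are straightforward to apply in code. The sketch below uses hypothetical helper names; packet sizes are in bytes, delays in seconds, bandwidth in bits per second. The packet-size spread of 1500 bytes in the upper-bound example is an assumption consistent with the maximum Ethernet payload.

```python
def available_bandwidth(w1_bytes, w2_bytes, d1_s, d2_s):
    """Eqn (4): B_av = (W2 - W1) / (D2 - D1), in bits per second."""
    return 8.0 * (w2_bytes - w1_bytes) / (d2_s - d1_s)

def upper_bound(w1_bytes, w2_bytes, delta_d_s, eta):
    """Eqn (6): largest bandwidth measurable with relative error eta,
    given delay measurement precision delta_d_s."""
    return 8.0 * (w2_bytes - w1_bytes) / (2.0 * delta_d_s) * eta

# RIPE Test Box case: 100- and 1100-byte probes with an averaged
# delay difference of 0.000815 s (the value measured in the
# experiment described below).
b_av = available_bandwidth(100, 1100, 0.0, 0.000815)   # ~9.8 Mbps

# Upper measurable bound for a 1500-byte packet-size spread, 2 us
# delay precision and 10% relative error: ~300 Mbps.
b_max = upper_bound(0, 1500, 2e-6, 0.10)
```

The same `upper_bound` call with $\Delta D = 10^{-3}$ s and $\eta = 25\%$ reproduces the 1.5 Mbps limit quoted for *ping*.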
The experiment was divided into three stages. In the first stage we used the RIPE Test Box measurement system with two different packet sizes. The number of measurement systems in the global measurement infrastructure reaches 80; these points cover the major world Internet centers, reaching their highest density in Europe. The measurement error of the packet delay is 2-12 $\mu s$ [@s14]. In preparation for the experiments, three Test Boxes were installed in Moscow, Samara and Rostov on Don in Russia during 2006-2008 with the support of RFBR grant 06-07-89074. For further analysis we collected several data sets containing up to 3000 results in different directions, including Samara - Amsterdam (tt01.ripe.net - tt143.ripe.net). Based on these data, we calculated the available bandwidth and the dependence of the measurement error on the number of measurements (see Fig. \[f3\]). The second stage was comparing the data obtained with our method with the results of traditional throughput measurement methods. *Pathload* was selected as the tool implementing a traditional measurement method [@s10]. This software is considered one of the best tools for assessing the available bandwidth. *Pathload* uses Self-Loading Periodic Streams (SLoPS). It is based on a client-server architecture, which is a disadvantage, since the utility must be installed on both hosts. An advantage of *pathload* is that it does not require root privileges, as the utility sends only UDP packets. *Pathload* reports its results as a range of values rather than a single value: the middle of the range corresponds to the average throughput, while the width of the range reflects the variation of the available bandwidth during the measurements. The third stage involves the comparison of the data obtained in the first and second stages with data taken directly from the SSAU router, which serves the narrowest part of the network.
![image](il2){height="6cm"} The experiment between points tt143.ripe.net (Samara State Aerospace University) and tt146.ripe.net (FREENet, Moscow) consists of three parts: 1. Measuring the available bandwidth by testing pairs of packets of different sizes using the RIPE Test Box measurement system (packet sizes of 100 and 1100 bytes); 2. Measuring the available bandwidth using the *pathload* utility; 3. Measuring the available bandwidth by MRTG on the SSAU router which serves the narrowest hop of the route (see Fig \[f2\]). It is worth noting that all three measurements should be conducted simultaneously in order to maximize the reliability of the statistics. The structure of the RIPE Test Box measuring system meets all the requirements of our method: it allows changing the size of the probe packet and finding the delay with high precision. By default, the test packet size is 100 bytes. There are special settings that allow adding test packets of up to 1500 bytes at the desired frequency. In our case it is reasonable to add a packet size of 1100 bytes. It should be noted that testing with these packets does not begin until the day after a special request is sent. In order to gain access to the test results it is necessary to apply for remote access (*telnet*) to the RIPE Test Box on port 9142. The data includes information about the delays of packets of different sizes. In order to extract the data it is necessary to identify each packet on the receiving and transmitting sides. First, we examine the sender's side:

------------------------------------------------------- ----------------
SNDP 9 1263374005 -h tt01.ripe.net -p 6000 -n 1024 -s   1353080538
SNDP 9 1263374005 -h tt146.ripe.net -p 6000 -n 100 -s   **1353080554**
SNDP 9 1263374005 -h tt103.ripe.net -p 6000 -n 100 -s   1353080590
------------------------------------------------------- ----------------

: The data of sending box[]{data-label="t1"}

The last value in each line is the serial number of the packet.
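Extracting serial numbers and delays from these records can be automated. The sketch below assumes the field layout visible in the samples shown here and in the next section (packet size after `-n` and serial number after `-s` on the sending side; delay first and serial number fourth in the numeric tail of a receiving-side record); it is an illustration only, not part of the RIPE Test Box tooling:

```python
def parse_sndp(line):
    """Return (serial, size_bytes) from a sender-side SNDP record."""
    toks = line.split()
    # serial number follows the -s flag, packet size follows the -n flag
    return int(toks[toks.index("-s") + 1]), int(toks[toks.index("-n") + 1])

def parse_rcdp_tail(tail):
    """Return (serial, delay_s) from the numeric tail of an RCDP record."""
    toks = tail.split()
    # assumed layout: delay first, serial number fourth
    return int(toks[3]), float(toks[0])
```

Matching the serial numbers returned by the two parsers pairs each sent packet with its measured delay.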
The same packet must then be found on the receiving side of the channel. Below is a sample line from the receiving side. [ll]{} [RCDP 12 2 89.186.245.200 55730 193.233.1.69 6000]{} & [1263374005.779364]{}\ [**0.009001** 0X2107 0X2107 **1353080554** 0.000001 0.000001]{}\ \ [RCDP 12 2 200.19.119.120 57513 193.233.1.69 6000]{} & [1263374005.905792]{}\ [0.160090 0X2107 0X2107 1353080554 0.000003 0.000001]{}\ For a given packet serial number it is easy to find the packet delay. In this case it is 0.009001 sec. The following packet, 1353091581, has a size of 1100 bytes and a delay of 0.027033 seconds. Thus, the difference is 0.018032 seconds. The other delay values are processed similarly. The mean value of $D_2-D_1$ should be used in Eqn. (\[e4\]), so it is necessary to average several consecutive values. In the present experiment, the averaged difference $D_{av}(1100)-D_{av}(100)$ amounted to 0.000815 seconds in the direction $tt143\rightarrow tt146$. Then the available bandwidth can be calculated as: $$B_{av}(tt143\rightarrow tt146)=\frac{8\times 1000}{0.000815}=9.8 \textit{ Mbps}$$ The averaged difference in the direction $tt146\rightarrow tt143$ was 0.001869 seconds. Then the available bandwidth is: $$B_{av}(tt146\rightarrow tt143)=\frac{8\times 1000}{0.001869}=4.28 \textit{ Mbps}$$ Measurements with the *pathload* utility ran into periodic troubles, even though all necessary ports had been opened. In the direction $tt146\rightarrow tt143$ the program did not produce any results despite all our attempts; it idled while filling the channel with packet chains. The *pathload* results give a large spread of values, clearly beyond the capacity of the investigated channel. The other measurements, with the *pathChirp* and *IGI* utilities, were also unsuccessful: the programs gave errors and refused to measure the available bandwidth. Therefore, it was decided to compare the results obtained by different methods with data obtained directly from the router.
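The averaging procedure just described can be sketched in a few lines; `delay_pairs` is a hypothetical list of (large-packet delay, small-packet delay) tuples in seconds:

```python
def available_bandwidth_mbps(delay_pairs, delta_w_bytes=1000):
    """B_av = dW / mean(D2 - D1), Eqn (4), with the result in Mbps.

    delay_pairs:   (large-packet delay, small-packet delay) tuples, seconds
    delta_w_bytes: packet-size difference, 1100 - 100 = 1000 bytes here
    """
    diffs = [d_large - d_small for d_large, d_small in delay_pairs]
    mean_diff = sum(diffs) / len(diffs)
    return delta_w_bytes * 8 / mean_diff / 1e6
```

With the averaged difference of 0.000815 s this returns the 9.8 Mbps quoted above for $tt143\rightarrow tt146$, and 0.001869 s gives the 4.28 Mbps of the reverse direction.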
The *traceroute* utility identified the “bottleneck” of the route between SSAU and the Institute of Organic Chemistry at the Russian Academy of Sciences. It was the external SSAU router, whose bandwidth was limited to $30$ [*Mbps*]{}. An SNMP agent collected statistics from the SSAU border router. All data are presented in Table \[t3\] together with the time of each experiment.

  N   Date         Direction                  Available bandwidth (RIPE Test Box)   Available bandwidth (*pathload*)   Data from router
  --- ------------ -------------------------- ------------------------------------- ---------------------------------- --------------------------
  1   13.01.2010   $tt143\rightarrow tt146$   $10.0\pm2.2$ [*Mbps*]{}               $21.9\pm14.2$ [*Mbps*]{}           $12.1\pm2.5$ [*Mbps*]{}
  2   13.01.2010   $tt146\rightarrow tt143$   $4.4\pm1.2$ [*Mbps*]{}                                                   $7.8\pm3.8$ [*Mbps*]{}
  3   23.01.2010   $tt143\rightarrow tt146$   $20.3\pm5.1$ [*Mbps*]{}               $41.2\pm14.0$ [*Mbps*]{}           $18.7\pm1.1$ [*Mbps*]{}
  4   23.01.2010   $tt146\rightarrow tt143$   $9.3\pm2.7$ [*Mbps*]{}                                                   $11.3\pm2.6$ [*Mbps*]{}
  5   06.02.2010   $tt143\rightarrow tt146$   $9.2\pm1.4$ [*Mbps*]{}                $67\pm14$ [*Mbps*]{}               $12.0\pm2.0$ [*Mbps*]{}
  6   06.02.2010   $tt146\rightarrow tt143$   $3.5\pm1.2$ [*Mbps*]{}                                                   $4.5\pm2.0$ [*Mbps*]{}

Table \[t3\] shows that the results obtained by our method and the router data are in good agreement, while the *pathload* measurements differ. The study of the statistical type of delay [@s3] explains why this happens: the dispersion of the measurement results indicates the presence of a variable delay component $d_{var}$. This utility uses Self-Loading Periodic Streams (SLoPS), like most others.
This method consists in generating a chain of packets at an excessive rate, so that packet delivery times increase significantly due to long queues at the routers. The transmitter then starts to reduce the packet generation rate until the queue disappears. The process is repeated until the average packet generation rate approaches the available bandwidth. The main disadvantage of this technique is unreliable measurements, because it does not take into account the influence of the variable part of the delay. This is the reason for the implausible $90$ [*Mbps*]{} *pathload* result for a channel with a $30$ [*Mbps*]{} capacity. The required number of measurements =================================== The main disadvantage of most modern tools is a large spread in the measured values of available bandwidth. The measurement mechanisms of throughput utilities do not take into account the effect of the variable part of the delay. Unfortunately, none of the existing utilities provides a compensation mechanism for the random delay component. Any method that gives accurate results should contain a mechanism for smoothing the impact of $d_{var}$. In order to understand the effect of the variable part on the measurement results we turn to the following experiment. A series of measurements was made between the RIPE Test Boxes tt01.ripe.net (Amsterdam, Holland) and tt143.ripe.net (Samara State Aerospace University, Russian Federation). About 3000 delay values were collected for packet sizes of 100 and 1024 bytes in both directions. Using the presented method, the available bandwidth was calculated for cases where the averaging is performed over 20, 50 and 100 pairs of values. Fig. \[f3\] shows the available bandwidth calculated for the various averaging conditions. ![image](il3){height="8cm"} As the figure shows, fluctuations of the calculated available bandwidth remain critical when 20 values are averaged.
At 50 they are less noticeable, and at 100 values the curve is almost smooth. There is a clear correlation between the number of measurements and the variation of the calculated available bandwidth. The beats are caused by the variable part of the delay; its influence decreases as the number of measurements grows. In this section the necessary number of measurements is calculated using two methods: from experimental data of the RIPE Test Box, and by simulation, knowing the distribution type for network delay. Based on data obtained from the tt01 and tt143 Boxes we computed standard deviations (SD) $\sigma_n(B)$ of the available bandwidth. The data are presented in Table \[t4\] and graphically depicted in Fig. \[f4\].

  Number of measurements, $n$                       5      10     20     30    40    50    70    100   200   300
  ------------------------------------------------- ------ ------ ------ ----- ----- ----- ----- ----- ----- -----
  Standard deviation, $\sigma_n(B)$ ([*Mbps*]{})    22.2   14.9   10.2   8.3   7.3   6.7   5.7   4.9   2.9   2.3

The average value of available bandwidth, $B_{av}$ ([*Mbps*]{}) ![image](il4){height="7.5cm"} Figure \[f4\] shows that it is necessary to take at least 50 measurements (the delay difference for 50 pairs of packets). In this case, the calculated value exceeds twice the SD, i.e.: $B\geq 2\sigma_n(B)$. A more accurate result can be obtained using a generating function to describe the packet delays. In paper [@s3] it is shown that the delay distribution is described by an exponential law and the following generating function can be used for delay emulation: $$D=D_{min}+W/B-(1/\lambda)ln(1-F(D,W)), \label{e7}$$ where $\lambda=1/(D_{av}-D_{min})$. The function $F(W,D)$ is a standard random number generator in the interval $[0;1)$. Knowledge of the generating function allows calculating the tabulated values of $\eta^{T}_{n}$ from Eqn. \[e5\].
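Eqn. (\[e7\]) can be turned directly into a delay emulator. A minimal sketch, with illustrative parameter values (not those of the experiment):

```python
import math
import random

def gen_delay(d_min_s, w_bytes, b_bps, lam, rng):
    """One end-to-end delay from the exponential model of Eqn (7):
    D = D_min + W/B - (1/lambda) * ln(1 - F), with F uniform in [0, 1)."""
    f = rng.random()
    return d_min_s + (w_bytes * 8) / b_bps - math.log(1.0 - f) / lam

rng = random.Random(42)
# 1000-byte packets over a 10 Mbps path with lambda = 1000 1/s,
# i.e. a 1 ms mean variable component on top of a 10.8 ms fixed part
samples = [gen_delay(0.01, 1000, 10e6, 1000.0, rng) for _ in range(5000)]
```

The sample mean approaches $D_{min} + W/B + 1/\lambda \approx 11.8$ ms; the spread of the variable component around it is exactly what the averaging discussed above is designed to suppress.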
First, the standard deviation $\sigma^{T}_{n}(D_2-D_1)$ of the delay difference is found, taking $\lambda^T=1000\ s^{-1}$. The calculation is carried out for the following values: $\Delta W^T=W_2-W_1=1000$ [*bytes*]{}, $B^T=10$ [*Mbps*]{}, which corresponds to $D^{T}_{2}-D^{T}_{1}=8\cdot 10^{-4}$ [*s*]{}.

  Number of measurements, $n$                              5       10      20      30      50      100     200
  -------------------------------------------------------- ------- ------- ------- ------- ------- ------- -------
  Standard deviation, $\sigma^{T}_{n}(D_2-D_1)$ ([*ms*]{})   0.661   0.489   0.354   0.284   0.195   0.111   0.075

For the $\sigma^{T}_{n}(D_2-D_1)$ values from Table \[t5\], the corresponding values of $\eta^{T}_{n}$ can be found (see Table \[t6\]).

  Number of measurements, $n$             5      10     20     30     50     100    200
  --------------------------------------- ------ ------ ------ ------ ------ ------ -----
  Measurement error, $\eta^{T}_{n}$ (%)   82.6   61.1   44.2   35.5   24.4   13.9   9.4

  : Dependence of error on the number of measurements[]{data-label="t6"}

In a real experiment the measured quantities $\lambda^{exp}$, $D^{exp}_{2}-D^{exp}_{1}$ and $B^{exp}$ take arbitrary values, but correction factors make it easy to calculate the required number of measurements: $$\eta^{T}_{n}=k(D_2-D_1)\cdot k(\lambda)\cdot \eta^{exp}_{n}, \label{e8}$$ where $k(\lambda)=\lambda^{exp}/\lambda^T$ and $k(D_2-D_1)=(D^{exp}_{2}-D^{exp}_{1})/(D^{T}_{2}-D^{T}_{1})$. Substituting into Eqn. \[e8\] the values of the coefficients $k(D_2-D_1)$, $k(\lambda)$ and the desired measurement accuracy $\eta^{exp}$, we compare the obtained values with the tabulated $\eta^{T}_{n}$ and find the number of measurements $n$ required to achieve the given error.
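Combining Eqn. (\[e8\]) with Table \[t6\] gives a simple lookup for the required number of measurements. The sketch below copies the tabulated values; the search strategy itself is our illustration, not part of the authors' tooling:

```python
# Tabulated error eta^T_n (%) versus number of measurements n (Table t6)
ETA_T = {5: 82.6, 10: 61.1, 20: 44.2, 30: 35.5, 50: 24.4, 100: 13.9, 200: 9.4}

def required_measurements(eta_exp_pct, k_lambda, k_dd):
    """Smallest tabulated n achieving the desired error eta_exp.

    Eqn (8): eta^T_n = k(D2-D1) * k(lambda) * eta^exp_n, so we need the
    smallest n with eta^T_n <= k_lambda * k_dd * eta_exp.
    """
    target = k_lambda * k_dd * eta_exp_pct
    for n in sorted(ETA_T):
        if ETA_T[n] <= target:
            return n
    return None  # the desired accuracy needs more than 200 measurements
```

For example, with both correction factors equal to one, a 25% target error requires 50 measurements, consistent with Table \[t6\].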
Conclusion ========== In this paper we found a way to measure available bandwidth from the delay data that can be collected by the RIPE Test Box. The method compares the average end-to-end delays of packets of different sizes and calculates the available bandwidth from their difference. We carried out a further study of the model and found the limits of its applicability, which depend on the accuracy of the delay measurements. Experimental results were obtained with our method and with an alternative one; the *pathload* utility was selected as the benchmark tool. The paper shows that the accuracy of the available bandwidth calculation depends on the variable delay component, $d_{var}$. Experiments and computer simulations using the delay generating function were conducted. They showed that achieving a given error requires averaging a large number of measurements, and we established the relationship between the number of measurements and the resulting accuracy. In the future we plan to implement this method in the measurement infrastructure of the RIPE Test Box. Acknowledgment {#acknowledgment .unnumbered} ============== We would like to express special thanks to Prasad Calyam and Gregg Trueb from the University of Ohio, Professor Richard Sethmann, Stephan Gitz and Till Schleicher from Hochschule Bremen University of Applied Sciences, and Dmitry Sidelnikov from the Institute of Organic Chemistry at the Russian Academy of Sciences for their invaluable assistance in the measurements. We would also like to thank all the technical support staff of the RIPE Test Box, especially Ruben fon Staveren and Roman Kaliakin, for constant help with questions concerning the measurement infrastructure. [14]{} Platonov A.P., Sidelnikov D.I., Strizhov M.V., Sukhov A.M., Estimation of available bandwidth and measurement infrastructure for Russian segment of Internet, Telecommunications, 2009, 1, pp.11-16.
Choi, B.-Y., Moon, S., Zhang, Z.-L., Papagiannaki, K. and Diot, C.: Analysis of Point-To-Point Packet Delay In an Operational Network. In: Infocom 2004, Hong Kong, pp. 1797-1807 (2004). A.M. Sukhov, N Kuznetsova, What type of distribution for packet delay in a global network should be used in the control theory? 2009; arXiv: 0907.4468. Padhye, J., Firoiu, V., Towsley, D., Kurose, J.: Modeling TCP Throughput: A Simple Model and its Empirical Validation. In: Proc. SIGCOMM Symp. Communications Architectures and Protocols, pp. 304-314 (1998). Dovrolis C., Ramanathan P., and Moore D., Packet-Dispersion Techniques and a Capacity-Estimation Methodology, IEEE/ACM Transactions on Networking, v.12, n. 6, December 2004, p. 963-977. Downey A.B., Using Pathchar to estimate internet link characteristics, in Proc. ACM SICCOMM, Sept. 1999, pp. 222-223. Ripe Test Box, http://ripe.net/projects/ttm/ Jacobson, V. Congestion avoidance and control. In Proceedings of SIGCOMM 88 (Stanford, CA, Aug. 1988), ACM. Prasad R.S., Dovrolis C., and B. A. Mah B.A., The effect of layer-2 store-and-forward devices on per-hop capacity estimation, in Proc. IEEE INFOCOM, Mar. 2003, pp. 2090-2100. Jain, M., Dovrolis, K.: End-to-end Estimation of the Available Bandwidth Variation Range. In: SIGMETRICS’05, Ban, Alberta, Canada (2005). Crovella, M.E. and Carter, R.L.: Dynamic Server Selection in the Internet. In: Proc. of the Third IEEE Workshop on the Architecture and Implementation of High Performance Communication Subsystems (1995). N. Hohn, D. Veitch, K. Papagiannaki and C. Diot, Bridging Router Performance And Queuing Theory, Proc. ACM SIGMETRICS, New York, USA, Jun 2004. Kleinrock, L. Queueing Systems, vol. II. John Wiley & Sons, 1976. Georgatos, F., Gruber, F., Karrenberg, D., Santcroos, M., Susanj, A., Uijterwaal, H. and Wilhelm R., Providing active measurements as a regular service for ISP’s. In: PAM2001.
--- abstract: 'Wide-area data and algorithms in large power systems are creating new opportunities for the implementation of measurement-based dynamic load modeling techniques. These techniques improve the accuracy of dynamic load models, which are an integral part of transient stability analysis. Measurement-based load modeling techniques commonly assume that response error is correlated with system or model accuracy. Response error is the difference between simulation output and phasor measurement unit (PMU) samples. This paper investigates similarity measures, output types, simulation time spans, and disturbance types used to generate response error, and the correlation of the response error to system accuracy. This paper aims to address two hypotheses: 1) can response error determine the total system accuracy? and 2) can response error indicate if a dynamic load model being used at a bus is sufficiently accurate? The results of the study show that only specific combinations of metrics yield statistically significant correlations, and that there is no clear pattern among the combinations that do. Less than 20% of all simulated tests in this study resulted in statistically significant correlations. These outcomes highlight concerns with common measurement-based load modeling techniques, raising awareness of the importance of careful selection and validation of similarity measures and response output metrics. Naive or untested selection of metrics can deliver inaccurate and misleading results.' author: - 'Phylicia Cicilio,  and Eduardo Cotilla-Sanchez,  [^1][^2]' bibliography: - 'ref.bib' title: 'Evaluating Measurement-Based Dynamic Load Modeling Techniques and Metrics' --- Introduction ============ The introduction of phasor measurement units (PMUs) and advanced metering infrastructure (AMI) has ushered in the era of big data for electrical utilities.
The ability to capture high-resolution data from the electrical grid during disturbances enables more widespread use of measurement-based estimation techniques for the validation of dynamic models such as loads. Transient stability studies use dynamic load models. These studies are key to ensuring electrical grid reliability and are leveraged for planning and operation purposes [@Shetye]. It is imperative that dynamic load models be as representative of the load behavior as possible to ensure that transient stability study results are accurate and useful. However, developing dynamic load models is challenging, as they attempt to represent uncertain and changing physical and human systems in an aggregate model. Several methods exist for determining load model parameters, such as measurement-based techniques using power systems sensor data [@Kim; @Kontis; @Zhang; @Renmu; @Choi_2; @Ma_2], and methods that use parameter sensitivities and trajectory sensitivities [@Choi; @Kim; @Son; @Ma; @Zhang]. A common practice in measurement-based techniques is to take system response outputs, such as bus voltage magnitude, from PMU data and simulation data and compare them with a similarity measure, such as Euclidean distance. The error between PMU data and simulation output is referred to as response error in this paper. As investigated in [@siming_thesis], the underlying assumption that reducing response error results in a more accurate model and system does not always hold. This paper examines the relationship between response error and system and model accuracy to highlight concerns with common measurement-based technique practices. The methods used in this study examine whether the load model selected at a given bus is accurate. Measurement-based techniques typically perform dynamic load model parameter tuning to improve accuracy.
In parameter tuning, significant inter-dependencies and sensitivities exist between many dynamic load model parameters [@Choi; @Kim; @Son; @Ma; @Zhang], which is one of the reasons why dynamic load model parameter tuning is challenging. Instead of parameter tuning, this study compares the selection of two load models: the dynamic composite load model (CLM) and the static ZIP model. The static ZIP model is the default load model chosen by power system simulators and represents loads with constant impedance, current, and power. The CLM has become an industry standard, particularly in the western United States; it represents aggregate loads including induction machine motor models, the ZIP model, and power electronics [@Kosterev; @Renmu]. Load model selection is changed in order to compare the known differences between the responses of motor-based load models (the CLM) and static load models (the ZIP model). By comparing load model selection, the presence of a correlation between response error and system accuracy can be assessed. This study performs two experiments to address two main hypotheses. The first experiment is a system level experiment to test hypothesis 1: can response error determine the total system accuracy, i.e., how many load models at buses in the system are accurate? The second experiment is a bus level experiment to test hypothesis 2: can response error indicate whether the load model being used at a bus is accurate? The results from these experiments demonstrate that it cannot be assumed that response error and system accuracy are correlated. The main contribution of this paper is to identify the need for validation of techniques and metrics used in dynamic load modeling, as frequently used metrics can deliver inaccurate and meaningless results. The remainder of this paper is organized as follows. Section II discusses the use of dynamic load models in industry and those used in this paper.
In Section III, similarity measures are discussed in relation to power systems time series data. Section IV details the methodology used to evaluate the system level experiment of hypothesis 1. Section V provides and discusses the results from the system level experiment. Section VI details the methodology used to evaluate the bus level experiment of hypothesis 2. These results are provided and discussed in Section VII. In conclusion, Section VIII discusses the implications of the results found in this study and calls attention to the importance of careful selection and validation of measurement-based technique metrics. Similarity Measures {#similarity_section} =================== A similarity measure quantifies how similar data objects, such as time series vectors, are to each other. A key component of measurement-based techniques is to use a similarity measure to calculate the response error. Typically, an optimization or machine learning algorithm then reduces this response error to improve the models or parameters in the system. Several measurement-based dynamic load model estimation studies employ Euclidean distance as a similarity measure [@Renmu; @Visconti; @Kong]. However, there are characteristics of power systems time series data which should be ignored or not emphasized, such as noise, that are instead captured by Euclidean distance. Power system time series data characteristics include noise, initialization differences, and oscillations at different frequencies. These characteristics result in shifts and stretches in output amplitude and time, as detailed in Table \[similarity\_measures\].
                Amplitude                                      Time
  ------------- ---------------------------------------------- ---------------------------------------
  **Shift**     initialization differences, discontinuities    different/unknown initialization time
  **Stretch**   noise                                          oscillations at different frequencies

  : Examples of amplitude and time shifting and stretching [@siming_thesis][]{data-label="similarity_measures"}

The characteristics listed in Table \[similarity\_measures\] are the effect of specific phenomena in the system. For example, differences in control parameters in motor models, and potentially also playback between motor models, can cause oscillations at different frequencies. Certain changes in output are important to capture as they have reliability consequences for utilities. An increase in the initial voltage swing after a disturbance can trip protection equipment. An increase in the time it takes for the frequency to cross or return to 60 Hz in the United States has regulatory consequences resulting in fines. The response error produced by similarity measures should capture these important changes. Other changes to the output, such as noise, should be ignored. Some characteristics listed in Table \[similarity\_measures\] arise differently when comparing simulation data to simulation data versus comparing simulation data to PMU data. Comparing simulation data to simulation data occurs in theoretical studies, while comparing simulation data to PMU data is the application for utilities. Initialization differences and differences in initialization time can occur when comparing simulation data to PMU data due to the difficulty in perfectly matching steady-state values. However, when comparing simulation data to simulation data, initialization differences and differences in initialization time likely highlight errors in the simulation models, parameters, or values.
Similarity measures can be invariant to time shift and stretch or to amplitude shift and stretch. Table \[similarity\_measures2\] lists the similarity measures examined in this study with their corresponding invariances. These similarity measures are chosen to test sensitivity to all four quadrants of Table \[similarity\_measures\].

  Similarity measure        Amplitude Shift   Amplitude Stretch   Time Shift   Time Stretch
  ------------------------- ----------------- ------------------- ------------ --------------
  Euclidean Distance                                                           
  Manhattan Distance                                                           
  Dynamic Time Warping                                            $\bullet$    $\bullet$
  Cosine Distance                             $\bullet$                        
  Correlation Coefficient   $\bullet$         $\bullet$                        

Euclidean distance and Manhattan distance are norm-based measures which are variant to time and amplitude shifting and stretching. Euclidean distance is one of the most commonly used similarity measures in measurement-based techniques. These norm-based distances can range from 0 to $\infty$. The cosine similarity takes the cosine of the angle between two vectors to determine their similarity. Because it uses only the angle between the vectors, this measure is invariant to amplitude stretching [@siming_thesis]. This similarity can range from -1 to 1. The Pearson correlation coefficient is invariant to amplitude shifting and stretching and also ranges from -1 to 1 [@siming_thesis]. Dynamic time warping (DTW) identifies the path between two vectors of lowest cumulative Euclidean distance by shifting the time axis. DTW is invariant to local and global time shifting and stretching [@Kong]; the DTW algorithm used in this study is only invariant to time shifting. DTW can range from 0 to $\infty$. Figures \[example\_plots\] and \[example\_comparison\] show how amplitude and time shifting and stretching affect the error produced by the similarity measures. The time series plots in Figure \[example\_plots\] show a sine wave with a corresponding amplitude or time shift or stretch.
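The invariances in Table \[similarity\_measures2\] can be checked with straightforward implementations. The following plain-Python sketch (our illustration, not the study's actual code) implements the five measures, using an absolute-difference local cost for DTW:

```python
import math

def euclidean(a, b):
    """L2 distance; variant to all four shift/stretch transformations."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """L1 distance; likewise variant to all four transformations."""
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_distance(a, b):
    """1 - cosine similarity; invariant to amplitude stretch (scaling)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def correlation_distance(a, b):
    """1 - Pearson correlation; invariant to amplitude shift and stretch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1.0 - cov / (sa * sb)

def dtw(a, b):
    """Classic O(n*m) dynamic time warping; invariant to time shift."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Feeding a signal and its scaled, offset, or time-shifted copies through these functions reproduces the qualitative pattern discussed below: cosine distance vanishes under amplitude stretch, correlation distance under amplitude shift and stretch, and DTW under time shift.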
The similarity measures calculate the difference between each of the time series subplots. The error generated by each similarity measure is normalized for comparison. The error is normalized separately for each similarity measure, so that the errors from the amplitude and time shifts and stretches sum to one. Figure \[example\_comparison\] compares the error results for each of the subplot scenarios. [0.22]{} ![Example time series with amplitude and time shift and stretch[]{data-label="example_plots"}](Images/Sin_Data_Plot_Amplitude_Stretch.eps "fig:"){height="1.3in"}   [0.22]{} ![Example time series with amplitude and time shift and stretch[]{data-label="example_plots"}](Images/Sin_Data_Plot_Amplitude_Shift.eps "fig:"){height="1.3in"} [0.22]{} ![Example time series with amplitude and time shift and stretch[]{data-label="example_plots"}](Images/Sin_Data_Plot_Time_Stretch.eps "fig:"){height="1.3in"} [0.22]{} ![Example time series with amplitude and time shift and stretch[]{data-label="example_plots"}](Images/Sin_Data_Plot_Time_Shift.eps "fig:"){height="1.3in"} ![Comparison of similarity measures[]{data-label="example_comparison"}](Images/error_comparsion_bar.eps){width="1\linewidth"} The results in Figure \[example\_comparison\] demonstrate the behavior of each similarity measure. The similarity measures are denoted as: Euclidean distance (ED), Manhattan distance (MH), dynamic time warping (DTW), cosine distance (COS), and correlation coefficient (COR). The correlation coefficient produces negligible error under both amplitude shift and stretch. Cosine distance produces negligible error under amplitude stretch. Dynamic time warping produces negligible error under time shift. These results provide an example of what can be expected when these measures are used with simulation or PMU time series data.
System Level Experiment Methodology =================================== The system level experiment is set up to determine whether system response error can determine the total system accuracy. This addresses the question: is it possible to determine whether any models, or approximately what percentage of models, in the system are inaccurate and need to be updated, without testing at each individual bus? This is determined by calculating the correlation between system accuracy, as defined in Equation \[system\_accuracy\], and the system response error described below. The experiment is performed on the RTS96 test system [@RTS96; @Lassetter] using Siemens PSS/E software. Fourteen CLMs are randomly placed on loads in the system, enhancing the RTS96 case to create a load model benchmark system. The remaining 37 loads are modeled with the static ZIP load model. Test systems are generated by replacing some ZIP load models from the benchmark system with CLMs in the test system, and some CLMs in the benchmark system with ZIP load models in the test system. Switching load models creates “inaccurate” and “accurate” load models as a method of changing the accuracy of the system. The “inaccurate” load models are those in the test system that differ from the benchmark system. The buses with the same load models in the test system and benchmark system have “accurate” load models. Switching these load models also creates different responses, as described in Section I. A hundred benchmark and test systems are created using randomized placement of the CLMs, based on a uniform random distribution, to reduce the sensitivity of the results to the location of the CLMs in the system. The percentage of buses in the test system with accurate load models is called the system accuracy. System accuracy is defined in Equation \[system\_accuracy\] and is also used in the bus level experiment.
$$\label{system_accuracy} \textnormal{accuracy}_{\textnormal{system}} = \frac{\textnormal{number of buses with accurate load models}}{\textnormal{total number of load buses}} \times 100\%$$ An example benchmark and test system pair at 50% system accuracy will have half of the CLMs removed from the benchmark system; the removed CLMs are replaced with ZIP load models. System accuracy quantifies how many dynamic load models in the system are accurate. Accurate dynamic load models in the test systems are those models which are the same as those in the benchmark system. A bus fault is used to create a dynamic response in the system. Over a hundred simulations are performed where the location of the fault is randomized, to reduce the sensitivity to fault location relative to CLM location. The bus fault is performed by applying a three-phase to ground fault with a duration of 0.1 s. During this fault, there is an impedance change at the faulted bus causing the voltage to drop at the bus and a change in power flows throughout the system. The fault is cleared 0.1 seconds after it is created, and the power flow returns to a steady state. The outputs captured from the simulations are voltage magnitude, voltage angle, and frequency from all of the load buses, and line-flow active power and reactive power. The output from the benchmark system is compared to the test systems using the similarity measures outlined in Section \[similarity\_section\]. The response error generated by DTW, cosine distance, and correlation coefficient is a single measure for the entire time span of each output at each bus. The response error from Manhattan and Euclidean distance is generated at every time step in the time span. The error at each time step is then summed across the time span to create a single response error similar to the other similarity measures. The generation of response error for Manhattan and Euclidean distance is shown by Equation \[response\_accuracy\].
$$\label{response_accuracy} \textnormal{error}_{\textnormal{response}} = \sum_{t=1}^{T} s[t]$$ Similar to response error, system response error is calculated from the difference between the output of buses between the benchmark and test systems. However, system response error is a single metric: the sum of all the response errors from each bus. Three time spans are tested: 3 seconds, 10 seconds, and 30 seconds. The disturbance occurs at 0.1 seconds and is cleared at 0.2 seconds for all the scenarios. These time spans are chosen to test the sensitivity to the transient event occurring in the first 3 seconds, and the sensitivity to the dynamic responses out to 30 seconds. The Pearson correlation coefficient between system accuracy and system response error is calculated, together with the Student's t-test, to determine the relationship between the two. The Student's t-test is a statistical test to determine if two groups of results being compared have means which are statistically different. The outputs of the Pearson correlation analysis are the r-value and the p-value. The r-value denotes the direction and strength of the relationship. R-values range from -1 to 1, where -1 to -0.5 signifies a strong negative relationship and 0.5 to 1 signifies a strong positive relationship between the groups. For this experiment, a strong negative relationship implies that as the system accuracy increases the system response error decreases. This is the relationship typically assumed by those performing measurement-based techniques. The p-value determines whether the two results are different. A p-value of less than 0.05 signifies a statistically significant difference between the two groups of results being compared. Therefore, a p-value less than 0.05 signifies a statistically significant relationship quantified by the r-value. 
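As a concrete illustration of Equation \[response\_accuracy\] and the correlation test described above, the pipeline can be sketched in a few lines (a minimal example with synthetic data, assuming NumPy and SciPy are available; the arrays and their values are hypothetical, not the paper's simulation outputs):

```python
import numpy as np
from scipy import stats

def manhattan_response_error(benchmark, test):
    """Sum |benchmark[t] - test[t]| over the time span, as in Eq. (response_accuracy),
    where s[t] is the per-time-step Manhattan distance between the two outputs."""
    return np.sum(np.abs(np.asarray(benchmark) - np.asarray(test)))

# Synthetic example: one system response error per benchmark/test pair,
# plotted against system accuracy, for a hypothetical set of test systems.
rng = np.random.default_rng(0)
system_accuracy = np.linspace(0, 100, 20)            # percent accurate load models
system_response_error = 100 - system_accuracy + rng.normal(0, 5, 20)

r, p = stats.pearsonr(system_accuracy, system_response_error)
# A strong negative relationship has r in [-1, -0.5]; p < 0.05 marks significance.
print(round(r, 3), p < 0.05)
```

With this deliberately well-behaved synthetic data the r-value is strongly negative and the p-value is significant; the experiments below test whether the real simulation outputs behave this way.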
System Level Experiment Results =============================== In this section, the correlation between response error and system accuracy is calculated to evaluate the ability of various time spans, output types, and similarity measures to predict system accuracy as used in measurement-based techniques. Example outputs from these results are visualized in Figures \[high\_accuracy\] and \[low\_accuracy\]. The plots compare the reactive power time series data from a bus in the benchmark system and test systems at two levels of system accuracy in a system undergoing a bus fault at the same bus. Figure \[low\_accuracy\] shows the benchmark and test system responses with low system accuracy, 8%. Figure \[high\_accuracy\] shows the responses with high system accuracy, 92%. ![Reactive Power Time Series Plot of Low System Accuracy and High Response Error with Generator Outage[]{data-label="high_accuracy"}](Images/Q_timeseries_low.eps){width=".9\linewidth"} ![Reactive Power Time Series Plot of High System Accuracy and Low Response Error with Generator Outage[]{data-label="low_accuracy"}](Images/Q_timeseries_high_clean.eps){width=".9\linewidth"} The response from the high system accuracy test system has a better curve fit to the benchmark system than the low system accuracy test system. This visual comparison confirms that, with an appropriate similarity measure, the response error should decrease as system accuracy increases. The results from all the simulations, determining the correlation between system accuracy and response error grouped by the metrics used, are shown as r-values in Figure \[R\_LO\_3\]. R-values of less than -0.5 are highlighted in orange to show they represent a strong relationship. R-values greater than -0.5, which do not indicate a strong relationship, are in white. All resulting p-values are found to be lower than 0.05, meaning all r-value relationships are statistically significant. 
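The single-value similarity measures referred to above (cosine distance and the correlation coefficient) can be illustrated on a pair of synthetic benchmark/test response curves; SciPy's distance functions are one possible implementation, and the signal shapes below are invented for illustration only:

```python
import numpy as np
from scipy.spatial.distance import cosine, correlation

t = np.linspace(0, 3, 300)                               # 3 s time span
benchmark = 1 + 0.30 * np.exp(-2.0 * t) * np.sin(10 * t)  # settles to steady state
test_good = 1 + 0.28 * np.exp(-2.0 * t) * np.sin(10 * t)  # near-accurate model
test_bad  = 1 + 0.30 * np.exp(-0.5 * t) * np.sin(6 * t)   # inaccurate model

# Each measure condenses the whole time span into one response error;
# a better curve fit should give a smaller distance to the benchmark.
for name, fn in [("cosine", cosine), ("correlation", correlation)]:
    print(name, fn(benchmark, test_good), fn(benchmark, test_bad))
```

On these synthetic curves the near-accurate model yields the smaller distance under both measures, matching the visual intuition from Figures \[high\_accuracy\] and \[low\_accuracy\].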
The similarity measures listed in the plots use the same abbreviations as in Figure \[example\_comparison\]. The output types listed in the plots are abbreviated with: voltage angle (ANG), voltage magnitude (V), frequency (F), line active power flow (P), and line reactive power flow (Q). [0.5]{} ![Bus fault R-values for system level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="R_LO_3"}](Images/BF_3s_new.eps "fig:"){height="1.6in"}   [0.5]{} ![Bus fault R-values for system level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="R_LO_3"}](Images/BF_10s_new.eps "fig:"){height="1.6in"} [0.5]{} ![Bus fault R-values for system level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="R_LO_3"}](Images/BF_30s_new.eps "fig:"){height="1.6in"} Out of the 75 combinations of metrics tested in this experiment, only 12% yielded strong correlation relationships. Considering the visual verification from Figures \[high\_accuracy\] and \[low\_accuracy\] that response error should indeed decrease as system accuracy increases, the lack of strong negative correlations seen in Figure \[R\_LO\_3\] is concerning. Only the three and ten-second time span simulations have strong correlation relationships; none of the thirty-second scenarios do. During a thirty-second simulation, the last ten to thirty seconds of the output response will flatten to a steady-state value. Therefore, in a thirty-second simulation there are many error data points that contain flat steady-state responses, limiting curve-fitting opportunities and weakening the correlation relationship. This can explain why none of the thirty-second scenarios have strong relationships. The distribution of the r-values from the overall strongest correlation relationship, with an r-value of -0.5199, is examined to further investigate the correlation results. 
Figure \[distribution\] visualizes the distribution of the response error for this r-value at the tested levels of system accuracy. ![R-value distribution[]{data-label="distribution"}](Images/Distribution_sample.eps){width="1\linewidth"} The response error in Figure \[distribution\] is normalized for a clearer comparison. A general negative correlation is seen, where there is lower response error at higher system accuracy. However, there are several outliers in the data preventing a stronger overall correlation, particularly between system accuracy levels 0% and 70%. This suggests that at lower system accuracy levels the correlation is weaker than in the overall distribution. To test this, the correlation within system accuracy ranges is calculated to highlight where the weakest correlation regions exist. Table \[correlation\_ranges\] outlines the correlation at the following system accuracy ranges. [p[1.5cm]{} p[1.5cm]{} p[1.5cm]{} p[1.5cm]{}]{}\ 0-30% & 38%-54% & 62%-77% & 84%-100%\ \ -0.0632 & -0.2964 & 0.0322 & -0.4505\ As seen in Table \[correlation\_ranges\], the correlation is greatly degraded at the low system accuracy ranges, even reversing the r-value relationship from negative to positive between levels 62% and 77%. An ideal scenario would have a constant strong negative correlation through all system accuracy levels. This highlights a potential low effectiveness of measurement-based techniques under these testing conditions at low system accuracy levels. Overall, the results from this experiment highlight the lack of correlation between response error and system accuracy across all metrics. The application of the system level experiment is to use any of the metric combinations that showed strong negative relationships in a measurement-based optimization program. Such an optimization program could change the dynamic load models in the system to reduce system response error in order to improve system accuracy. 
However, in order for such an optimization program to successfully improve system accuracy, there needs to be a strong negative correlation between system accuracy and system response error. Additionally, even with an overall strong negative correlation, Table \[correlation\_ranges\] shows that such an optimization program may mistake a local minimum at a lower accuracy level for the global minimum, due to the weaker correlation found at lower accuracy levels. These results identify the need for measurement-based techniques, and potentially other power systems time series data curve fitting techniques, to evaluate the assumption that the system response error is correlated to the system accuracy. It cannot be assumed that measurement-based techniques using similarity measures yield meaningful results. Any optimization or other estimation technique using the reduction of system response error will not yield accurate results or findings without a strong correlation between system response error and system accuracy. Bus Level Experiment Methodology ================================ The bus level experiment is set up to determine whether the response error from an individual load bus can indicate if the load model being used at the bus is accurate. In comparison to the system level experiment, which looked at system-wide model accuracy, this experiment looks at model accuracy at the bus level. The results of this experiment are the p-values from the Student's t-test, indicating whether there is a statistical difference between the response error from buses with accurate and inaccurate load models. The p-value determines whether the two results are different. A p-value of less than 0.05 signifies a statistically significant difference between the two groups of results being compared. The same system and system setup are used in this experiment as in the system level experiment. 
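The accurate-versus-inaccurate comparison just described reduces to a two-sample t-test on binned response errors. A minimal sketch with synthetic bins (the group means, spreads, and sizes are hypothetical, chosen only to show the mechanics):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical response errors, binned by load-model accuracy at each bus.
errors_accurate_models = rng.normal(loc=1.0, scale=0.3, size=50)
errors_inaccurate_models = rng.normal(loc=2.0, scale=0.3, size=50)

t_stat, p_value = stats.ttest_ind(errors_accurate_models, errors_inaccurate_models)
# p < 0.05: the two groups of response errors are statistically distinguishable,
# i.e. response error at a bus carries information about its load-model accuracy.
print(p_value < 0.05)
```

If the two bins overlap heavily, as many real metric combinations do in the results below, the p-value rises above 0.05 and the response error cannot separate accurate from inaccurate models.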
This experiment excludes the line flow active power and reactive power outputs, retaining the previously used outputs of frequency, voltage angle, and voltage magnitude at the buses. In this experiment the simulations are performed at various levels of system accuracy to reduce the sensitivity of the results to the system accuracy. By reducing the sensitivity of the results to fault placement and system accuracy, the results isolate the correlation between response error and load model accuracy. All other metrics remain the same as in the system level experiment. The response errors from all the simulations are compared by output type, time span, and similarity measure, and binned into groups of buses with accurate load models and buses with inaccurate load models. A t-test is performed on the binned response errors to determine if there is a statistically significant difference between the errors from buses with accurate load models and buses with inaccurate load models. The results of this experiment are the p-values from the response errors separated by disturbance scenario, output type, time span, and similarity measure. Bus Level Experiment Results ============================ The bus level experiment tests whether there is a statistical difference in the response error at individual buses according to the accuracy of the load models at those buses. The p-values are calculated using response error from the output types, time spans, and similarity measures. Figure \[Bus\_level\_BF\] shows these p-values. 
[0.5]{} ![Bus fault P-values for bus level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="Bus_level_BF"}](Images/Map_BF_3s.eps "fig:"){height="1.6in"}   [0.5]{} ![Bus fault P-values for bus level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="Bus_level_BF"}](Images/Map_BF_10s.eps "fig:"){height="1.6in"} [0.5]{} ![Bus fault P-values for bus level experiment for time spans: a) 3 seconds, b) 10 seconds, c) 30 seconds[]{data-label="Bus_level_BF"}](Images/Map_BF_30s.eps "fig:"){height="1.6in"} Less than 15% of the combinations of time span, output type, and similarity measure have significant p-values. It is noted that the combinations of metrics best suited to this experimental setup are different from those in the system level experiment. This experiment highlights a serious concern for other experiments using measurement-based techniques. Only select combinations of metrics in this experiment yielded significant differences, and this same result is likely present in other measurement-based experiments, whether they involve changing load models, changing load model parameters, or changes in other dynamic models. The direct application of this experiment is to use any of the disturbance type, output type, time span, and similarity measure combinations that showed significant p-values in a measurement-based machine learning technique to identify if a bus in the system needs its load model updated or replaced with a different load model. There needs to be a significant difference between response errors from buses with poorly fitting or inaccurate load models and those which are accurate for such a machine learning algorithm to give meaningful results, whether it be from simulation or PMU outputs. 
In this case, if the machine learning algorithm were using a combination of metrics that did not have a proven significant difference between response error from buses with inaccurate and accurate load models, the machine learning algorithm would be unable to reliably tell the difference between the groups, causing the results to be inaccurate. The results from this experiment confirm the same conclusion as the system level experiment: there needs to be verification testing showing that the chosen measurement-based metrics used to calculate error will capture true differences between incorrect models and correct models. It cannot be assumed that any combination of metrics used in measurement-based techniques will yield meaningful results. Conclusion ========== This paper investigates common metrics used in measurement-based dynamic load modeling techniques to generate response error. These metrics include similarity measures, output types, and simulation time spans. The correlation between response error and accuracy is evaluated by comparing the system accuracy to system response error in the system level experiment, and load model accuracy to bus response error in the bus level experiment. Both experiments demonstrated that few combinations of metrics deliver significant findings. It is noted that the combinations of metrics best suited to the bus level experiment are different from those in the system level experiment. This same result is likely to be found in other measurement-based experiments, whether they involve changing load models, changing load model parameters, or changes in other dynamic models. These experiments expose a significant concern for measurement-based technique validity. This study raises awareness of the importance of careful selection and validation of the similarity measures and response output metrics used, noting that naive or untested selection of metrics can deliver inaccurate and meaningless results. 
These results imply that optimization or machine learning algorithms that use measurement-based techniques without validating their metrics, to ensure correlation between error and accuracy, may not generate accurate or meaningful results. The methods used here to determine the effectiveness of these common metrics are specific to these model accuracy experiments. Future work can expand these methods to dynamic model parameter tuning experiments. [Phylicia Cicilio]{} (S’15) received the B.S. degree in chemical engineering in 2013 from the University of New Hampshire, Durham, NH, USA. She received the M.S. degree in electrical and computer engineering in 2017 from Oregon State University, Corvallis, OR, USA, where she is currently working toward the Ph.D. degree in electrical and computer engineering. She is currently a Graduate Fellow at Idaho National Laboratory, Idaho Falls, ID, USA. Her research interests include power system reliability, dynamic modeling, and rural electrification. [Eduardo Cotilla-Sanchez]{} (S’08–M’12–SM’19) received the M.S. and Ph.D. degrees in electrical engineering from the University of Vermont, Burlington, VT, USA, in 2009 and 2012, respectively. He is currently an Associate Professor in the School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR, USA. His primary field of research is electrical infrastructure resilience and protection, in particular, the study of cascading outages. Prof. Cotilla-Sanchez is the Vice-Chair of the IEEE Working Group on Cascading Failures and President of the Society of Hispanic Professional Engineers Oregon Chapter. [^1]: This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1314109-DGE. [^2]: The authors are with the School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR 97331 USA e-mail: (ciciliop@oregonstate.edu; ecs@oregonstate.edu).
--- abstract: 'We study the suppression of the small-scale power spectrum due to the decay of charged matter to dark matter prior to recombination. Prior to decay, the charged particles couple to the photon-baryon fluid and participate in its acoustic oscillations. However, after decaying to neutral dark matter the photon-baryon fluid is coupled only gravitationally to the newly-created dark matter. This generically leads to suppression of power on length scales that enter the horizon prior to decay. For decay times of $\sim$$3.5$ years this leads to suppression of power on subgalactic scales, bringing the observed number of Galactic substructures in line with observation. Decay times of a few years are possible if the dark matter is purely gravitationally interacting, such as the gravitino in supersymmetric models or a massive Kaluza-Klein graviton in models with universal extra dimensions.' author: - Kris Sigurdson - Marc Kamionkowski title: 'Charged-particle decay and suppression of small-scale power' --- The standard inflation-inspired cosmological model, with its nearly scale-invariant power spectrum of primordial perturbations, is in remarkable agreement with observation. It predicts correctly the detailed pattern of temperature anisotropies in the cosmic microwave background (CMB) [@CMB], and accurately describes the large scale clustering of matter in the Universe [@LSS]. However, on subgalactic scales there are possible problems with the standard cosmology that warrant further investigation. Namely, the model overpredicts the number of subgalactic halos by an order of magnitude compared to the 11 observed dwarf satellite galaxies of the Milky Way [@excessCDMpower]. Several possible resolutions have been proposed to this apparent discrepancy, ranging from astrophysical mechanisms that suppress dwarf-galaxy formation in subgalactic halos (see, for example, Ref. 
[@AstroSol]) to features in the inflaton potential that suppress small-scale power and thus reduce the predicted number of subgalactic halos [@Kamion2000]. In this *Letter*, we show that if dark matter is produced by the out-of-equilibrium decay of a long-lived charged particle, then power will be suppressed on scales smaller than the horizon at the decay epoch. Unlike some other recent proposals, which suppress small-scale power by modifying the dark-matter particle properties [@OtherMods], ours modifies the dark-matter production mechanism. In the model we discuss here, prior to decay, the charged particles couple electromagnetically to the primordial plasma and participate in its acoustic oscillations. After decay, the photon-baryon fluid is coupled only gravitationally to the neutral dark matter. This generically leads to suppression of power for scales that enter the horizon prior to decay. This suppression reduces the amount of halo substructure on galactic scales while preserving the successes of the standard hierarchical-clustering paradigm on larger scales. Apart from the changes to the model due to the decay process, we adopt the standard flat-geometry $\Lambda$CDM cosmological model with present-day dark-matter density (in units of the critical density) $\Omega_{d}=0.25$, baryon density $\Omega_{b}=0.05$, cosmological constant $\Omega_{\Lambda}=0.70$, Hubble parameter $H_{0}=72~{\rm km\,s^{-1} Mpc^{-1}}$, and spectral index $n=1$. In the standard $\Lambda$CDM model the initial curvature perturbations of the Universe, presumably produced by inflation or some inflation-like mechanism, are adiabatic (perturbations in the total density but not the relative density between species) and Gaussian with a nearly scale-invariant spectrum of amplitudes. These initial perturbations grow and react under the influence of gravity and other forces, with the exact nature of their behavior dependent upon the species in question. 
Because dark-matter particles are, by assumption, cold and collisionless, the fractional dark-matter-density perturbation $\delta_{d} \equiv \delta\rho_{d}/\rho_{d}$ can only grow under the influence of gravity. The baryonic species, being charged, are tightly coupled by Coulomb scattering to the electrons, which are themselves tightly coupled to the photons via Thomson scattering. The baryons and photons can thus be described at early times as a single baryon-photon fluid, with the photons providing most of the pressure and inertia and the baryons providing only inertia. Gravity will tend to compress this baryon-photon fluid, while the radiation pressure will support it against this compression. The result is acoustic oscillations, and the baryon density perturbation $\delta_{b} \equiv \delta\rho_{b}/\rho_{b}$ and photon density perturbation $\delta_{\gamma} \equiv \delta\rho_{\gamma}/\rho_{\gamma}$ will oscillate in time for length scales inside the horizon (on length scales larger than the horizon the pressure can have no effect). At early times these perturbations are very small and linear perturbation theory can be applied. This allows an arbitrary density field to be decomposed into a set of independently evolving Fourier modes, labeled by a wavenumber $k$. Fig. \[fig:delta\] shows the growth of dark-matter perturbations under the influence of gravity, and the oscillatory behavior of the baryon perturbation for the same wavenumber. We choose to work in the synchronous gauge where the time slicing is fixed to surfaces of constant proper time so that particle decays proceed everywhere at the same rate. 
In the synchronous gauge the standard linearized evolution equations for perturbations in Fourier space are (e.g., [@Ma95]) $$\begin{aligned} \dot{\delta}_{d} = - \theta_{d}-\frac{1}{2}\dot{h} \, , \quad \dot{\theta}_{d} = -\frac{\dot{a}}{a}\theta_{d} \, , \label{eqn:dark_delta}\end{aligned}$$ $$\begin{aligned} \dot{\delta}_{b} = - \theta_{b}-\frac{1}{2}\dot{h} \, ,\end{aligned}$$ $$\begin{aligned} \dot{\theta}_{b} = -\frac{\dot{a}}{a}\theta_{b} &+ c_{s}^2k^2\delta_{b} + \frac{4\rho_{\gamma}}{3\rho_{b}} a n_{e} \sigma_{T} (\theta_{\gamma}-\theta_{b}) \, , \label{eqn:baryon_theta}\end{aligned}$$ $$\begin{aligned} \dot{\delta}_{\gamma} = -\frac{4}{3}\theta_{\gamma}-\frac{2}{3}\dot{h} \, ,\end{aligned}$$ and $$\begin{aligned} \dot{\theta}_{\gamma} = k^2 \left( \frac{1}{4}\delta_{\gamma} - \Theta_{\gamma} \right) + a n_{e}\sigma_{T}(\theta_{b}-\theta_{\gamma}) \, , \label{eqn:gamma_theta}\end{aligned}$$ where $\theta_{b}$, $\theta_{d}$, and $\theta_{\gamma}$ are the divergence of the baryon, dark-matter, and photon fluid velocities respectively and an overdot represents a derivative with respect to the conformal time $\eta$. Here $h$ is the trace of the spatial metric perturbations $h_{ij}$. Its evolution is described by the linearized Einstein equations, which close this system of linearized equations. The last terms on the right-hand-sides of Eqs. (\[eqn:baryon\_theta\]) and (\[eqn:gamma\_theta\]) account for Thomson scattering between baryons and photons, and are responsible for keeping them tightly coupled in the early Universe. In these equations $\sigma_{T}$ is the Thomson cross section, $n_{e}$ is the electron number density, and $c_{s}$ is the intrinsic sound speed of the baryons. During tight coupling the second moment $\Theta_{\gamma}$ of the photon distribution and other higher moments can be neglected, and the radiation can reliably be given the fluid description described above. We now show how Eqs. 
(\[eqn:dark\_delta\])–(\[eqn:gamma\_theta\]) are modified by the decay of a long-lived metastable charged particle to dark matter in the early Universe. We assume that the decay is of the form $q^{\pm} \rightarrow \ell^{\pm}d$, so the decay of each charged particle $q^{\pm}$ produces a dark-matter particle $d$ and a charged lepton $\ell^{\pm}$. Denoting the decaying charged component by the subscript ‘$q$’, the background density $\rho_{q}$ evolves according to the equation, $$\begin{aligned} \dot{\rho}_{q} = -3\frac{\dot{a}}{a}\rho_{q} - \frac{a}{\tau}\rho_{q} \, , \label{eqn:background_q}\end{aligned}$$ where $\tau$ is the lifetime of $q^{\pm}$. The first term just accounts for the normal $a^{-3}$ scaling of non-relativistic matter in an expanding universe, while the second leads to the expected exponential decay of the comoving density. For the dark matter we have $$\begin{aligned} \dot{\rho}_{d} = -3\frac{\dot{a}}{a}\rho_{d} + \lambda \frac{a}{\tau}\rho_{q} \, , \label{eqn:background_d}\end{aligned}$$ where $\lambda=m_{d}/m_{q}$ is the ratio of the mass of the dark-matter particle to that of the charged particle. The energy density in photons evolves according to $$\begin{aligned} \dot{\rho}_{\gamma} = -4\frac{\dot{a}}{a}\rho_{\gamma} + (1-\lambda)\frac{a}{\tau}\rho_{q} \, . \label{eqn:background_gamma}\end{aligned}$$ This last equation follows from the assumption that the produced lepton initiates an electromagnetic cascade which rapidly (compared to the expansion timescale) thermalizes with the photon distribution. In practice the last term on the right-hand-side of Eq. (\[eqn:background\_gamma\]) is negligibly small because the decay takes place during the radiation dominated era when $\rho_{\gamma} \gg \rho_{q}$. Furthermore, limits on the magnitude of $\mu$-distortions to the blackbody spectrum of the CMB constrain $|1-\lambda|$ to be a small number, as we discuss below. Using covariant generalizations of Eqs. 
(\[eqn:background\_q\])–(\[eqn:background\_gamma\]) we can derive how Eqs. (\[eqn:dark\_delta\])–(\[eqn:gamma\_theta\]) are modified by the transfer of energy and momentum from the ‘$q$’ component to the dark matter during the decay process. Since the charged ‘$q$’ component and the baryons are tightly coupled via Coulomb scattering they share a common velocity $\theta_{\beta} = \theta_{b}=\theta_{q}$. This makes it useful to describe them in terms of a total charged-species component with energy density $\rho_{\beta}=\rho_{b}+\rho_{q}$, which we denote here by the subscript ‘$\beta$’. Because in the synchronous gauge the decay proceeds everywhere at the same rate this description is even more useful as $\delta_{\beta}=\delta_{b}=\delta_{q}$ is maintained at all times for adiabatic initial conditions. In terms of these ‘$\beta$’ variables, then, we have $$\begin{aligned} \dot{\delta}_{d} = - \theta_{d}-\frac{1}{2}\dot{h} + \lambda \frac{\rho_{q}}{\rho_{d}}\frac{a}{\tau}(\delta_{\beta}-\delta_{d}) \, , \label{eqn:deltadot_d_2}\end{aligned}$$ $$\begin{aligned} \dot{\theta}_{d} = -\frac{\dot{a}}{a}\theta_{d} + \lambda \frac{\rho_{q}}{\rho_{d}}\frac{a}{\tau}(\theta_{\beta}-\theta_{d}) \, , \label{eqn:thetadot_d_2}\end{aligned}$$ $$\begin{aligned} \dot{\delta}_{\beta} = - \theta_{\beta}-\frac{1}{2}\dot{h} \, , \label{eqn:delta_beta_2}\end{aligned}$$ $$\begin{aligned} \dot{\theta}_{\beta} = -\frac{\dot{a}}{a}\theta_{\beta} &+ c_{s}^2k^2\delta_{\beta} + \frac{4\rho_{\gamma}}{3\rho_{\beta}} a n_{e} \sigma_{T} (\theta_{\gamma}-\theta_{\beta}) \, , \label{eqn:theta_beta_2}\end{aligned}$$ $$\begin{aligned} \dot{\delta}_{\gamma} = -\frac{4}{3}\theta_{\gamma}-\frac{2}{3}\dot{h} + (1-\lambda) \frac{\rho_{q}}{\rho_{\gamma}}\frac{a}{\tau}(\delta_{\beta}-\delta_{\gamma})\, , \label{eqn:deltadot_gamma_2}\end{aligned}$$ and $$\begin{aligned} \dot{\theta}_{\gamma} = k^2 \left( \frac{1}{4}\delta_{\gamma} - \Theta_{\gamma} \right) &+ a n_{e}\sigma_{T}(\theta_{\beta}-\theta_{\gamma}) 
\nonumber \\ &+ (1-\lambda) \frac{\rho_{q}}{\rho_{\gamma}}\frac{a}{\tau}\left(\frac{3}{4}\theta_{\beta}-\theta_{\gamma}\right) \, . \label{eqn:thetadot_gamma_2}\end{aligned}$$ We now describe how small-scale modes that enter the horizon prior to decay are suppressed relative to those modes that enter the horizon after decay. Due to the Thomson collision terms the ‘$\beta$’ component and the photons will be tightly coupled as a ‘$\beta$’-photon fluid at early times and this fluid will support acoustic oscillations. Furthermore, Eqs. (\[eqn:deltadot\_d\_2\]) and (\[eqn:thetadot\_d\_2\]) show that the dark-matter perturbations are strongly sourced by the perturbations of the ‘$\beta$’ component prior to decay, when the ratio $\rho_{q}/\rho_{d}$ is large. Dark-matter modes that enter the horizon prior to decay will thus track the oscillations of the ‘$\beta$’-photon fluid rather than simply growing under the influence of gravity. After decay, when the ratio $\rho_{q}/\rho_{d}$ is small, the source term shuts off and dark-matter modes that enter the horizon undergo the standard growing evolution. In Fig. \[fig:delta\_tau\] we follow the evolution of the dark-matter perturbations through the epoch of decay. We modified [CMBFAST]{} [@CMBfast] to carry out these calculations. In order to suppress power on subgalactic scales the decay lifetime must be roughly the age of the Universe when the mass enclosed in the Hubble volume is equal to a galaxy mass; this occurs when $\tau \sim$ years. In Fig. \[fig:pow\] we plot the linear power spectrum of matter density fluctuations at the present day for a charged-particle lifetime $\tau = 3.5~{\rm yr}$ assuming a scale-invariant primordial power spectrum. We see that power is suppressed on scales smaller than $k^{-1} \sim 0.3~ {\rm Mpc}$ relative to the standard $\Lambda$CDM power spectrum. 
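The behavior of the background decay terms in Eqs. (\[eqn:background\_q\])–(\[eqn:background\_d\]) can be checked with a toy integration. This is only an illustrative Euler scheme in arbitrary units with a radiation-era scale factor $a \propto \eta$, not the modified CMBFAST computation used for the figures:

```python
import numpy as np

# Integrate the comoving densities Q = a^3 rho_q and D = a^3 rho_d:
#   dQ/deta = -(a/tau) Q,   dD/deta = +lambda (a/tau) Q,
# with units chosen so that a = eta in the radiation era.
tau, lam = 3.5, 1.0            # lifetime (illustrative units); lambda = m_d/m_q ~ 1
eta = np.linspace(1e-3, 10.0, 200_000)
deta = eta[1] - eta[0]
Q, D = 1.0, 0.0
for a in eta[:-1]:
    decayed = (a / tau) * Q * deta
    Q -= decayed
    D += lam * decayed

# Proper time t = integral of a deta = eta^2 / 2, so Q should fall as exp(-t/tau):
# the comoving charged density decays exponentially in *proper* time, and for
# lambda = 1 the comoving dark-matter density picks up exactly what Q loses.
t_final = eta[-1] ** 2 / 2
print(Q, np.exp(-t_final / tau), Q + D)
```

The numerical $Q$ tracks $e^{-t/\tau}$ closely, confirming the "exponential decay of the comoving density" noted after Eq. (\[eqn:background\_q\]).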
Suppression of power on these length scales reduces the expected number of subgalactic halos, bringing the predictions in line with observation [@Kamion2000] without violating constraints from the Lyman-alpha forest [@White2000]. Of course, the model reproduces the successes of the standard $\Lambda$CDM model on larger scales and in the CMB. The requirements of the charged-particle species are that it have a comoving mass density equal to the dark-matter density today and have a lifetime of $\tau \sim$ 3.5 yr. In order to satisfy the constraint on the CMB chemical potential [@Fixsen1996], the fractional mass difference between the charged and neutral particles must be $\Delta m/m < 3.6 \times 10^{-3}$, and in order for the decay to be allowed kinematically the mass difference must be greater than the electron mass. One possibility is the SuperWIMP scenario of Ref. [@Feng2003] in which a charged particle may decay to an exclusively gravitationally interacting particle. For example, in supersymmetric models, the decay of a selectron to an electron and gravitino $\widetilde{e} \rightarrow e\,\widetilde{G}$ with $m_{\widetilde{e}} \approx m_{\widetilde{G}} > 122~{\rm TeV}$ would satisfy these constraints, as would the decay of a KK-electron to an electron and KK-graviton $e^{1} \rightarrow e\,G^{1}$ with $m_{e^{1}} \approx m_{G^{1}} > 72~{\rm TeV}$ in the case of the single universal extra dimension Kaluza-Klein (KK) model discussed in Refs. [@Appel2001; @Feng2003]. Such masses are larger than the unitarity bound for thermal production [@Griest1990], but might be accommodated through nonthermal mechanisms or if the next-to-lightest partner is a squark which might then interact more strongly and thus evade this bound. There may also be viable scenarios involving nearly-degenerate charged and neutral higgsinos. 
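The two constraints just stated can be combined into a (weak) lower bound on the particle mass itself; this arithmetic is ours, not quoted from the text:

```latex
% Kinematics requires \Delta m > m_e, while the CMB chemical-potential
% limit requires \Delta m / m < 3.6 \times 10^{-3}.  Together these give
\begin{aligned}
  m_e \;<\; \Delta m \;<\; 3.6\times 10^{-3}\, m
  \quad\Longrightarrow\quad
  m \;>\; \frac{m_e}{3.6\times 10^{-3}} \;\approx\; 0.14~\mathrm{GeV},
\end{aligned}
% far weaker than the ~10^2 TeV scales quoted above, so the chemical-potential
% limit constrains the mass splitting rather than the overall mass scale.
```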
It should be noted that the recent WMAP evidence for early star formation [@Kogut2003] argues against the suppression of small-scale power, but these results are not yet conclusive. If it does turn out that traditional astrophysical mechanisms can explain the dearth of dwarf galaxies, then our arguments can be turned around to provide constraints to an otherwise inaccessible region of the parameter space for decaying dark matter [@SigKam]. Finally, if the mechanism we propose here is realized in nature, then the dearth of small-scale power, along with the detection of a non-zero CMB chemical potential, would be a powerful probe of the particle spectrum of the new physics responsible for dark matter. KS acknowledges the support of a Canadian NSERC Postgraduate Scholarship. This work was supported in part by NASA NAG5-9821 and DoE DE-FG03-92-ER40701. [1]{} P. de Bernardis et al., Nature (London) [**404**]{}, 955 (2000); S. Hanany et al., Astrophys. J. Lett. [**545**]{}, L5 (2000); N. W. Halverson et al., Astrophys. J. [**568**]{}, 38 (2002); B. S. Mason et al., Astrophys. J. [**591**]{}, 540 (2003); A. Benoit et al., Astron. Astrophys. [**399**]{}, L2 (2003); J. H. Goldstein et al., astro-ph/0212517; D. N. Spergel et al., Astrophys. J. Suppl. [**148**]{}, 175 (2003). J. A. Peacock et al., Nature (London) [**410**]{}, 169 (2001); W. J. Percival et al., [Mon. Not. Roy. Astron. Soc.]{} [**327**]{}, 1297 (2001); M. Tegmark et al., astro-ph/0310725. G. Kauffman, S. D. M. White, and B. Guiderdoni, [Mon. Not. Roy. Astron. Soc.]{} [**264**]{}, 201 (1993); A. A. Klypin et al., [Astrophys. J.]{} [**522**]{}, 82 (1999); B. Moore et al., [Astrophys. J. Lett.]{} [**524**]{}, L19 (1999). A. J. Benson et al., [Mon. Not. Roy. Astron. Soc.]{} [**333**]{}, 177 (2002); R. S. Somerville, [Astrophys. J.]{} [**572**]{}, L23 (2002); L. Verde, S. P. Oh, and R. Jimenez, [Mon. Not. Roy. Astron. Soc.]{} [**336**]{} 541 (2002). M. Kamionkowski and A. R. Liddle, [Phys. Rev. 
Lett.]{} [**84**]{}, 4525 (2000). C. Boehm, P. Fayet, and R. Schaeffer, Phys. Lett. B [**518**]{}, 8 (2001); X. Chen, M. Kamionkowski, and X. Zhang, [Phys. Rev. D]{} [**64**]{}, 021302 (2001); X. Chen, S. Hannestad, and R. J. Scherrer, [Phys. Rev. D]{} [**65**]{}, 123515 (2002); C. Boehm et al., astro-ph/0309652; D. N. Spergel and P. J. Steinhardt, [Phys. Rev. Lett.]{} [**84**]{}, 3760 (2000). C.-P. Ma and E. Bertschinger, [Astrophys. J.]{} [**455**]{}, 7 (1995). U. Seljak and M. Zaldarriaga, [Astrophys. J.]{} [**469**]{}, 437 (1996). M. White and R. A. C. Croft, [Astrophys. J.]{} [**539**]{}, 497 (2000). D. J. Fixsen et al., [Astrophys. J.]{} [**473**]{}, 576 (1996). J. L. Feng, A. Rajaraman, and F. Takayama, [Phys. Rev. Lett.]{} [**91**]{}, 011302 (2003); [Phys. Rev. D]{} [**68**]{}, 063504 (2003). T. Appelquist, H.-C. Cheng, and B. A. Dobrescu, [Phys. Rev. D]{} [**64**]{}, 035002 (2001). K. Griest and M. Kamionkowski, [Phys. Rev. Lett.]{} [**64**]{}, 615 (1990). K. Sigurdson and M. Kamionkowski, in preparation. A. Kogut et al., Astrophys. J. Suppl. [**148**]{}, 161 (2003).
--- abstract: 'The purpose of this note is to use the scaling principle to study the boundary behaviour of some conformal invariants on planar domains. The focus is on the Aumann–Carathéodory rigidity constant, the higher order curvatures of the Carathéodory metric and two conformal metrics that have been recently defined.' address: - 'ADS: Department of Mathematics, Indian Institute of Science, Bangalore 560012, India' - 'KV: Department of Mathematics, Indian Institute of Science, Bangalore 560 012, India' author: - Amar Deep Sarkar and Kaushal Verma title: Boundary behaviour of some conformal invariants on planar domains --- Introduction ============ The scaling principle in several complex variables provides a unified paradigm to address a broad array of questions ranging from the boundary behaviour of biholomorphic invariants to the classification of domains with non-compact automorphism group. In brief, the idea is to blow up a small neighbourhood of a smooth boundary point, say $p$ of a given domain $D \subset \mathbb C^n$ by a family of non-isotropic dilations to obtain a limit domain which is usually easier to deal with. The choice of the dilations is dictated by the Levi geometry of the boundary near $p$ and the interesting point here is that the limit domain is not necessarily unique. For planar domains, this method is particularly simple and the limit domain always turns out to be a half space if $p$ is a smooth boundary point. The purpose of this note is to use the scaling principle to understand the boundary behaviour of some conformal invariants associated to a planar domain. We will focus on the Aumann–Carathéodory rigidity constant [@MainAumannCaratheodory], the higher order curvatures of the Carathéodory metric [@BurbeaPaper], a conformal metric arising from holomorphic quadratic differentials [@SugawaMetric] and finally, the Hurwitz metric [@TheHurwitzMetric]. 
These analytic objects have nothing to do with one another, except of course that they are all conformal invariants, and it is precisely this disparity that makes them particularly useful to emphasize the broad utility of the scaling principle as a technique even on planar domains. While each of these invariants requires a different set of conditions on $D$ to be defined in general, we will assume that $D \subset \mathbb C$ is bounded – all the invariants are well defined in this case and so is $\lambda_D(z) \vert dz \vert$, the hyperbolic metric on $D$. Assuming this will not entail any great loss of generality but will instead assist in conveying the spirit of what is intended with a certain degree of uniformity. Any additional hypotheses on $D$ that are required will be explicitly mentioned. Let $\psi$ be a $C^2$-smooth local defining function for $\partial D$ near $p \in \partial D$. In what follows, $\mathbb D \subset \mathbb C$ will denote the unit disc. The question is to determine the asymptotic behaviour of these invariants near $p$. Each of the subsequent paragraphs contains a brief description of one of these invariants followed by the corresponding results; the proofs appear in subsequent sections after a description of the scaling principle for planar domains. *Higher order curvatures of the Carathéodory metric*: ----------------------------------------------------- Suita [@SuitaI] showed that the density $c_D(z)$ of the Carathéodory metric is real analytic and that its curvature $$\kappa(z) = - c_D^{-2}(z) \Delta \log c_D(z)$$ is at most $-4$ for all $z \in D$. In higher dimensions, this metric is not smooth in general. For $j, k \ge 0$, let $\partial^{j \overline k} c_D$ denote the partial derivative $\partial^{j+k} c_D/\partial z^j \partial \overline z^k$. Write $((a_{jk}))_{j, k \ge 0}^n$ for the $(n+1) \times (n+1)$ matrix whose $(j, k)$-th entry is $a_{jk}$.
For $n \ge 1$, Burbea [@BurbeaPaper] defined the $n$-th order curvature of the Carathéodory metric $c_D(z) \vert dz \vert$ by $$\kappa_n(z: D) = -4c_D(z)^{-(n+1)^2} J^D_n(z)$$ where $J^D_n(z) = \det ((\partial^{j \overline k} c_D))_{j, k \ge 0}^n$. Note that $$\kappa(z) = \kappa_1(z : D)$$ which can be seen by expanding $J^D_1(z)$. Furthermore, if $f : D \rightarrow D'$ is a conformal equivalence between planar domains $D, D'$, then the equality $$c_D(z) = c_{D'}(f(z)) \vert f'(z) \vert$$ upon repeated differentiation shows that the mixed partials of $c_D(z)$ can be expressed as a combination of the mixed partials of $c_{D'}(f(z))$ where the coefficients are rational functions of the derivatives of $f$ – the denominators of these rational functions only involve $f'(z)$ which is non-vanishing in $D$. By using elementary row and column operations, it follows that $$J^D_n(z) = J^{D'}_n(f(z)) \vert f'(z) \vert^{(n+1)^2}$$ and this implies that $\kappa_n(z: D)$ is a conformal invariant for every $n \ge 1$. If $D$ is conformally equivalent to $\mathbb D$, a calculation shows that $$\kappa_n(z: D) = -4 \left( \prod_{k=1}^n k ! \right)^2$$ for each $z \in D$. For a smoothly bounded (and hence finitely connected) $D$, Burbea [@BurbeaPaper] showed, among other things, that $$\kappa_n(z: D) \le -4 \left( \prod_{k=1}^n k ! \right)^2$$ for each $z \in D$. This can be strengthened as follows: \[T:HigherCurvature\] Let $D \subset \mathbb C$ be a smoothly bounded domain. For every $p \in \partial D$ $$\kappa_n(z: D) \rightarrow -4 \left( \prod_{k=1}^n k ! \right)^2$$ as $z \rightarrow p$. *The Aumann–Carathéodory rigidity constant*: -------------------------------------------- Recall that the Carathéodory metric $c_D(z) \vert dz \vert$ is defined by $$c_D(z) = \sup \left\{ \vert f'(z) \vert : f : D \rightarrow \mathbb D \; \text{holomorphic and} \; f(z) = 0 \right\}.$$ Let $D$ be non-simply connected and fix $a \in D$.
Aumann–Carathéodory [@MainAumannCaratheodory] showed that there is a constant $\Omega(D, a)$, $0\le \Omega(D, a) < 1$, such that if $f$ is any holomorphic self-mapping of $D$ fixing $a$ and $f$ is [*not*]{} an automorphism of $D$, then $\vert f'(a) \vert \le \Omega(D, a)$. For an annulus $\mathcal A$, this constant was explicitly computed by Minda [@AumannCaratheodoryRigidityConstant] and a key ingredient was to realize that $$\Omega(\mathcal A, a) = c_{\mathcal A}(a)/\lambda_{\mathcal A}(a).$$ The explicit formula for $\Omega(\mathcal A, a)$ also showed that $\Omega(\mathcal A, a) \rightarrow 1$ as $a \rightarrow \partial \mathcal A$. For non-simply connected domains $D$ with higher connectivity, [@AumannCaratheodoryRigidityConstant] also shows that $$c_{D}(a)/\lambda_{D}(a) \le \Omega(D, a) < 1.$$ Continuing this line of thought further, Minda in [@HyperbolicMetricCoveringAumannCaratheodoryConstant] considers a pair of bounded domains $D, D'$ with base points $a \in D, b \in D'$ and the associated ratio $$\Omega(a, b) = \sup \{ \left(f^{\ast}(\lambda_{D'})/\lambda_D\right)(a): f \in \mathcal N(D, D'), f(a) = b \}$$ where $\mathcal N(D, D')$ is the class of holomorphic maps $f : D \rightarrow D'$ that are [*not*]{} coverings. Note that $$\left(f^{\ast}(\lambda_{D'})/\lambda_D\right)(a) = \left( \lambda_{D'}(b) \;\vert f'(a) \vert \right) / \lambda_D(a).$$ Among other things, Theorems 6 and 7 of [@HyperbolicMetricCoveringAumannCaratheodoryConstant] respectively show that $$c_D(a)/\lambda_D(a) \le \Omega(a, b) < 1$$ and $$\limsup_{a \rightarrow \partial D} \Omega(a, b) = 1.$$ Note that the first result shows that the lower bound for $\Omega(a, b)$ is independent of $b$ while the second one, which requires $\partial D$ to satisfy an additional geometric condition, is a statement about the boundary behaviour of $\Omega(a, b)$. 
Here is a result that supplements these statements and emphasizes their local nature: \[T:AumannCaratheodoryConstant\] Let $D, D' \subset \mathbb C$ be bounded domains and $p \in \partial D$ a $C^2$-smooth boundary point. Then $\Omega(D, z) \rightarrow 1$ as $z \rightarrow p$. Furthermore, for every fixed $w \in D'$, $\Omega(z, w) \rightarrow 1$ as $z \rightarrow p$. *Holomorphic quadratic differentials and a conformal metric*: ------------------------------------------------------------- We begin by recalling a construction due to Sugawa [@SugawaMetric]. Let $R$ be a Riemann surface and $\phi$ a holomorphic $(m, n)$ form on it. In local coordinates $(U_{\alpha}, z_{\alpha})$, $\phi = \phi_{\alpha}(z_{{\alpha}})\, dz^m_{{\alpha}}\, d\overline z^n_{{\alpha}}$ where $\phi_{\alpha} : U_{\alpha} \rightarrow \mathbb C$ is a family of holomorphic functions satisfying $$\phi_{\alpha}(z_{\alpha}) = \phi_{\beta} (z_{\beta}) \left(\frac{d z_{\beta}}{d z_{\alpha}}\right)^m \left(\frac{d \overline z_{\beta}}{d \overline z_{\alpha}}\right)^n$$ on the intersection $U_{\alpha} \cap U_{\beta}$. For holomorphic $(2,0)$ forms, this reduces to $$\phi_{\alpha}(z_{\alpha}) = \phi_{\beta} (z_{\beta}) \left(\frac{d z_{\beta}}{d z_{\alpha}}\right)^2$$ and this in turn implies that $$\Vert \phi \Vert_1 = \int_R \vert \phi \vert$$ is well defined. Consider the space $$A(R) = \left\lbrace \phi = \phi(z) \;dz^2 \; \text{a holomorphic}\; (2,0)\; \text{form on}\; R \; \text{with} \; \Vert \phi \Vert_1 < \infty \right \rbrace$$ of integrable holomorphic $(2,0)$ forms on $R$.
Fix $z \in R$ and for each local coordinate $(U_{{\alpha}}, z_{{\alpha}})$ containing it, let $$q_{R, {\alpha}}(z) = \sup \left\lbrace \vert \phi_{{\alpha}}(z) \vert^{1/2} : \phi \in A(R) \; \text{with} \; \Vert \phi \Vert_1 \le \pi \right\rbrace.$$ Theorem 2.1 of [@SugawaMetric] shows that if $R$ is non-exceptional, then for each $z_0 \in U_{{\alpha}}$ there is a unique extremal differential $\phi \in A(R)$ ($\phi = \phi_{{\alpha}}(z_{{\alpha}}) dz^2_{{\alpha}}$ in $U_{{\alpha}}$) with $\Vert \phi \Vert_1 = \pi$ such that $$q_{R, {\alpha}}(z_0) = \vert \phi_{{\alpha}}(z_0) \vert^{1/2}.$$ If $(U_{\beta}, z_{\beta})$ is another coordinate system around $z$, then the corresponding extremal differential $\phi_{\beta}$ is related to $\phi_{\alpha}$ as $$\overline{w'(z_0)} \;\phi_{\beta} = w'(z_0) \; \phi_{\alpha}$$ where $w = z_{\beta} \circ z_{{\alpha}}^{-1}$ is the transition function. Hence $\vert \phi_{{\alpha}} \vert$ is intrinsically defined and this leads to the conformal metric $q_R(z) \vert dz \vert$ with $q_R(z) = q_{R, {\alpha}}(z)$ for some (and hence every) chart $U_{{\alpha}}$ containing $z$. It is also shown in [@SugawaMetric] that the density $q_R(z)$ is continuous, $\log q_R$ is subharmonic (or identically $-\infty$ on $R$) and $q_{\mathbb D}(z) = 1/(1 - \vert z \vert^2)$ – therefore, this reduces to the hyperbolic metric on the unit disc. In addition, [@SugawaMetric] provides an estimate for this metric on an annulus. We will focus on the case of bounded domains. \[T:SugawaMetric\] Let $D \subset {\mathbb}C$ be a bounded domain and $p \in \partial D$ a $C^2$-smooth boundary point. Then $$q_D(z) \approx 1/{\rm dist} (z, \partial D)$$ for $z$ close to $p$. Here and in what follows, we use the standard convention that $A \approx B$ means that there is a constant $C > 1$ such that $A/B, B/A$ are both bounded above by $C$. In particular, this statement shows that the metric $q_D(z) \vert dz \vert$ is comparable to the quasi-hyperbolic metric near $C^2$-smooth points.
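For orientation (an illustration, not part of the proof): on the unit disc, where $q_{\mathbb D}(z) = 1/(1 - \vert z \vert^2)$ and ${\rm dist}(z, \partial \mathbb D) = 1 - \vert z \vert$, the product of the two equals $1/(1 + \vert z \vert)$ and is trapped between $1/2$ and $1$ – precisely the kind of two-sided bound $A \approx B$ that the theorem asserts near $C^2$-smooth boundary points.

```python
# Illustration: on the unit disc, q(z) = 1/(1 - |z|^2) and dist(z, boundary) = 1 - |z|,
# so their product equals 1/(1 + |z|), which lies in [1/2, 1] -- a concrete
# instance of the comparability q_D(z) ~ 1/dist(z, boundary).
def density(r):          # q along a radius, 0 <= r < 1
    return 1.0 / (1.0 - r * r)

def boundary_dist(r):    # dist(z, boundary of the disc) for |z| = r
    return 1.0 - r

for r in [0.0, 0.5, 0.9, 0.999]:
    ratio = density(r) * boundary_dist(r)   # equals 1/(1 + r)
    assert 0.5 <= ratio <= 1.0
    print(r, ratio)
```

The comparability constant here is $C = 2$, uniformly up to the boundary.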
Thus, if $D$ is globally $C^2$-smooth, then $q_D(z) \vert dz \vert$ is comparable to the quasi-hyperbolic metric everywhere on $D$. *The Hurwitz metric*: --------------------- The other conformal metric that we will discuss here is the Hurwitz metric that has been recently defined by Minda [@TheHurwitzMetric]. We begin by recalling its construction which is reminiscent of that for the Kobayashi metric but differs from it in the choice of holomorphic maps which are considered: for a domain $D \subset \mathbb C$ and $a \in D$, let $\mathcal{O}(a, D)$ be the collection of all holomorphic maps $f : \mathbb{D} \rightarrow D$ such that $f(0) = a$ and $f'(0) > 0$. Let $\mathcal{O}^{\ast}(a, D) \subset \mathcal{O}(a, D)$ be the subset of all those $f \in \mathcal{O}(a, D)$ such that $f(z) \not= a$ for all $z$ in the punctured disc $\mathbb{D}^{\ast}$. Set $$r_D(a) = \sup \left\lbrace f'(0) : f \in \mathcal{O}^{\ast}(a, D) \right\rbrace.$$ The Hurwitz metric on $D$ is $\eta_D(z) \vert dz \vert$ where $$\eta_D(a) = 1/r_D(a).$$ Of the several basic properties of this conformal metric that were explored in [@TheHurwitzMetric], we recall the following two: first, for a given $a \in D$, let $\gamma \subset D^{\ast} = D\setminus \{a\}$ be a small positively oriented loop that goes around $a$ once. This loop generates an infinite cyclic subgroup of $\pi_1(D^{\ast})$ to which there is an associated holomorphic covering $G : \mathbb{D}^{\ast} \rightarrow D^{\ast}$. This map $G$ extends holomorphically to $G : \mathbb{D} \rightarrow D$ with $G(0) = a$ and $G'(0) \not= 0$. This covering depends only on the free homotopy class of $\gamma$ and is unique up to precomposition with a rotation about the origin. Hence, it is possible to arrange $G'(0) > 0$. Minda calls this the Hurwitz covering associated with $a \in D$. Using this it follows that every $f \in \mathcal{O}^{\ast}(a, D)$ lifts to $\tilde f : \mathbb{D}^{\ast} \rightarrow \mathbb{D}^{\ast}$.
This map extends to a self map of $\mathbb{D}$ and the Schwarz lemma shows that $\vert f'(0) \vert \le G'(0)$. The conclusion is that the extremals for this metric can be described in terms of the Hurwitz coverings. \[T:TheHurwitzMetric\] Let $D \subset \mathbb{C}$ be bounded. Then $\eta_D(z)$ is continuous. Furthermore, if $p \in \partial D$ is a $C^2$-smooth boundary point, then $$\eta_D(z) \approx 1/{\rm dist} (z, \partial D)$$ for $z$ close to $p$. A consequence of Theorems 1.3 and 1.4 is that both $q_D(z) \vert dz \vert$ and $\eta_D(z) \vert dz \vert$ are equivalent metrics near smooth boundary points. Finally, in Section 7, we provide some estimates for the generalized curvatures of $ q_D(z) \vert dz \vert $ and $ \eta_D(z)\vert dz \vert $. Scaling of planar domains ========================= The scaling principle for planar domains has been described in detail in [@ScalingInHigherDimensionKrantzKimGreen]. A simplified version which suffices for the applications presented later can be described as follows: Let $ D $ be a domain in $ {\mathbb{C}}$ and $ p \in \partial D $ a $ C^2 $-smooth boundary point. This means that there is a neighborhood $ U $ of $ p $ and a $ C^2 $-smooth real function $ \psi $ such that $$U \cap D = \{\psi < 0\}, \quad U \cap \partial D = \{\psi = 0\}$$ and $$d \psi \neq 0 \quad \text{on} \quad U \cap \partial D.$$ Let $ p_j $ be a sequence of points in $ D $ converging to $ p $. Suppose $ \tau(z)\vert dz \vert $ is a conformal metric on $ D $ whose behaviour near $ p $ is to be studied. The affine maps $$\label{Eq:ScalingMap} T_j(z) = \frac{z - p_j}{-\psi(p_j)}$$ satisfy $ T_j(p_j) = 0 $ for all $ j $ and since $ \psi(p_j) \to 0$, it follows that the $ T_j $’s expand a fixed neighborhood of $ p $. To make this precise write $$\psi(z) = \psi(p) + 2Re\left( \frac{\partial \psi}{\partial z}(p)(z - p)\right) + o(\vert z - p\vert)$$ in a neighborhood of $ p $. Let $ K $ be a compact set in $ {\mathbb{C}}$.
Since $ \psi(p_j) \to 0 $, it follows that $ T_j(U) $ is an increasing family of open sets that exhaust $ {\mathbb{C}}$ and hence $ K \subset T_j(U) $ for all large $ j $. By taking the Taylor expansion at $ z = p_j $, we see that the functions $$\psi \circ T^{-1}_{j}(z) = \psi \left( p_j + z\left( -\psi(p_j)\right) \right) = \psi(p_j) + 2Re\left( \frac{\partial \psi}{\partial z}(p_j)z \right) (-\psi(p_j)) + \psi(p_j)^2 o(1)$$ are well defined on $ K $ and the domains $ D_j' = T_j(U \cap D) $ are defined by $$\psi_j(z) = \frac{1}{-\psi(p_j)}\psi \circ T_j^{-1}(z) = -1 + 2Re\left( \frac{\partial \psi}{\partial z}(p_j)z \right) + (-\psi(p_j))o(1).$$ It can be seen that $ \psi_j(z) $ converges to $$\psi_{\infty}(z) = -1 + 2Re\left( \frac{\partial \psi}{\partial z}(p)z \right)$$ uniformly on $ K $ as $ j \to \infty $. At this stage, let us recall the Hausdorff metric on subsets of a metric space. Given a set $ S \subseteq {\mathbb{C}}^n $, let $ S_{\epsilon} $ denote the $ \epsilon $-neighborhood of $ S $ with respect to the standard Euclidean distance on $ {\mathbb{C}}^n $. The Hausdorff distance between compact sets $ X, Y \subset {\mathbb{C}}^n $ is given by $$d_H(X,Y) = \inf\{\epsilon > 0 : X \subset Y_{\epsilon}\,\, \text{and} \,\, Y \subset X_{\epsilon}\}.$$ It is known that this defines a complete metric on the space of compact subsets of $ {\mathbb{C}}^n $. To deal with non-compact but closed sets there is a topology arising from a family of local Hausdorff semi-norms. It is defined as follows: fix an $ R > 0 $ and for a closed set $ A \subset {\mathbb{C}}^n $, let $ A^R = A \cap \overline B(0, R)$ where $ B(0, R) $ is the ball centred at the origin with radius $ R $. Then, for $ A, B \subset {\mathbb{C}}^n$, set $$d_H^{(R)}(A, B) = d_H\left(A^R, B^R\right).$$ We will say that a sequence of closed sets $ A_n $ converges to a closed set $ A $ if there exists $ R_0 $ such that $$\lim_{n \to \infty}d_H\left(A_n^R, A^R\right) = 0$$ for all $ R \geq R_0 $.
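The definition of $d_H$ can be made concrete on finite (hence compact) point sets. The sketch below (an illustration only; the sets are chosen for the example) uses the equivalent sup-inf formulation $d_H(X,Y) = \max\big(\sup_{x \in X} {\rm dist}(x, Y), \sup_{y \in Y} {\rm dist}(y, X)\big)$ of the $\epsilon$-neighborhood definition above:

```python
# Hausdorff distance between finite (hence compact) subsets of C, via the
# sup-inf formulation equivalent to the epsilon-neighborhood definition.
def hausdorff(X, Y):
    def one_sided(A, B):   # sup over a in A of dist(a, B)
        return max(min(abs(a - b) for b in B) for a in A)
    return max(one_sided(X, Y), one_sided(Y, X))

# A three-point set and a vertical translate of it: d_H equals the shift.
X = [0 + 0j, 0.5 + 0j, 1 + 0j]
Y = [x + 0.25j for x in X]
print(hausdorff(X, Y))   # 0.25

# The two one-sided quantities need not agree, which is why both appear:
print(hausdorff([0j], [0j, 1 + 0j]))   # 1.0
```

For the local distances $d_H^{(R)}$ one would first intersect each set with the closed ball $\overline B(0, R)$ and then apply the same routine.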
Since $ \psi_j \to \psi_{\infty} $ uniformly on every compact subset in $ {\mathbb{C}}$, it follows that the closed sets $ \overline{D_j'} = T_j(\overline{U \cap D})$ converge to the closure of the half-space $${\mathcal{H}}= \{z : -1 + 2Re\left( \frac{\partial \psi}{\partial z}(p)z \right) < 0 \} = \{z: Re(\overline{\omega}z-1)<0\}$$ where $\omega = (\partial\psi/\partial x)(p) + i(\partial\psi/\partial y)(p)$, in the Hausdorff sense as described above. As a consequence, every compact $ K \subset {\mathcal{H}}$ is eventually contained in $ D_j' $. Similarly, every compact $ K \subset {\mathbb{C}}\setminus \overline{{\mathcal{H}}} $ eventually has no intersection with $ \overline{D_j'} $. It can be seen that the same property holds for the domains $ D_j = T_j(D) $, i.e., they converge to the half-space $ {\mathcal{H}}$ in the Hausdorff sense. Now coming back to the metric $ \tau(z)\vert dz \vert $, the pull-backs $$\tau_j(z) \vert dz \vert = \tau_D(T_j^{-1}(z)) \vert (T_j^{-1})^{\prime}(z)\vert \vert dz \vert$$ are well-defined conformal metrics on $ D_j $ which satisfy $$\tau_j(0) \vert dz \vert = \tau_D(p_j)(-\psi(p_j))\vert dz \vert.$$ Therefore, to study $ \tau(p_j) $, it is enough to study $ \tau_j(0) $. This is exactly what will be done in the sequel. Proof of Theorem \[T:HigherCurvature\] ====================================== It is known (see [@InvariantMetricJarnicki Section 19.3] for example) that if $ D \subset {\mathbb{C}}$ is bounded and $ p \in \partial D $ is a $ C^2 $-smooth boundary point, then $$\lim_{z \to p}\frac{c_{U \cap D}(z)}{c_D(z)} = 1$$ where $ U $ is a neighborhood of $ p $ such that $ U \cap D $ is simply connected. Here is a version of this statement that we will need: Let $ D \subset {\mathbb{C}}$ be a bounded domain and $ p \in \partial D $ a $ C^2 $-smooth boundary point. Let $ U $ be a neighborhood of $ p $ such that $ U \cap D $ is simply connected. Let $ \psi $ be a defining function of $\partial D $ near the point $ p $.
Then $$\lim_{z \to p}c_{D}(z)(-\psi(z)) = c_{{\mathcal{H}}}(0).$$ Let $ \{p_j\} $ be a sequence in $ D $ which converges to $ p $. Consider the affine map $$T_j(z) = \frac{z - p_j}{-\psi(p_j)}$$ whose inverse is given by $$T_j^{-1}(z) = -\psi(p_j)z + p_j.$$ Let $D_j = T_j(D)$ and $ D'_j = T_j({U \cap D}) $. Note that $ \{D_j\} $ and $ \{D'_j\} $ converge to the half-space $ {\mathcal{H}}$ both in the Hausdorff and Carathéodory kernel sense. Let $z \in {\mathcal{H}}$. Then $z \in D'_j$ for $j $ large. Since $D'_j$ is simply connected, there is a biholomorphic map $f_j : \mathbb{D} {\longrightarrow}D'_j$ with $f_j(0) = z$ and $f_j^{\prime}(0) > 0$. The domains $ D'_j $ converge to the half-space $ {\mathcal{H}}$ and therefore the Carathéodory kernel convergence theorem (see [@CaratheodoryKernelConvergence] for example), shows that $f_j$ admits a holomorphic limit $f : \mathbb{D} {\longrightarrow}{\mathcal{H}}$ which is a biholomorphism. Note that $f(0) = z$ and $f'(0) > 0$. We know that in the case of simply connected domains, the Carathéodory and hyperbolic metric coincide and so $$c_{D'_j}(z) = \lambda_{D'_j}(z)$$ for all large $ j $ and hence $$c_{{\mathcal{H}}}(z) = \lambda_{{\mathcal{H}}}(z).$$ It is known that $$\lambda_{D'_j}(z) = \frac{1}{f_j^{\prime}(0)} \,\, \mbox{and} \,\, \lambda_{{\mathcal{H}}}(z) = \frac{1}{f^{\prime}(0)}.$$ From this we conclude that $c_{D'_j}(z)$ converges to $c_{{\mathcal{H}}}(z)$ as $j \to \infty$. Under the biholomorphism $ T_j^{-1} $, the pull back metric $$(T_j^{-1})^*(c_{{U \cap D}})(z) = c_{D'_j}(z)$$ for all $ z \in D_j$. That is $$c_{{U \cap D}}(T_j^{-1}(z))\vert (T_j^{-1})^{\prime}(z)\vert = c_{D'_j}(z).$$ Putting $ z = 0 $, we obtain $$c_{{U \cap D}}(p_j)(-\psi(p_j)) = c_{D'_j}(0).$$ As we have seen above $ c_{D'_j}(z) $ converges to $ c_{{\mathcal{H}}}(z) $, for all $ z \in {\mathcal{H}}$, as $ j \to \infty $.
Therefore, $ c_{{U \cap D}}(p_j)(-\psi(p_j)) $, which is equal to $ c_{D'_j}(0) $, converges to $ c_{{\mathcal{H}}}(0) $ as $ j \to \infty $. Since $ \{p_j\} $ is an arbitrary sequence, we conclude that $$\lim_{z \to p}c_{{U \cap D}}(z)(-\psi(z)) = c_{{\mathcal{H}}}(0).$$ Since $$\lim_{z \to p}\frac{c_{U \cap D}(z)}{c_D(z)} = 1,$$ we get $$\lim_{z \to p}c_{D}(z)(-\psi(z)) = c_{{\mathcal{H}}}(0).$$ The Ahlfors map, the Szegő kernel and the Garabedian kernel of the half-space $${\mathcal{H}}= \{z : Re(\bar \omega z - 1) < 0\}$$ at $ a \in {\mathcal{H}}$ are given by $$f_{{\mathcal{H}}}(z, a) = \vert \omega \vert\frac{z - a}{2 - \omega \bar a - \bar \omega z},$$ $$S_{{\mathcal{H}}}(z, a) = \frac{1}{2 \pi}\frac{\vert \omega \vert}{2 - \omega \bar a - \bar \omega z}$$ and $$L_{{\mathcal{H}}}(z, a) = \frac{1}{2 \pi}\frac{1}{z - a}$$ respectively. Let $ D $ be a $ C^{\infty} $-smooth bounded domain. Choose $ p \in \partial D $ and a sequence $ p_j $ in $ D $ that converges to $ p $. The sequence of scaled domains $ D_j = T_j(D) $, where $ T_j $ are as in \[Eq:ScalingMap\], converges to the half-space $ {\mathcal{H}}$ as before. Fix $ a \in {\mathcal{H}}$ and note that $ a \in D_j $ for $ j $ large and that $ 0 \in D_j $ for all $ j \geq 1 $. Let $ f_j(z, a) $ be the Ahlfors map such that $ f_j(a, a) = 0 $, $ f_j^{\prime}(a, a) > 0 $ and suppose that $ S_j(z, a) $ and $ L_j(z, a) $ are the Szegő and Garabedian kernels for $ D_j $ respectively. \[Prop:ConvergenceAhlforsSzegoGarabeidian\] In this situation, the sequence of Ahlfors maps $ f_j(z, a) $ converges to $ f_{{\mathcal{H}}}(z, a) $ uniformly on compact subsets of $ {\mathcal{H}}$. The Szegő kernels $ S_j(z, a) $ converge to $ S_{{\mathcal{H}}}(z, a) $ uniformly on compact subsets of $ {\mathcal{H}}$. Moreover, $ S_j(z, w) $ converges to $ S_{{\mathcal{H}}}(z, w) $ uniformly on every compact subset of $ {\mathcal{H}}\times {\mathcal{H}}$.
Finally, the Garabedian kernels $ L_j(z, a) $ converge to $ L_{{\mathcal{H}}}(z, a) $ uniformly on compact subsets of $ {\mathcal{H}}\setminus \{a\} $. In the proof of the previous proposition we have seen that $ c_{D_j}(a) $ converges to $ c_{{\mathcal{H}}}(a) $ as $ j \to \infty $. By definition $f_j^{\prime}(a, a) = c_{D_j}(a)$ and $ f_{{\mathcal{H}}}^{\prime}(a, a) = c_{{\mathcal{H}}}(a) $. Therefore, $f_j^{\prime}(a, a)$ converges to $f_{{\mathcal{H}}}^{\prime}(a, a)$ as $ j \to \infty $. Now, we shall show that $ f_j(z, a) $ converges to $f_{{\mathcal{H}}}(z, a)$ uniformly on compact subsets of ${\mathcal{H}}$. Since the sequence of the Ahlfors maps $\{f_j(z, a)\}$ forms a normal family of holomorphic functions and $ f_j(a, a) = 0 $, there exists a subsequence $\{f_{k_j}(z, a)\}$ of $\{f_j(z, a)\}$ that converges to a holomorphic function $f$ uniformly on every compact subset of the half-space ${\mathcal{H}}$. Then $f(a) = 0$ and, as $f_j^{\prime}(a, a)$ converges to $f_{{\mathcal{H}}}^{\prime}(a, a)$, we have $f^{\prime}(a) = f_{{\mathcal{H}}}^{\prime}(a, a)$. Thus, we have $f: {\mathcal{H}}\longrightarrow \mathbb{D}$ such that $f(a) = 0$ and $f^{\prime}(a) = f_{{\mathcal{H}}}^{\prime}(a, a)$. By the uniqueness of the Ahlfors map, we conclude that $f(z) = f_{{\mathcal{H}}}(z, a)$ for all $ z \in {\mathcal{H}}$. Thus, from above, we see that every limiting function of the sequence $\{f_j(z, a)\}$ is equal to $f_{{\mathcal{H}}}(z, a)$. Hence, we conclude that $\{f_j(z, a)\}$ converges to $f_{{\mathcal{H}}}(z, a)$ uniformly on every compact subset of $ {\mathcal{H}}$. Next, we shall show that $ S_j(\zeta, z) $ converges to $ S_{{\mathcal{H}}}(\zeta, z) $ uniformly on compact subsets of ${\mathcal{H}}\times {\mathcal{H}}$. First, we show that ${S_j}(\zeta, z)$ is locally uniformly bounded. Let $z_0, \,\zeta_0 \in {\mathcal{H}}$ and choose $ r_0 > 0 $ such that the closed balls $ \overline{B}(z_0, r_0) $, $ \overline{B}(\zeta_0, r_0) \subset {\mathcal{H}}$.
Since $ D_j $ converges to $ {\mathcal{H}}$, $ \overline{B}(z_0, r_0) $, $ \overline{B}(\zeta_0, r_0) \subset D_j $ for $ j $ large. By the monotonicity of the Carathéodory metric $${c_{D_j}}(z) \leq \frac{r_0}{r_0^2 - |z - z_0|^2}$$ for all $ z \in B(z_0,r_0)$, and $${c_{D_j}}(\zeta) \leq \frac{r_0}{r_0^2 - |\zeta - \zeta_0|^2}$$ for all $ \zeta \in B(\zeta_0,r_0) $ and for $j$ large. Therefore, if $ 0 < r < r_0 $ and $ (z, \zeta) \in \overline{B}(z_0, r) \times \overline{B}(\zeta_0, r) $, it follows that $${c_{D_j}}(z) \leq \frac{r_0}{r_0^2 - r^2}$$ and $${c_{D_j}}(\zeta) \leq \frac{r_0}{r_0^2 - r^2}$$ for $j$ large. Using the fact that ${c_{D_j}}(z) = 2\pi{S_j}(z,z)$, we have $${S_j}(z, z) \leq \frac{1}{2 \pi} \frac{r_0}{r_0^2 - r^2}$$ for all $ z \in \overline{B}(z_0,r) $ and $${S_j}(\zeta, \zeta) \leq \frac{1}{2 \pi} \frac{r_0}{r_0^2 - r^2}$$ for all $ \zeta \in \overline{B}(\zeta_0,r) $ and for $j$ large. By the Cauchy-Schwarz inequality, $$| {S_j}(\zeta, z) |^2 \leq | {S_j}(\zeta, \zeta)| | {S_j}(z, z)|$$ which implies $$| {S_j}(\zeta, z) | \leq \frac{1}{2 \pi} \frac{r_0}{r_0^2 - r^2}$$ for all $ (\zeta, z) \in \overline{B}(\zeta_0,r) \times \overline{B}(z_0,r) $ and for $ j $ large. This shows that ${S_j}(\zeta, z)$ is locally uniformly bounded. Hence the sequence $\{{S_j}(\zeta,z)\}$, holomorphic in the two variables $\zeta, \overline{z}$, is a normal family. Now, we claim that the sequence $\{{S_j}(\zeta,z)\}$ converges to a unique limit. Let $S$ be a limiting function of $\{{S_j}(\zeta,z)\}$. Since $S$ is holomorphic in $\zeta, \overline{z}$, the power series expansion of the difference $S - S_{{\mathcal{H}}}$ around the point $0$ has the form $$S(\zeta, z) - S_{{\mathcal{H}}}(\zeta, z) = \sum_{l,k =0}^{\infty}a_{l,k}\zeta^l\overline{z}^k.$$ Recall that $2\pi S_{j}(z,z) = {c_{D_j}}(z)$ converges to $c_{{\mathcal{H}}}(z)$ as $ j \to \infty $ and $2\pi S_{{\mathcal{H}}}(z,z) = c_{{\mathcal{H}}}(z)$.
From this we infer that $S(z, z) = S_{{\mathcal{H}}}(z, z)$ for all $ z \in {\mathcal{H}}$ and hence $$\sum_{l,k =0}^{\infty}a_{l,k}z^l\overline{z}^k = 0.$$ By substituting $z = |z|e^{i\theta}$ in the above equation, we get $$\sum_{l,k =0}^{\infty}a_{l,k}|z|^{(l + k)}e^{i(l - k)\theta} = 0$$ and hence $$\sum_{l +k = n}a_{l,k}e^{i(l - k)\theta} = 0$$ for all $ n \geq 0$. It follows that $a_{l,k} = 0$ for all $l, k \geq 0 $. Hence we have $$S(\zeta, z) = S_{{\mathcal{H}}}(\zeta, z)$$ for all $ \zeta, z \in {\mathcal{H}}$. So any limiting function of $\{{S_j}(\zeta,z)\}$ is equal to $ S_{{\mathcal{H}}}(\zeta,z) $. This shows that $\{{S_j}(\zeta,z)\}$ converges to $ S_{{\mathcal{H}}}(\zeta,z) $ uniformly on compact subsets of $ {\mathcal{H}}\times {\mathcal{H}}$. Finally, we show that the sequence of the Garabedian kernel functions $\{L_j(z, a)\}$ also converges to the Garabedian kernel function $ L_{{\mathcal{H}}}(z, a) $ of the half-space $ {\mathcal{H}}$ uniformly on every compact subset of ${\mathcal{H}}\setminus \{a\}$. This will be done by showing that $\{L_j(z, a)\}$ is a normal family and has a unique limiting function. To show that $\{L_j(z, a)\}$ is a normal family, it is enough to show that $L_j(z, a)$ is locally uniformly bounded on $ {\mathcal{H}}\setminus \{a\} $. Since $ z = a $ is a zero of $ f_{{\mathcal{H}}}(z, a) $, it follows that for an arbitrary compact set $ K \subset {\mathcal{H}}\setminus \{a\} $, the infimum of $ \vert f_{{\mathcal{H}}}(z, a) \vert $ on $ K $ is positive. Let $ m > 0 $ be this infimum. As $ f_j(z, a) \to f_{{\mathcal{H}}}(z, a)$ uniformly on $ K $, $$\vert f_j(z, a) \vert > \frac{m}{2}$$ and hence $$\frac{1}{\vert f_j(z, a) \vert} < \frac{2}{m}$$ for all $ z \in K $ and $ j $ large. Since $ S_j(z, a) $ converges to $ S_{{\mathcal{H}}}(z, a) $ uniformly on $ K $, there exists an $ M > 0 $ such that $ \vert S_j(z, a) \vert \leq M $ for $ j $ large.
As $$L_j(z, a) = \frac{S_j(z, a)}{f_j(z, a)}$$ for all $z \in D_j \setminus \{a\}$, we obtain $$|L_j(\zeta, a)| \leq \frac{2M}{m}$$ for all $\zeta \in K$ and for $j$ large. This shows that $L_j(z, a)$ is locally uniformly bounded on $ {\mathcal{H}}\setminus \{a\} $, and hence a normal family. Finally, any limit of $ L_j(z, a) $ must be $ S_{{\mathcal{H}}}(z, a)\big/ f_{{\mathcal{H}}}(z, a) = L_{{\mathcal{H}}}(z, a) $ on $ {\mathcal{H}}\setminus \{a\} $ and hence $ L_j(z, a) $ converges to $ L_{{\mathcal{H}}}(z, a) $ uniformly on compact subsets of $ {\mathcal{H}}\setminus \{a\} $. This generalizes the main result of [@SuitaI] to the case of a sequence of domains $ D_j $ that converges to $ {\mathcal{H}}$ as described above. \[Pr:UniformConvergenceOfCaraParDerCara\] Let $\{D_j\}$ be the sequence of domains that converge to the half-space ${\mathcal{H}}$ in the Hausdorff sense as in the previous proposition. Then the sequence of Carathéodory metrics $c_{D_j}$ of the domains $D_j$ converges to the Carathéodory metric $c_{{\mathcal{H}}}$ of the half-space $ {\mathcal{H}}$ uniformly on every compact subset of ${\mathcal{H}}$. Moreover, all the partial derivatives of $c_{D_j}$ converge to the corresponding partial derivatives of $c_{{\mathcal{H}}}$, and the sequence of curvatures $ \kappa(z : D_j) $ converges to $ \kappa(z : {\mathcal{H}}) = -4 $, which is the curvature of the half-space $ {\mathcal{H}}$, uniformly on every compact subset of $ {\mathcal{H}}$. Since $$c_{D_j}(z) = 2 \pi S_j(z, z)$$ and $ S_j(z, z) $ converges to $ S_{{\mathcal{H}}}(z, z) $ uniformly on compact subsets of $ {\mathcal{H}}$, we conclude that the sequence of Carathéodory metrics $c_{D_j}$ of the domains $D_j$ also converges to the Carathéodory metric $c_{{\mathcal{H}}}$ of the half-space $ {\mathcal{H}}$ uniformly on every compact subset of ${\mathcal{H}}$.
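As a consistency check (a sketch using the explicit half-space formulas recorded earlier), the diagonal identity $c_{{\mathcal{H}}} = 2\pi S_{{\mathcal{H}}}(\cdot, \cdot)$ can be verified directly from the Ahlfors map of ${\mathcal{H}}$:

```latex
% Differentiating f_H(z,a) = |w| (z - a) / (2 - w \bar a - \bar w z) in z gives
\[
f_{\mathcal H}'(z, a)
  = \vert \omega \vert \,
    \frac{(2 - \omega \bar a - \bar \omega z) + \bar\omega (z - a)}
         {(2 - \omega \bar a - \bar \omega z)^2},
\qquad\text{so}\qquad
f_{\mathcal H}'(a, a) = \frac{\vert \omega \vert}{2 - 2\,{\rm Re}(\bar\omega a)}.
\]
% Since the Ahlfors map is extremal, c_H(a) = f_H'(a,a); on the other hand,
% the Szego kernel on the diagonal gives
\[
2\pi S_{\mathcal H}(a, a)
  = \frac{\vert \omega \vert}{2 - \omega \bar a - \bar \omega a}
  = \frac{\vert \omega \vert}{2 - 2\,{\rm Re}(\bar\omega a)}
  = c_{\mathcal H}(a).
\]
```

This is the same identity $c = 2\pi S(\cdot,\cdot)$ used for the domains $D_j$ in the proof above.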
To show that the derivatives of $ c_{D_j} $ converge to the corresponding derivatives of $ c_{{\mathcal{H}}} $, it is enough to show the convergence in a neighborhood of a point in ${\mathcal{H}}$. Let $ D^2 = D^2((z_0, \zeta_0); (r_1, r_2)) $ be a bidisk around the point $ (z_0, \zeta_0) \in {\mathcal{H}}\times {\mathcal{H}}$ which is relatively compact in ${\mathcal{H}}\times {\mathcal{H}}$. Then $ D^2 $ is relatively compact in $D_j \times D_j$ for all large $j$. Let $C_1 = \{z : |z -z_0| = r_1\}$ and $C_2 = \{\zeta : |\zeta -\zeta_0| = r_2\}$. Since $S_j(z, \zeta)$ is holomorphic in $z$ and antiholomorphic in $\zeta$, the Cauchy integral yields $$\frac{\partial^{m +n}S_j(z, \zeta)}{\partial z^m\partial \bar{\zeta}^n } = \frac{m!n!}{(2\pi i)^2} \int_{C_1} \int_{C_2}\frac{S_j(\xi_1, \xi_2)}{(\xi_1 - z)^{m + 1}(\overline{\xi_2 - \zeta})^{n + 1}}d\xi_1 d\overline{\xi_2}$$ and the standard length estimate for these integrals shows that, at the centre $ (z_0, \zeta_0) $, $$\left|\frac{\partial^{m +n}S_j(z_0, \zeta_0)}{\partial z^m\partial \bar{\zeta}^n }\right| \leq m!n!\sup_{(\xi_1, \xi_2) \in D^2} |S_j(\xi_1, \xi_2)|\frac{1}{r_1^m r_2^n}.$$ Now applying the above inequality to the function $S_j - S_{{\mathcal{H}}}$, we get $$\left|\frac{\partial^{m +n}}{\partial z^m\partial \bar{\zeta}^n }( S_j - S_{{\mathcal{H}}} )(z_0, \zeta_0)\right| \leq m!n!\sup_{(\xi_1, \xi_2) \in D^2} | S_j(\xi_1, \xi_2) - S_{{\mathcal{H}}}(\xi_1, \xi_2) |\frac{1}{r_1^m r_2^n}.$$ Since $S_j(z, \zeta) \to S_{\mathcal{H}}(z, \zeta)$ uniformly on $ D^2 $ and the point $ (z_0, \zeta_0) $ was arbitrary, all the partial derivatives of $S_j(z, \zeta)$ converge to the corresponding partial derivatives of $S_{\mathcal{H}}(z, \zeta)$ uniformly on every compact subset of ${\mathcal{H}}\times {\mathcal{H}}$.
Recall that the curvature of the Carathéodory metric $ c_{D_j} $ is given by $$\kappa(z : c_{D_j}) = -c_{D_j}(z)^{-2}\Delta \log c_{D_j}(z)$$ which upon simplification gives $$\kappa(z : c_{D_j}) = 4c_{D_j}^{-4}\left( \partial^{0\bar{1}}c_{D_j}\partial^{1\bar{0}}c_{D_j} - c_{D_j}\partial^{1\bar{1}}c_{D_j} \right)$$ where $ \partial^{i \bar j}c_{D_j} = \partial^{i + j}c_{D_j}\big/\partial^iz \partial^j \bar z $ for $ i, j = 0, 1 $. Since all the partial derivatives of $ c_{D_j} $ converge uniformly to the corresponding partial derivatives of $ c_{{\mathcal{H}}} $, the sequence of curvatures $$\kappa(z : c_{D_j}) = 4c_{D_j}^{-4}\left( \partial^{0\bar{1}}c_{D_j}\partial^{1\bar{0}}c_{D_j} - c_{D_j}\partial^{1\bar{1}}c_{D_j} \right)$$ converges to $$\kappa(z : c_{{\mathcal{H}}}) = 4c_{{\mathcal{H}}}^{-4}\left( \partial^{0\bar{1}}c_{{\mathcal{H}}}\partial^{1\bar{0}}c_{{\mathcal{H}}} - c_{{\mathcal{H}}}\partial^{1\bar{1}}c_{{\mathcal{H}}} \right) = -4$$ as $ j \to \infty $. Note that the higher order curvatures of the Carathéodory metrics of the domains $ D_j $ are given by $$\kappa_n(z : c_{D_j}) =c_{D_j}(z)^{-(n + 1)^2}\big/ J^{D_j}_n(z).$$ By appealing to the convergence of $c_{D_j}$ and its partial derivatives to $c_{{\mathcal{H}}}$ and its corresponding partial derivatives uniformly on every compact subset of $ {\mathcal{H}}$, we infer that $\kappa_n(z : c_{D_j})$ converges to $\kappa_n(z : c_{{\mathcal{H}}}) = - 4 \left(\prod_{k = 1}^n k!\right)^2$ uniformly on every compact subset of ${\mathcal{H}}$. The fact that $$\kappa_n(z : c_{{\mathcal{H}}}) = - 4 \left(\prod_{k = 1}^n k!\right)^2$$ follows from a direct calculation using the Carathéodory metric of $ {\mathcal{H}}$. 
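For instance, with the curvature normalization used here, the value $\kappa(z : c_{{\mathcal{H}}}) = -4$ can be checked by a standard computation, included for completeness. On the right half-plane $\{w : Re \, w > 0\}$, to which ${\mathcal{H}}$ is mapped by the linear change of variable $w = 1 - \overline{\omega}z$ (which leaves the curvature unchanged), the Carathéodory metric coincides with the hyperbolic metric $\rho(w) = 1\big/(w + \bar w)$, and $$\partial\bar\partial \log \rho(w) = -\partial\bar\partial \log(w + \bar w) = \frac{1}{(w + \bar w)^2},$$ so that $$\kappa = -\rho^{-2}\Delta \log \rho = -(w + \bar w)^2 \cdot \frac{4}{(w + \bar w)^2} = -4.$$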
Proof of Theorem \[T:AumannCaratheodoryConstant\] ================================================= Scale $ D $ near $ p $ as explained earlier along a sequence $ p_j \rightarrow p $. Let $ D_j = T_j(D) $ as before and let $ \Omega_j $ be the Aumann-Carathéodory constant of $ D_j $ and $ D' $. Then $ \Omega_j(0, w) = \Omega(p_j, w) $ for all $ j $. So, it suffices to study the behaviour of $ \Omega_j(0, w) $ as $ j \rightarrow \infty $. Let $ z \in D_j $ and let $ \Omega_{j}(z, w) $ be the Aumann-Carathéodory constant at $ z $ and $ w $. We shall show that $ \Omega_{j}(z, w) $ converges to $1$ as $ j \to \infty $. Let $ c_{D_j}$ and $ \lambda_{D_j}$ be the Carathéodory and hyperbolic metrics on $D_j$ respectively, for all $j\geq 1$. Using the inequality $$\Omega_{0} \leq \Omega \leq 1,$$ we have $$\label{Eq:RatioCaraHyper} \frac{ c_{D_j}(z) }{ \lambda_{D_j}(z) }\leq \Omega_{j}(z, w) \leq 1.$$ By Proposition \[Pr:UniformConvergenceOfCaraParDerCara\], $c_{D_j}(z)$ converges to $c_{{\mathcal{H}}}(z)$ uniformly on every compact subset of ${\mathcal{H}}$ as $ j \to \infty $. Again, using the scaling technique, we also have that the hyperbolic metric $\lambda_{D_j}(z) $ converges to $ \lambda_{{\mathcal{H}}}(z)$ uniformly on every compact subset of ${\mathcal{H}}$ as $ j \to \infty $. For simply connected domains, in particular for the half-space $ {\mathcal{H}}$, the hyperbolic and Carathéodory metrics coincide, so we have $ c_{{\mathcal{H}}}(z) = \lambda_{{\mathcal{H}}}(z) $. Consequently, by (\[Eq:RatioCaraHyper\]), we conclude that $ \Omega_{j}(z, w) $ converges to $1$, as $ j \to \infty $, uniformly on every compact subset of $ {\mathcal{H}}$. In particular, we have $ \Omega_{j}(0, w) \to 1 $ as $ j \to \infty $. This completes the proof. Let $D \subset {\mathbb{C}}$ be a domain and let $ p \in \partial D $ be a $ C^2 $-smooth boundary point. Then the Aumann-Carathéodory rigidity constant $$\Omega_D(z) \rightarrow 1$$ as $ z \rightarrow p $. 
Proof of Theorem \[T:SugawaMetric\] =================================== For a fixed $ p \in D $, let $ \phi_D(\zeta, p) $ be the extremal holomorphic differential for the metric $ q_D(z) \vert dz \vert $. Recall Lemma 2.2 from [@SugawaMetric], which describes how $ \phi_D(\zeta, p) $ transforms under a biholomorphic map. Let $ F : D \longrightarrow D' $ be a biholomorphic map. Then $$\phi_{D'}(F(\zeta), F(p)) \left(F^{\prime}(\zeta)\right)^2\frac{\overline{F^{\prime }(p)}}{F^{\prime}(p)} = \phi_D(\zeta, p).$$ \[L:ExtremalFunctionSugawaMetricHalfSpace\] Let ${\mathcal{H}}= \{z \in {\mathbb{C}}: Re(\overline{\omega}z -1) < 0\}$ be a half-space, and let $\omega_0 \in {\mathcal{H}}$. Then the extremal function of $ q_{{\mathcal{H}}}(z) \vert dz \vert $ of ${\mathcal{H}}$ at $\omega_0 \in {\mathcal{H}}$ is $$\phi_{{\mathcal{H}}}(z, \omega_0) = |\omega|^2 \frac{(2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0)^2}{(2 - \omega\overline{\omega_0} - \overline{\omega}z)^4}$$ for all $z \in {\mathcal{H}}$. The extremal function for $q_{\mathbb{D}}$ at the point $z = 0$ is $$\label{Eq: HurAtZeroDisk} \phi_{\mathbb{D}}(\zeta, 0) = 1$$ for all $\zeta \in \mathbb{D}$. Also $$f(z) = \frac{|\omega|(z - \omega_0)}{2 - \omega\overline{\omega_0} - \overline{\omega}z}$$ is a Riemann map between $ {\mathcal{H}}$ and the unit disk $ {\mathbb{D}}$. Computing the derivative of the map $ f $, we get $$f^{\prime}(z) = |\omega| \frac{(2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0)}{(2 - \omega\overline{\omega_0} - \overline{\omega}z)^2}$$ for all $ z \in {\mathcal{H}}$. 
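For completeness, a direct expansion confirms that $f$ maps ${\mathcal{H}}$ into ${\mathbb{D}}$. Writing $a = \overline{\omega}z$ and $b = \overline{\omega}\omega_0$, so that $\omega\overline{\omega_0} = \bar{b}$ and $|\omega||z - \omega_0| = |a - b|$, one finds $$\vert 2 - \omega\overline{\omega_0} - \overline{\omega}z \vert^2 - \vert \omega \vert^2 \vert z - \omega_0 \vert^2 = \vert 2 - \bar{b} - a \vert^2 - \vert a - b \vert^2 = 4\big(1 - Re(\overline{\omega}z)\big)\big(1 - Re(\overline{\omega}\omega_0)\big) > 0$$ for $z, \omega_0 \in {\mathcal{H}}$, since $Re(\overline{\omega}z) < 1$ and $Re(\overline{\omega}\omega_0) < 1$ there. Hence $\vert f(z) \vert < 1$ on ${\mathcal{H}}$.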
By substituting the values of $f^{\prime}(z)$, $f^{\prime}(\omega_0)$ and $\overline{f^{\prime}(\omega_0)}$ in the transformation formula, we get $$\label{Eq:TansForwithPhiD} \phi_{{\mathcal{H}}}(z, \omega_0) = |\omega|^2 \frac{(2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0)^2}{(2 - \omega\overline{\omega_0} - \overline{\omega}z)^4}\phi_{{\mathbb{D}}}(f(z), f(\omega_0)).$$ Note that $f(\omega_0) = 0$, therefore by (\[Eq: HurAtZeroDisk\]), we have $\phi_{{\mathbb{D}}}(f(z), f(\omega_0)) = 1$. So from (\[Eq:TansForwithPhiD\]) $$\phi_{{\mathcal{H}}}(z, \omega_0) = |\omega|^2 \frac{(2 - \omega\overline{\omega_0} - \overline{\omega}\omega_0)^2}{(2 - \omega\overline{\omega_0} - \overline{\omega}z)^4}$$ for all $z \in {\mathcal{H}}$. \[L:ConvergenceOfIntegral\] Let the sequence of domains $ D_j $ converge to $ {\mathcal{H}}$ as before. Let $ z_j $ be a sequence in $ {\mathcal{H}}$ that converges to $ z_0 \in {\mathcal{H}}$. Let $ \phi_{{\mathcal{H}},j} $ be the extremal function of $ q_{{\mathcal{H}}}(z)\vert dz \vert $ at $ z_j $ for all $ j $ and $ \phi_{{\mathcal{H}}} $ be the extremal function for $ q_{{\mathcal{H}}}(z)\vert dz \vert $ at $ z_0 $. Then $$\int_{D_j}\vert\phi_{{\mathcal{H}}, j}\vert \to \int_{{\mathcal{H}}}\vert \phi_{{\mathcal{H}}}\vert = \pi$$ as $j \to \infty$. By Lemma \[L:ExtremalFunctionSugawaMetricHalfSpace\], we see that the extremal functions of $ q_{{\mathcal{H}}}(z) \vert dz \vert $ of the half-space ${\mathcal{H}}$ at the points $z_j$ and $z_0$ are given by $$\phi_{{\mathcal{H}}, j}(z, z_j) = |\omega|^2 \frac{(2 - \omega\overline{z_j} - \overline{\omega}z_j)^2}{(2 - \omega\overline{z_j} - \overline{\omega}z)^4}$$ and $$\phi_{{\mathcal{H}}}(z, z_0) = |\omega|^2 \frac{(2 - \omega\overline{z_0} - \overline{\omega}z_0)^2}{(2 - \omega\overline{z_0} - \overline{\omega}z)^4}$$ respectively, for all $z \in {\mathcal{H}}$ and for all $j \geq 1$. 
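The value $\int_{{\mathcal{H}}}\vert \phi_{{\mathcal{H}}}\vert = \pi$ appearing in the lemma can also be seen directly. By Lemma \[L:ExtremalFunctionSugawaMetricHalfSpace\], $\phi_{{\mathcal{H}}}(z, z_0) = f^{\prime}(z)^2$ where $f$ is the Riemann map of ${\mathcal{H}}$ onto ${\mathbb{D}}$ with $f(z_0) = 0$ and $f^{\prime}(z_0) > 0$, and therefore $$\int_{{\mathcal{H}}}\vert \phi_{{\mathcal{H}}}(z, z_0)\vert = \int_{{\mathcal{H}}}\vert f^{\prime}(z)\vert^2 = {\rm Area}(f({\mathcal{H}})) = {\rm Area}({\mathbb{D}}) = \pi$$ by the change of variables formula, $f$ being injective.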
By substituting $Z_j = \frac{2 - \omega\overline{z_j}}{\overline{\omega}}$ and $Z_0 = \frac{2 - \omega\overline{z_0}}{\overline{\omega}}$ in the above equations, we can rewrite them as $$\phi_{{\mathcal{H}}, j}(z,z_j) = \frac{|\omega|^2}{\overline{\omega}^4}\frac{(2 - \omega\overline{z_j} - \overline{\omega}z_j )^2}{(Z_j - z)^4}$$ and $${\phi_{\mathcal{H}}}(z,z_0) = \frac{|\omega|^2}{\overline{\omega}^4}\frac{(2 - \omega\overline{z_0} - \overline{\omega}z_0 )^2}{(Z_0 - z)^4}$$ respectively, for all $z \in {\mathcal{H}}$ and for all $j \geq 1$. Note that $ Z_0 \notin {\mathcal{H}}$ and hence $ Z_j \notin {\mathcal{H}}$ for $ j $ large. Define $$\phi_j(z) = {\chi_{D_j}}(z)|\phi_{{\mathcal{H}},j}(z, z_j)|$$ and $$\phi(z) = {\chi_{\mathcal{H}}}(z) |{\phi_{\mathcal{H}}}(z, z_0)|.$$ Here $\chi_{A}$, for $A \subset {\mathbb{C}}$, denotes the characteristic function of $A$. Note that $\phi_j$ and $ \phi $ are measurable functions on $ {\mathbb{C}}$ and $ \phi_j \to \phi $ pointwise almost everywhere on ${\mathbb{C}}$. Next, we shall show that there exists a measurable function $g$ on ${\mathbb{C}}$ satisfying $$\vert \phi_j \vert \leq g$$ for all $j\geq 1$ and $$\int_{{\mathbb{C}}}|g| < \infty.$$ First, since $Z_j \to Z_0$, choose $R > 0$ such that $Z_j \in B(Z_0, R\big/2)$ for all $j$ large – for simplicity, for all $j \geq 1$. Then $$|Z_j - z| \geq \frac{R}{2}$$ for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$ and for all $j \geq 1$. Again, by the triangle inequality, we have $$\left|\frac{|Z_0 - z|}{|Z_j - z|} - 1 \right| \leq \frac{|Z_0 - Z_j|}{|Z_j - z|} \leq 1$$ for all $j \geq 1$ and for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$. Therefore $$\frac{1}{|Z_j - z|} \leq \frac{2}{|Z_0 - z|}$$ for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$ and for all $j \geq 1$ and this implies that $$\frac{1}{|\omega|^2} \frac{|2 - \omega\overline{z_j} - \overline{\omega}z_j |^2}{|Z_j - z|^4} \leq \frac{16}{|\omega|^2}\frac{|2 - \omega\overline{z_j} - \overline{\omega}z_j |^2}{|Z_0 - z|^4}$$ for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$ and for all $j \geq 1$. 
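The claim $Z_0 \notin {\mathcal{H}}$ follows at once from the defining inequality of ${\mathcal{H}}$: since $Re(\omega\overline{z_0}) = Re(\overline{\omega}z_0)$, $$Re(\overline{\omega}Z_0 - 1) = Re(2 - \omega\overline{z_0}) - 1 = 1 - Re(\overline{\omega}z_0) > 0$$ for $z_0 \in {\mathcal{H}}$, so $Z_0$ lies in the complement of $\overline{{\mathcal{H}}}$, and $Z_j \notin {\mathcal{H}}$ for $j$ large since $Z_j \to Z_0$.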
Note that there exists $ K > 0 $ such that $$|2 - \omega\overline{z_j} - \overline{\omega}z_j |^2 \leq K$$ for all $j \geq 1$ since $ \{z_j \} $ converges. Hence $$|\phi_j(z)| \leq \frac{16 K}{|\omega|^2}\frac{1}{|Z_0 - z|^4}$$ for all $z \in {\mathbb{C}}\setminus B(Z_0, R)$ and for all $j \geq 1$. If we set $$g(z) = \begin{cases} \frac{16 K}{|\omega|^2}\frac{1}{|Z_0 - z|^4}, &\mbox{if} \,\, z \in {\mathbb{C}}\setminus B(Z_0, R)\\ 0 & \mbox{if}\,\, z \in {\bar{B}(Z_0, R)}\end{cases}$$ we get that $$|\phi_j(z)| \leq g(z)$$ for all $z \in {\mathbb{C}}$ and for all $j \geq 1$. Now note that, using polar coordinates centred at $Z_0$, $$\begin{split} \int_{{\mathbb{C}}}g &= \int_{{\mathbb{C}}\setminus B(Z_0, R)} \frac{16K}{|\omega|^2}\frac{1}{|Z_0 - z|^4} = \frac{16K}{|\omega|^2} \int_{R}^{\infty} \int_0^{2\pi} \frac{1}{r^3}\, d\theta\, dr = \frac{16K}{|\omega|^2} \frac{\pi}{R^2} < \infty. \end{split}$$ The dominated convergence theorem now shows that $$\int_{{\mathbb{C}}}\phi_j \to \int_{{\mathbb{C}}}\phi$$ as $ j \to \infty $. By construction, $$\int_{{\mathbb{C}}}\phi_j = \int_{D_j}\vert\phi_{{\mathcal{H}}, j}(z,z_j)\vert$$ and $$\int_{{\mathbb{C}}}\phi = \int_{{\mathcal{H}}}\vert\phi_{{\mathcal{H}}}(z, z_0)\vert.$$ Moreover, since $\phi_{{\mathcal{H}}}(\cdot, z_0)$ is the extremal function at $z_0$, $$\int_{{\mathcal{H}}}\vert\phi_{{\mathcal{H}}}(z, z_0)\vert = \pi$$ and this completes the proof. \[Pr:ConvergenceOfTheSugawaMetricUnderScaling\] Let $\{D_j\}$ be the sequence of domains that converges to the half-space ${\mathcal{H}}$ as in Proposition \[Prop:ConvergenceAhlforsSzegoGarabeidian\]. Then $q_{D_j}$ converges to $q_{{\mathcal{H}}}$ uniformly on compact subsets of $ {\mathcal{H}}$. If possible, assume that $q_{D_j}$ does not converge to $q_{{\mathcal{H}}}$ uniformly on every compact subset of ${\mathcal{H}}$. 
Then there exists a compact subset $K$ of ${\mathcal{H}}$ – without loss of generality, we may assume that $K \subset D_j$ for all $j \geq 1$ – an $\epsilon_{0} >0 $, a sequence of integers $\{ k_j \}$ and points $\{ z_{k_j}\}\subset K$ such that $$|q_{D_{k_j}}(z_{k_j})- q_{\mathcal{H}}(z_{k_j})|>\epsilon_0.$$ Since $K$ is compact, we may assume that $z_{k_j}$ converges to a point $z_0 \in K$. Using the continuity of $ q_{{\mathcal{H}}}(z) \vert dz \vert $, we have $$|q_{\mathcal{H}}(z_{k_j})- q_{\mathcal{H}}(z_0)|< \epsilon_0\big/2$$ for $j$ large, which implies $$|q_{D_{k_j}}(z_{k_j})- q_{\mathcal{H}}(z_0)|> \epsilon_0\big/2$$ by the triangle inequality. There exist extremal functions $\phi_{k_j} \in A(D_{k_j})$ with $ \Vert \phi_{k_j} \Vert \leq \pi $ such that $$q_{D_{k_j}}^2(z_{k_j}) = \phi_{k_j}(z_{k_j}).$$ We claim that the collection $\{\phi_{k_j}\}$ is a normal family. To show this, it is enough to show that $\{\phi_{k_j}\}$ is locally uniformly bounded. Let $\zeta_0 \in {\mathcal{H}}$ and let $R>0$ be such that $\overline{B}(\zeta_0, 2R) \subset {\mathcal{H}}$. Then $\overline{B}(\zeta_0, 2R) \subset D_{k_j}$ for all $j$ large. By the mean value property, we have $$\begin{split} |\phi_{k_j}(\zeta)| = \frac{1}{2\pi} \left|\int_{0}^{2\pi}\phi_{k_j}(\zeta + re^{i\theta})d\theta\right| \leq \frac{1}{2\pi} \int_{0}^{2\pi}|\phi_{k_j}(\zeta + re^{i\theta})|d\theta \end{split}$$ for $ 0 < r < R $. Multiplying by $r$ and integrating from $0$ to $R$, this implies $$\begin{split} \frac{R^2}{2}|\phi_{k_j}(\zeta)| &\leq \frac{1}{2\pi} \int_0^R\int_{0}^{2\pi}|\phi_{k_j}(\zeta + re^{i\theta})|r \, d\theta \, dr = \frac{1}{ 2\pi } \int\int_{B(\zeta, R)}|\phi_{k_j}(z)|, \end{split}$$ that is, $$\begin{split} |\phi_{k_j}(\zeta)| &\leq \frac{1}{\pi R^2} \int\int_{B(\zeta, R)}|\phi_{k_j}(z)| \leq \frac{1}{\pi R^2} \int\int_{D_{k_j}}|\phi_{k_j}(z)| \leq \frac{\pi}{\pi R^2} = \frac{1}{R^2} \end{split}$$ for all $\zeta \in B(\zeta_0, R)$. 
This proves that the family $\{\phi_{k_j}\}$ is locally uniformly bounded on $B(\zeta_0, R)$ and hence a normal family. Next, we shall show that $$\limsup_j q_{D_{k_j}}(z_{k_j}) \leq q_{\mathcal{H}}(z_0).$$ First, we note that $\int_{D_{k_j}}|\phi_{k_j}| = \pi$, and this implies that no limiting function of the sequence $\{\phi_{k_j}\}$ diverges to infinity uniformly on any compact subset of ${\mathcal{H}}$. Now, by definition, there exists a subsequence $\{\phi_{l_{k_j}}\}$ of the sequence $\{\phi_{k_j}\}$ such that $|\phi_{l_{k_j}}(z_0)|$ converges to $\limsup_{j}|\phi_{k_j}(z_0)|$ as $ j \to \infty$. Let $\phi$ be a limiting function of $\{\phi_{l_{k_j}}\}$. For simplicity, we may assume that $\{\phi_{l_{k_j}}\}$ converges to $\phi$ uniformly on every compact subset of ${\mathcal{H}}$. We claim that $\phi \in A({\mathcal{H}})$ and $ \Vert \phi \Vert \leq \pi $. That is, we need to show $$\int_{{\mathcal{H}}}|\phi| \leq \pi.$$ To show this, take an arbitrary compact subset $K$ of ${\mathcal{H}}$. Then $K \subset D_{l_{k_j}}$ and, by the triangle inequality, $$\int_{K}|\phi| \leq \int_{K} |\phi - \phi_{l_{k_j}}| + \int_{K}|\phi_{l_{k_j}}| \leq \int_{K} |\phi - \phi_{l_{k_j}}| + \int_{D_{l_{k_j}}}|\phi_{l_{k_j}}|$$ for $j$ large. Since $\int_{D_{l_{k_j}}}|\phi_{l_{k_j}}| = \pi$ and $\{\phi_{l_{k_j}}\}$ converges to $\phi$ uniformly on $K$, $\int_{K} |\phi - \phi_{l_{k_j}}|$ converges to $0$ as $ j \to \infty$. Hence $$\int_{{\mathcal{H}}}|\phi| \leq \pi.$$ This shows that $\phi$ is a candidate in the family that defines $ q_{{\mathcal{H}}}(z) $. 
By definition of $ q_{{\mathcal{H}}}(z) \vert dz \vert $, we have $$\phi(z_0) \leq q_{\mathcal{H}}^2(z_0)$$ which implies $$\limsup_j\phi_{k_j}(z_{k_j}) \leq q_{\mathcal{H}}^2(z_0).$$ Now, substituting $q_{D_{k_j}}^2(z_{k_j})$ in place of $\phi_{k_j}(z_{k_j})$ above, we obtain $$\limsup_j q_{D_{k_j}}^2(z_{k_j}) \leq q_{\mathcal{H}}^2(z_0),$$ that is, $$\label{Eqn: Eqsup} \limsup_j q_{D_{k_j}}(z_{k_j}) \leq q_{\mathcal{H}}(z_0).$$ Next, we show $$q_{\mathcal{H}}(z_0) \leq \liminf_j q_{D_j}(z_j),$$ and this will lead to a contradiction to our assumption. As we have seen in Lemma \[L:ExtremalFunctionSugawaMetricHalfSpace\], the extremal functions of $ q_{{\mathcal{H}}}(z) \vert dz \vert $ at the points $z_j$ and $z_0$ of the half-space ${\mathcal{H}}$ are given by rational functions with poles at $Z_j = (2 - \omega\overline{z_j})\big/\overline{\omega}$ and $Z_0 = (2 - \omega\overline{z_0})\big/\overline{\omega}$ respectively. The Hausdorff convergence of $ D_j $ to $ {\mathcal{H}}$ as $ j \to \infty $ implies that there exists $R>0$ such that $Z_j \in B(Z_0, R\big/2)$ and $\overline{B}(Z_0, R) \subset {\mathbb{C}}\setminus \overline{D_j}$ for $j$ large. This ensures that $$\psi_j(z) = \pi \phi_j(z)\big / M_j$$ is a well-defined holomorphic function on $ D_j $, where $ \phi_j $ is the extremal function of $ q_{{\mathcal{H}}}(z) \vert dz \vert $ at $ z_j $ and $$M_j = \int_{D_j} |\phi_j|.$$ By Lemma \[L:ConvergenceOfIntegral\], $ M_j < \infty $ for $j$ large. From the definition of the function $\psi_j$ it follows that $\int_{D_j} |\psi_j| = \pi$ for all $j \geq 1$. 
Therefore, $$\psi_j \in A_0(D_j) = \left\{ \phi \in A(D_j) : \Vert\phi\Vert_1 \leq \pi \right \}$$ and hence $$\pi \phi_{j}(z_j)\big/ M_j = \psi_j(z_j) \leq q_{D_j}^2(z_j).$$ Since $M_j$ converges to $\pi$ as $ j \to \infty$ by Lemma \[L:ConvergenceOfIntegral\], and by the formulae of the extremal functions of $ {\mathcal{H}}$, it is seen that $\phi_j(z_j)$ converges to $\phi(z_0)$, the value of the extremal function of the half-space at the point $ z_0 $, as $j \to \infty$. Therefore, taking liminf on both sides, $$q_{\mathcal{H}}^2(z_0) \leq \liminf_j q_{D_j}^2(z_j).$$ That is, $$\label{Eqn: Eqinf} q_{\mathcal{H}}(z_0) \leq \liminf_j q_{D_j}(z_j).$$ By (\[Eqn: Eqsup\]) and (\[Eqn: Eqinf\]), we conclude that $$\lim_{j \to \infty} q_{D_{k_j}}(z_{k_j}) = q_{\mathcal{H}}(z_0)$$ which contradicts the assumption that $$|q_{D_{k_j}}(z_{k_j})- q_{\mathcal{H}}(z_0)|> \epsilon_0\big /2.$$ \[C:AsymptoticqD\] Let $ D \subset {\mathbb{C}}$ be a bounded domain. Suppose $ p \in \partial D $ is a $ C^2 $-smooth boundary point. Then $$\lim_{z \to p}q_D(z)(-\psi(z)) = q_{{\mathcal{H}}}(0) = \vert \omega \vert \big/2.$$ Let $ D_j = T_j(D) $ where $ T_j(z) = (z - p_j)\big/(-\psi(p_j)) $. By Proposition \[Pr:ConvergenceOfTheSugawaMetricUnderScaling\], $ q_{D_j} \to q_{{\mathcal{H}}} $ uniformly on compact subsets of $ {\mathcal{H}}$. In particular, $q_{D_j}(0) \to q_{{\mathcal{H}}}(0) $ as $ j \to \infty $. But $ q_{D_j}(0) = q_{D}(p_j) (-\psi(p_j)) $ and hence $ q_{D}(p_j) (-\psi(p_j)) \to q_{{\mathcal{H}}}(0) $. Since $ p_j $ is an arbitrary sequence converging to $ p $, it follows that $$\lim_{z \to p}q_D(z)(-\psi(z)) = q_{{\mathcal{H}}}(0) = \vert \omega \vert \big / 2.$$ As a consequence of Corollary \[C:AsymptoticqD\], we have $$q_D(z) \approx 1\big /{\rm dist}(z, \partial D).$$ Proof of Theorem \[T:TheHurwitzMetric\] ======================================= The continuity of $ \eta_D $ is a consequence of the following observation. 
\[T:contHurM\] Let $D \subset \mathbb{C}$ be a domain. Fix $a\in D$ and let $\{a_n\}$ be a sequence in $ D$ converging to $a$. Let $G_n,\, G$ be the normalized Hurwitz coverings at $a_n$ and $a$ respectively. Then $G_n$ converges to $G$ uniformly on compact subsets of $\mathbb{D}$. The proof of this requires the following lemmas. \[L:PiPosCon\] Let $ y_0 \in {\mathbb{D}}^* $ and $ \{y_n\} $ a sequence in $ {\mathbb{D}}^* $ converging to $ y_0 $. Let $\pi_n : \mathbb{D} \longrightarrow \mathbb{D}^{*}$ and $\pi_0 : \mathbb{D} \longrightarrow \mathbb{D}^{*}$ be the unique normalized coverings satisfying $\pi_n(0) = y_n$, $\pi_0(0) = y_0$ and $\pi^{\prime}_n(0) > 0$, $\pi^{\prime}_0(0) > 0$. Then $\{\pi_n \}$ converges to $\pi_0$ uniformly on compact subsets of $ {\mathbb{D}}$. Note that the hyperbolic density $\lambda_{\mathbb{D}^{*}}$ is continuous and hence $\lambda_{\mathbb{D}^{*}}(y_n) \to \lambda_{\mathbb{D}^{*}}(y_0)$. Also $\lambda_{\mathbb{D}^{*}} > 0 $ and satisfies $$\lambda_{\mathbb{D}^{*}}(y_n) = 1 \big/ \pi^{\prime}_n(0),$$ $$\lambda_{\mathbb{D}^{*}}(y_0) = 1\big/\pi^{\prime}_0(0).$$ Let $ \pi_{\infty}$ be a limit point of the family $\{\pi_n\}$, which is normal. Then $ \pi_{\infty} : {\mathbb{D}}\longrightarrow \overline{{\mathbb{D}}}$ and $ \pi_{\infty } (0) =y_0 \in {\mathbb{D}}^* $. If the image $ \pi_{\infty}({\mathbb{D}}) $ intersects $ \partial {\mathbb{D}}$ then $ \pi_{\infty} $ must be constant, i.e., $ \pi_{\infty}(z) \equiv e^{i \theta_0} $ for some $ \theta_0 $. This cannot happen since $ \pi_{\infty}(0) = y_0 \in {\mathbb{D}}^* $. If the image $ \pi_{\infty}({\mathbb{D}}) $ contains the origin, Hurwitz’s theorem shows that the image $ \pi_n({\mathbb{D}}) $ must also contain the origin for $ n $ large. Again this is not possible since $ \pi_n({\mathbb{D}}) = {\mathbb{D}}^* $. It follows that $ \pi_{\infty} : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ with $ \pi_{\infty}(0) = y_0 $. 
Let $ \tilde\pi_{\infty} : {\mathbb{D}}\longrightarrow {\mathbb{D}}$ be the lift of $ \pi_{\infty} $ such that $ \tilde\pi_{\infty}(0) = 0 $. By differentiating the identity $$\pi_0 \circ \tilde\pi_{\infty} = \pi_{\infty},$$ we obtain $$\pi_0^{\prime}(\tilde\pi_{\infty}(z)) \tilde\pi_{\infty}^{\prime}(z) = \pi_{\infty}^{\prime}(z)$$ and evaluating at $ z = 0 $ gives $$\pi_0^{\prime}(0) \tilde \pi_{\infty}^{\prime}(0) = \pi_{\infty}^{\prime}(0).$$ Now observe that $ \pi_n^{\prime}(0) \to \pi_{\infty}^{\prime}(0) $ and since $1 \big/ \pi_{n}^{\prime} (0) = \lambda_{\mathbb{D}^{*}}(y_n) \to \lambda_{\mathbb{D}^{*}}(y_0) $ it follows that $ \pi_{\infty}^{\prime}(0) = 1 \big/ \lambda_{\mathbb{D}^{*}}(y_0) = \pi_0^{\prime}(0) > 0 $. Therefore, $ \tilde \pi_{\infty}^{\prime}(0) = 1 $ and hence $ \tilde\pi_{\infty}(z) = z $ by the Schwarz lemma. As a result, $ \pi_{\infty} = \pi_0 $ on $ {\mathbb{D}}$. It follows that $ \{ \pi_n \} $ has a unique limit, namely $ \pi_0 $. \[L:UniformConvergenceOfPunCoveringMpas\] Let $\pi_n : \mathbb{D} \longrightarrow \mathbb{D}^{*}$ be a sequence of holomorphic coverings. Let $\pi$ be a non-constant limit point of the family $\{\pi_n\}$. Then $\pi : \mathbb{D} \longrightarrow \mathbb{D}^{*}$ is a covering. Suppose that $ \pi_{k_n} \to \pi $ uniformly on compact subsets of $ {\mathbb{D}}$. As in the previous lemma, $ \pi : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ since $ \pi $ is assumed to be non-constant. In particular, $ \pi_{k_n}(0) \to \pi(0) \in {\mathbb{D}}^*$. Let $$\phi_n(z) = e^{i \theta_n}z$$ where $ \theta_n = -Arg(\pi_{k_n}^{\prime}(0)) $ and consider the compositions $$\tilde \pi_{k_n} = \pi_{k_n} \circ \phi_n$$ which are holomorphic coverings of $ {\mathbb{D}}^* $ satisfying $ \tilde \pi_{k_n}^{\prime}(0) > 0$ and $ \tilde \pi_{k_n}(0) = \pi_{k_n}(0) \to \pi(0) \in {\mathbb{D}}^* $. 
By Lemma \[L:PiPosCon\], $ \tilde \pi_{k_n} \to \tilde \pi $ where $ \tilde \pi : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ is a holomorphic covering with $ \tilde \pi(0) = \pi(0) $ and $ \tilde \pi^{\prime}(0) > 0 $. By passing to a subsequence, $ \phi_n(z) \to \phi(z) $ where $ \phi(z) = e^{i\theta_0}z $ for some $ \theta_0 $. As a result, $$\tilde \pi = \pi \circ \phi$$ and this shows that $ \pi : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ is a holomorphic covering. \[L:HyperbolicCoveringConvergenc\] Let $D \subset {\mathbb{C}}$ be bounded. Fix $ a \in D $ and let $\{a_n\}$ be a sequence in $ D $ converging to $a$. Set $ D_n = D\setminus \{a_n\}$ and $ D_0 = D\setminus \{a\} $. Fix a base point $p \in D \setminus \{a, a_1, a_2, \dots \}$. Let $\pi_n : \mathbb{D} \longrightarrow D_n $ and $\pi_0 : \mathbb{D} \longrightarrow D_0 $ be the unique normalized coverings such that $\pi_n(0) = \pi_0(0) = p $ and $\pi^{\prime}_n(0), \, \pi^{\prime}_0(0) > 0$. Then $\pi_n \to \pi_{0}$ uniformly on compact subsets of $\mathbb{D}$. Move $ p $ to $ \infty $ by $ T(z) = 1\big/(z - p) $ and let ${\tilde D_n}= T(D_n)$, $\tilde D_0 = T(D_0)$. Then $${\tilde \pi_n}= T \circ {\pi_n}: {\mathbb{D}}\longrightarrow {\tilde D_n}$$ and $${\tilde \pi_0}= T \circ \pi_0 : {\mathbb{D}}\longrightarrow \tilde D_0$$ are coverings that satisfy $$\lim_{z \to 0} z{\tilde \pi_n}(z) = \lim_{z \to 0} \frac{z}{{\pi_n}(z) - {\pi_n}(0)} = \frac{1}{{\pi_n}^{\prime}(0)} > 0$$ and $$\lim_{z \to 0} z{\tilde \pi_0}(z) = \lim_{z \to 0} \frac{z}{\pi_0(z) - \pi_0(0)} = \frac{1}{\pi_0^{\prime}(0)} > 0.$$ It is evident that the domains $ {\tilde D_n}$ converge to $ \tilde D_0 $ in the Carathéodory kernel sense and Hejhal’s result [@Hejhal] shows that $${\tilde \pi}_n \to \tilde \pi_0$$ uniformly on compact subsets of $ {\mathbb{D}}$. This completes the proof. Fix a point $a \in D$ and let $ a_n \to a $. Let $ G_n, \, G$ be the normalized Hurwitz coverings at $ a_n, a $ respectively. 
Since $ D $ is bounded, $ \{ G_n \} $ is a normal family. Assume that $ G_n \to \tilde G $ uniformly on compact subsets of $ {\mathbb{D}}$. By Theorem 6.4 of [@TheHurwitzMetric], $$\label{Eq:bilipchitzCondition} 1\big/\big(8 \delta_D(z)\big) \leq \eta_D(z) \leq 2\big/ \delta_D(z)$$ where $ \delta_D(z) = {\rm dist}(z, \partial D) $. By definition, $ \eta_D(a_n) = 1\big/G_n^{\prime}(0) $ which gives $$\delta_D(a_n)\big/2 \leq G_n^{\prime}(0) \leq 8 \delta_D(a_n)$$ for all $ n $ and hence $$\delta_D(a)\big/2 \leq \tilde G^{\prime}(0) \leq 8 \delta_D(a).$$ Since $ \delta_D(a_n),\, \delta_D(a) $ have uniform positive lower and upper bounds, it follows that $ G_n^{\prime}(0) $ and $ \tilde G^{\prime}(0) $ admit uniform lower and upper bounds as well. We will now use the following fact which is a consequence of the inverse function theorem. [***Claim***]{}: Let $ f : \Omega \longrightarrow {\mathbb{C}}$ be holomorphic and suppose that $ f^{\prime}(z_0) \neq 0 $ for some $ z_0 \in \Omega $. Then there exists $ \delta > 0 $ such that $ f : B(z_0, \delta) \longrightarrow f(B(z_0, \delta)) $ is biholomorphic and $ B(f(z_0), \delta \vert f^{\prime}(z_0) \vert \big/2) \subset f(B(z_0, \delta))$. To indicate a short proof of this claim, let $ \delta > 0 $ be such that $$\label{Eq:DerIneq} \vert f^{\prime}(z) - f^{\prime}(z_0) \vert < \vert f^{\prime }(z_0) \vert\big/2$$ for all $ \vert z - z_0 \vert < \delta $. Then $ g(z) = f(z) - f^{\prime}(z_0)z $ satisfies $$\vert g^{\prime}(z) \vert \leq \vert f^{\prime}(z_0) \vert\big/2$$ on $ B(z_0, \delta) $ and hence $$\label{Eq:LipCon} \vert f(z_2) - f(z_1) - f^{\prime}(z_0)(z_2 - z_1) \vert = \vert g(z_2) - g(z_1) \vert \leq \vert f^{\prime}(z_0)\vert \vert z_2 - z_1 \vert \big/2.$$ This shows that $ f $ is injective on $ B(z_0, \delta) $. Finally, if $ w \in B(f(z_0), \delta \vert f^{\prime}(z_0)\vert \big/2) $, then $$z_k = z_{k - 1} + \frac{w - f(z_{k - 1})}{f^{\prime}(z_0)}$$ defines a Cauchy sequence which is compactly contained in $ B(z_0, \delta) $. 
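A one-line estimate, included here for completeness, shows why the Newton-type iteration $z_k = z_{k-1} + \big(w - f(z_{k-1})\big)\big/f^{\prime}(z_0)$ produces a Cauchy sequence: $$z_{k + 1} - z_k = -\,\frac{f(z_k) - f(z_{k - 1}) - f^{\prime}(z_0)(z_k - z_{k - 1})}{f^{\prime}(z_0)},$$ so (\[Eq:LipCon\]) gives $ \vert z_{k + 1} - z_k \vert \leq \vert z_k - z_{k - 1} \vert \big/2 $, and the successive increments decrease geometrically.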
It converges to $ \tilde z \in B(z_0, \delta) $ such that $ f(\tilde z) = w $. This shows that $ B(f(z_0), \delta \vert f^{\prime}(z_0) \vert \big/2) \subset f(B(z_0, \delta))$. Let $ m, \delta > 0 $ be such that $$\vert \tilde G^{\prime}(z) - \tilde G^{\prime}(0) \vert < m \big/2 < m \leq \frac{\vert \tilde G^{\prime}(0) \vert}{2}$$ for $ \vert z \vert < \delta $. Since $ G_n $ converges to $ \tilde G $ uniformly on compact subsets of $ {\mathbb{D}}$, $$\vert G_n^{\prime}(0) \vert\big/2 \geq \vert \tilde G^{\prime}(0) \vert\big/2 - \vert G_n^{\prime}(0) - \tilde G^{\prime}(0) \vert\big/2 \geq m - \tau$$ for $ n $ large. Here $ 0 < \tau < m $. On the other hand, $$\vert G_n^{\prime}(z) - G_n^{\prime}(0) \vert \leq \vert \tilde G^{\prime}(z) - \tilde G^{\prime}(0) \vert + \vert G_n^{\prime}(z) - \tilde G^{\prime}(z) \vert + \vert G_n^{\prime}(0) - \tilde G^{\prime}(0) \vert \leq m \big/2 + \epsilon + \epsilon$$ for $ \vert z \vert < \delta $ and $ n $ large enough. Therefore, if $ 2 \epsilon + \tau < m\big/2 $, then $$\left\vert G_n^{\prime}(z) - G_n^{\prime}(0) \right \vert \leq m \big/2 + 2 \epsilon < m - \tau \leq \vert G_n^{\prime}(0) \vert \big/2$$ for $ \vert z \vert < \delta $ and $ n $ large enough. It follows from the claim that there is a ball $ B(a_n, \eta) $ of uniform radius $ \eta > 0 $ contained in the image $ G_n(B(0, \delta)) $ for all $ n $ large. Since $ a_n \to a $, we may choose $ p \in B(a, \eta\big/2) \setminus \{a, a_1, a_2, \ldots\} $; then $ p \in G_n(B(0, \delta)) $ for all large $ n $. This will serve as a base point in the following way. Let $ \pi_n : {\mathbb{D}}\longrightarrow D_n = D \setminus \{ a_n \} $ be holomorphic coverings such that $ \pi_n(0) = p $. Then there exist holomorphic coverings $ \tilde \pi_n : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ such that $$\begin{tikzcd} \mathbb{D} \arrow{r}{\tilde \pi_n} \arrow[swap]{dr}{\pi_n} & \mathbb{D} ^{*} \arrow{d}{G_n} \\ & D_n \end{tikzcd}$$ commutes, i.e., $ G_n \circ \tilde \pi_n = \pi_n $. 
By locally inverting $ G_n $ near the origin, $$\tilde \pi_n(0) = G_n^{-1} \circ \pi_n(0) = G_n^{-1}(p).$$ The family $ \{\tilde \pi_n\} $ is normal and admits a convergent subsequence. Let $ \tilde \pi_0 $ be a limit of $ \tilde \pi_{k_n} $. The image $ \tilde \pi_0({\mathbb{D}}) $ cannot intersect $ \partial {\mathbb{D}}$ as otherwise $ \tilde \pi_0(z) \equiv e^{i \theta_0} $ for some $ \theta_0 $. This contradicts the fact that $ \tilde \pi_{k_n}(0) = G_{k_n}^{-1}(p)$ is compactly contained in $ {\mathbb{D}}$. If $ \tilde \pi_0({\mathbb{D}}) $ were to contain the origin, then $ \tilde \pi_0(z) \equiv 0 $ as otherwise $ \tilde \pi_n({\mathbb{D}}) $ would also contain the origin by Hurwitz’s theorem. The conclusion of all this is that $ \tilde \pi_0 : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ is non-constant and hence a covering by Lemma \[L:UniformConvergenceOfPunCoveringMpas\]. By Lemma \[L:HyperbolicCoveringConvergenc\], $ \pi_n \to \pi_0 $ where $ \pi_0 : {\mathbb{D}}\longrightarrow D_a = D \setminus \{a \}$ is a covering and hence by passing to the limit along the subsequence in $$G_{k_n} \circ \tilde \pi_{k_n} = \pi_{k_n}$$ we get $$\tilde G \circ \tilde \pi_0 = \pi_0.$$ This shows that $ \tilde G : {\mathbb{D}}^* \longrightarrow D_a$ is a covering. As noted earlier $ \tilde G^{\prime}(0) > 0 $ and this means that $ \tilde G = G $, the normalized Hurwitz covering at $a$. 
Since $ D_j $ converges to $ {\mathcal{H}}$ in the Hausdorff sense and $ z_j \to a $, there exist $ r > 0 $ and a point $ p \in {\mathcal{H}}$ such that $p \in B(z_j, r) \subset D_j $ for all large $ j $ – for simplicity we may assume for all $ j $ – and $p \in B(a, r) \subset {\mathcal{H}}$ satisfying $ p \neq z_j $, $ p \neq a $, and such that $ G_j $ is locally invertible in a common neighborhood containing $ p $ for all $ j $. Let $ \pi_j : {\mathbb{D}}\longrightarrow D_j \setminus \{z_j\}$ be the holomorphic coverings such that $ \pi_j(0) = p$ and $ \pi_j^{\prime}(0) > 0 $. Let $ \pi_{0 j} : {\mathbb{D}}\longrightarrow {\mathbb{D}}^* $ be holomorphic coverings such that $ \pi_j = G_j \circ \pi_{0 j} $. Since $ D_j \setminus \{z_j\} $ converges to $ {\mathcal{H}}\setminus \{a\} $ in the Carathéodory kernel sense, by Hejhal’s result [@Hejhal], $ \pi_j $ converges to $ \pi $ uniformly on compact subsets of $ {\mathbb{D}}$, where $ \pi : {\mathbb{D}}\longrightarrow {\mathcal{H}}\setminus \{a\} $ is the holomorphic covering such that $ \pi(0) = p $ and $ \pi^{\prime}(0) > 0 $. Since $ D_j \to {\mathcal{H}}$ in the Hausdorff sense, any compact set $ K $ with empty intersection with $ \overline {\mathcal{H}}$ will have no intersection with $ D_j $ for $ j $ large and as a result the family $ \{G_j\} $ is normal. Also note that $ \pi_{0 j}({\mathbb{D}}) = {\mathbb{D}}^* $ for all $ j $, and this implies that $ \{\pi_{0 j}\}$ is a normal family. Let $ G_0 $ and $ \pi_0 $ be limits of $ \{G_j\} $ and $ \{\pi_{0 j}\} $ respectively. Now, together with the fact that $ \pi_0 $ is non-constant – guaranteed by its construction – and the identity $ \pi_j = G_j \circ \pi_{0 j}$, we have $ \pi = G_0 \circ \pi_0 $. This implies that $ G_0 : {\mathbb{D}}^* \longrightarrow {\mathcal{H}}_a = {\mathcal{H}}\setminus \{a \} $ is a covering, since $ \pi_0 $ is a covering by Lemma \[L:UniformConvergenceOfPunCoveringMpas\]. 
Since $ G_j(0) = z_j $ and $ G_j^{\prime}(0) > m > 0 $ – for some constant $ m > 0 $ which can be obtained using the bilipschitz condition for the Hurwitz metric, for instance, see inequality (\[Eq:bilipchitzCondition\]) – it follows that $ G_0(0) = a $ and $ G_0^{\prime}(0) \geq m $. This shows that $ G_0 $ is the normalized Hurwitz covering, in other words $ G_0 = G $. Thus, we have shown that any limit of $ G_j $ is equal to $ G $ and this proves that $ G_j $ converges to $ G $ uniformly on compact subsets of $ {\mathbb{D}}$. \[L:ConvergenceOfTheHurwitzMetricScaling\] Let $\{D_j\}$ be the sequence of domains that converges to the half-space ${\mathcal{H}}$ as in Proposition \[Prop:ConvergenceAhlforsSzegoGarabeidian\]. Then $\eta_{D_j}$ converges to $\eta_{{\mathcal{H}}}$ uniformly on compact subsets of ${\mathcal{H}}$. Let $K \subset \mathcal{H}$ be a compact subset. Without loss of generality, we may assume that $K \subset D_j$ for all $j$. If possible, assume that $\eta_{D_j}$ does not converge to $\eta_{\mathcal{H} }$ uniformly on $K$. Then there exist $\epsilon_{0} >0 $, a sequence of integers $\{ k_j \}$ and a sequence of points $\{ z_{k_j}\}\subset K \subset D_{k_j}$ such that $$\vert \eta_{D_{k_j}}(z_{k_j})- \eta_{\mathcal{H}}(z_{k_j}) \vert >\epsilon_0.$$ Since $K$ is compact, for simplicity, we may assume that $z_{k_j}$ converges to a point $z_0 \in K$. Using the continuity of the Hurwitz metric, we have $$\vert \eta_{\mathcal{H}}(z_{k_j})- \eta_{\mathcal{H}}(z_0) \vert < \epsilon_0\big/2$$ for $j$ large. By the triangle inequality $$\vert \eta_{D_{k_j}}(z_{k_j})- \eta_{\mathcal{H}}(z_0) \vert > \epsilon_0\big/2.$$ Since the domains $D_{k_j}$ are bounded, there exist normalized Hurwitz coverings $ G_{k_j} $ of $ D_{k_j} $ at $ z_{k_j} $. 
Using the convergence of $ D_{k_j}\setminus \{z_{k_j}\} $ to $ {\mathcal{H}}\setminus \{z_0\} $ in the Carathéodory kernel sense, we have, by Lemma \[L:HurCoverScalingConvergence\], that $ G_{k_j} $ converges to the normalized Hurwitz covering map $ G $ of $ {\mathcal{H}}$ at the point $ z_0 $ uniformly on every compact subset of $ {\mathcal{H}}$ as $ j \to \infty $. But we have $$\eta_{{\mathcal{H}}}(z_0) = 1\big/G^{\prime}(0).$$ From the above we obtain that $ \eta_{D_{k_j}}(z_{k_j}) $ converges to $ \eta_{{\mathcal{H}}}(z_0) $ as $ j \to \infty $. However, from our assumption we have $$\vert \eta_{D_{k_j}}(z_{k_j})- \eta_{\mathcal{H}}(z_0) \vert > \epsilon_0\big/2,$$ which is a contradiction. Therefore, $ \eta_{D_j}$ converges to $\eta_{\mathcal{H}}$ uniformly on compact subsets of $\mathcal{H}$. \[C:AsymptoticHurwitzMetric\] Let $ D \subset {\mathbb{C}}$ be a bounded domain. Suppose $ p \in \partial D $ is a $ C^2 $-smooth boundary point. Then $$\lim_{z \to p}\eta_D(z)(-\psi(z)) = \eta_{{\mathcal{H}}}(0)= \vert \omega \vert \big/2.$$ By Proposition \[Pr:ConvergenceOfTheSugawaMetricUnderScaling\], if $ D_j $ is the sequence of scaled domains of $ D $ under the affine transformation $ T_j(z) = (z - p_j)\big/(-\psi(p_j)) $, for all $ z \in D $, where $ \psi $ is a local defining function of $ \partial D $ at $ z = p $ and $ p_j $ is a sequence of points in $ D $ converging to $ p $, then the corresponding sequence of Hurwitz metrics $ \eta_{D_j} $ of $ D_j $ converges to $ \eta_{{\mathcal{H}}} $ uniformly on every compact subset of $ {\mathcal{H}}$ as $ j \to \infty $. In particular, $\eta_{D_j}(0) $ converges to $ \eta_{{\mathcal{H}}}(0) $ as $ j \to \infty $. Again, we know that $ \eta_{D_j}(z) = \eta_{D}(T_j^{-1}(z)) \vert (T_j^{-1})^{\prime}(z)\vert $. From this it follows that $ \eta_{D_j}(0) = \eta_{D}(p_j) (-\psi(p_j)) $, and consequently, $ \eta_{D}(p_j) (-\psi(p_j)) $ converges to $ \eta_{{\mathcal{H}}}(0) $ as $ j \to \infty $. 
Since $ p_j $ is an arbitrary sequence converging to $ p $, we have $$\lim_{z \to p}\eta_D(z)(-\psi(z)) = \eta_{{\mathcal{H}}}(0) = \vert \omega \vert \big/2.$$ From Theorem \[T:contHurM\], it follows that the Hurwitz metric is continuous, and as a consequence of Corollary \[C:AsymptoticHurwitzMetric\], we have $$\eta_D(z) \approx 1\big/{\rm dist}(z, \partial D).$$ A curvature calculation ======================= For a smooth conformal metric $ \rho(z)\vert dz \vert $ on a planar domain $ D $, the curvature $$K_{\rho} = -\rho^{-2} \Delta \log \rho$$ is a well defined conformal invariant. If $ \rho $ is only continuous, Heins [@Heins] introduced the notion of generalized curvatures as follows: For $ a \in D $ and $ r > 0 $, let $$T(\rho, a, r) = \left. -\frac{4}{r^2}\left\{\frac{1}{2 \pi}\int_0^{2\pi} \log \rho(a + r e^{i \theta})d\theta - \log \rho(a)\right\}\right/ \rho^2(a).$$ The $ \liminf_{r \to 0} T(\rho, a , r)$ and $ \limsup_{r \to 0} T(\rho, a , r)$ are called the generalized lower and upper curvatures of $ \rho(z)\vert dz \vert $, respectively. Our aim is to give some estimates for the quantities $ T(\rho, a , r) $ for $ r > 0 $ and $ \rho = q_D $ or $ \eta_D $. Let $ T_j : D \longrightarrow D_j $ be given by $$T_j(z) = \frac{z - p_j}{- \psi(p_j)}$$ where $ D $ and $ p_j \to p $ are as before. Then $$T\left(\rho, p_j , r\vert \psi(p_j) \vert \right) = T\left((T_j)_*\rho, 0 , r\right)$$ for $ r > 0 $ small enough. Here $ (T_j)_*\rho $ is the push-forward of the metric $ \rho $. Computing, $$\begin{aligned} &T\left((T_j)_*\rho, 0 , r\right)\\ &= \left. 
-\frac{4}{r^2} \left\{\frac{1}{2 \pi}\int_0^{2\pi} \log((T_j)_*\rho)(re^{i\theta}) d\theta - \log((T_j)_*\rho)(0)\right\}\right/((T_j)_*\rho)^2(0)\\ &= \left.-\frac{4}{r^2} \left\{\frac{1}{2 \pi}\int_0^{2\pi} \log\rho (T_j^{-1}(re^{i\theta}))\vert (T_j^{-1})^{\prime}(re^{i\theta}) \vert d\theta - \log\rho ((T_j^{-1})(0))\vert (T_j^{-1})^{\prime}(0) \vert\right\}\right/\rho ((T_j^{-1})(0))^2\vert (T_j^{-1})^{\prime}(0) \vert^2. \end{aligned}$$ Since $ (T_j^{-1})^{\prime}(z) = -\psi(p_j) $ for all $ z \in D $, we have $$T\left((T_j)_*\rho, 0 , r\right) = \left. -\frac{4}{r^2\vert \psi(p_j) \vert^2} \left\{\frac{1}{2 \pi}\int_0^{2\pi} \log\rho (p_j + r\vert \psi(p_j) \vert e^{i\theta}) d\theta - \log\rho(p_j )\right\}\right/\rho^2(p_j ).$$ This implies $$T\left(\rho, p_j , r\vert \psi(p_j) \vert \right) = T\left((T_j)_*\rho, 0 , r\right).$$ Fix $ r_0 $ small enough and let $ \epsilon > 0 $ be arbitrary. Then $$-4 - \epsilon < T\left(\rho, p_j , r_0\vert \psi(p_j) \vert \right) < -4 + \epsilon$$ for $ j $ large. Consequently, for a fixed $ r $ with $ 0 < r < r_0 $, $$\lim_{j \to \infty}T(\rho, p_j, \vert \psi(p_j) \vert r) = -4.$$ From the lemma above, it is enough to show the inequality for $ T\left((T_j)_*\rho, 0 , r_0\right) $. Recall that the scaled Sugawa metric and Hurwitz metric, i.e., $ q_{D_j} $ and $ \eta_{D_j} $, converge to $ q_{{\mathcal{H}}} $ and $ \eta_{{\mathcal{H}}} $ respectively. The convergence is uniform on compact subsets of $ {\mathcal{H}}$, and note that $ q_{{\mathcal{H}}} = \eta_{{\mathcal{H}}} $. By writing $ \rho_j $ for $ q_{D_j} $ or $ \eta_{D_j} $, and $ \rho_{{\mathcal{H}}} $ for $ q_{{\mathcal{H}}} = \eta_{{\mathcal{H}}} $, we get $$T(\rho_j, 0, r) \to T(\rho_{{\mathcal{H}}}, 0, r)$$ for a fixed $ r $ as $ j \to \infty $. That is, $$T(\rho_{{\mathcal{H}}}, 0, r) - \epsilon\big/2 < T(\rho_j, 0, r) < T(\rho_{{\mathcal{H}}}, 0, r) + \epsilon\big/2$$ for $ j $ large. 
Also recall that since the curvature of $ \rho_{{\mathcal{H}}} $ is equal to $ -4 $, there exists $ r_1 > 0 $ such that $$-4 - \epsilon\big/2 < T(\rho_{{\mathcal{H}}}, 0, r) < -4 + \epsilon\big/2$$ whenever $ r < r_1 $. Now choose $ r_0 = r_1/2 $; then there exists $ j_0 $ depending on $ r_0 $ such that $$-4 - \epsilon < T\left(\rho_j, 0 , r_0\right) < -4 + \epsilon$$ for all $ j \geq j_0 $. This completes the proof.
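The limiting value $-4$ above can also be checked numerically directly from the definition of $ T(\rho, a, r) $. The sketch below is an illustration, not part of the source: it approximates the circle average by a Riemann sum for the Poincaré-type metric $ \lambda(z) = 1/(1 - \vert z \vert^2) $ of the unit disk, a smooth comparison metric of constant curvature $-4$, and shows $ T(\lambda, 0, r) \to -4 $ as $ r \to 0 $.

```python
import math

def T(rho, a, r, n=2048):
    """Riemann-sum approximation of the generalized-curvature quotient
    T(rho, a, r) = -(4/r^2) * (circle mean of log rho - log rho(a)) / rho(a)^2."""
    mean = sum(
        math.log(rho(a + r * complex(math.cos(2 * math.pi * k / n),
                                     math.sin(2 * math.pi * k / n))))
        for k in range(n)
    ) / n
    return -4.0 / r**2 * (mean - math.log(rho(a))) / rho(a) ** 2

# Poincare-type metric of the unit disk, lambda(z) = 1/(1 - |z|^2),
# which has constant curvature -4.
poincare = lambda z: 1.0 / (1.0 - abs(z) ** 2)

for r in (0.3, 0.1, 0.01):
    print(r, T(poincare, 0.0, r))   # tends to -4 as r -> 0
```

For this radially symmetric example the value can also be obtained by hand: $ T(\lambda, 0, r) = (4/r^2)\log(1 - r^2) = -4 - 2r^2 + O(r^4) $.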
--- abstract: 'Planets interact with their host stars through gravity, radiation and magnetic fields, and for those giant planets that orbit their stars within $\sim$10 stellar radii ($\sim$0.1 AU for a sun-like star), star-planet interactions (SPI) are observable with a wide variety of photometric, spectroscopic and spectropolarimetric studies. At such close distances, the planet orbits within the sub-Alfvénic radius of the star, in which the transfer of energy and angular momentum between the two bodies is particularly efficient. The magnetic interactions appear as enhanced stellar activity modulated by the planet as it orbits the star rather than only by stellar rotation. These SPI effects are informative for the study of the internal dynamics and atmospheric evolution of exoplanets. The nature of magnetic SPI is modeled to be strongly affected by both the stellar and planetary magnetic fields, possibly influencing the magnetic activity of both, as well as affecting the irradiation and even the migration of the planet and rotational evolution of the star. As phase-resolved observational techniques are applied to a large statistical sample of hot Jupiter systems, extensions to other tightly orbiting stellar systems, such as smaller planets close to M dwarfs, become possible. In these systems, star-planet separations of tens of stellar radii begin to coincide with the radiative habitable zone, where planetary magnetic fields are likely a necessary condition for surface habitability.' author: - 'Evgenya L. Shkolnik' - Joe Llama bibliography: - 'chapter\_revision\_refs.bib' title: 'Signatures of star-planet interactions' --- Introduction ============= Giant planets located $<0.1$ AU from their parent star comprise $\sim$7% of the confirmed exoplanets, primarily around FGK stars[^1]. 
At such small orbital separations, these hot Jupiters (HJ) provide a laboratory to study the tidal and magnetic interactions between the planet and the star that does not exist in our own solar system. These interactions can be observed because they scale as $a^{-3}$ and $a^{-2}$, respectively, where $a$ is the separation between the two bodies. Although HJs are rare around M dwarfs (only five known), statistics from the *Kepler* survey have revealed that M stars host on average 0.24 Earth-sized planets in the habitable zone [@dres15]. We can apply the techniques trained on HJs around FGK stars to small planet + M dwarf systems. [@cuntz2000] first suggested that close-in planets may increase and modulate their host star’s activity levels through tidal and magnetic interactions, as such effects are readily observed in the comparable cases of tightly orbiting RS CVn binary systems (e.g. @piskunov1996 [@shkolnik2005a]). Variable excess stellar activity modulated with the period of the planet’s orbit, rather than with the star’s rotation period, indicates a magnetic interaction with the planet, while modulation at half the orbital period indicates a tidal interaction. This suggestion has spurred the search for such interactions as a means of studying the angular momentum evolution of HJ systems and of detecting the magnetic fields of exoplanets (e.g. @cuntz2004 [@saar2004; @lanza2009; @shkolnik2003; @shkolnik2005; @shkolnik2008; @cohen2009]). Exoplanetary magnetic fields provide a probe into a planet’s internal dynamics and constraints on its atmospheric mass loss. This fundamental physical property of exoplanets would most directly be detected through the radio emission produced by the electron cyclotron maser instability (see review by @treumann2006 and Chapter 9.6 of this book). Such emission has been detected from all of the solar system’s gas giants and the Earth, resulting from an interaction between the planetary magnetosphere and the solar wind. 
There are no detections to date of radio emission from exoplanets, although searches have typically been less sensitive at higher emission frequencies than predicted for exoplanets (e.g. @farrell1999 [@bastian2000; @lanza2009; @lazio2009; @jardine2008; @vidotto2012] and see review by @lazio2016). Even though a radio detection of a planet’s magnetic field ($B_p$) remains elusive, there have been reported detections through magnetic star-planet interactions (SPI). Nearly twenty studies of HJ systems, varying in wavelength and observing strategy, have independently come to the conclusion that a giant exoplanet in a short-period orbit can induce activity on the photosphere and upper atmosphere of its parent star. This makes the host star’s magnetic activity a probe of the planet’s magnetic field. Due to their proximity to their parent star, magnetic SPI in HJ systems can be detected because these exoplanets typically lie within the Alfvén radius of their parent star ($\lesssim 10R_\star$ or $\lesssim 0.1$ AU for a sun-like star). At these small separations, the Alfvén speed is larger than the stellar wind speed, allowing for direct magnetic interaction with the stellar surface. If the giant planet is magnetized, then the magnetosphere of the planet may interact with the stellar corona throughout its orbit, potentially through magnetic reconnection, the propagation of Alfvén waves within the stellar wind, and the generation of electron beams that may strike the base of the stellar corona. In the case of characterizing habitable zone planets, the currently favored targets are low-mass stars, where the habitable zone is located much closer to the parent star compared to the Earth-Sun separation, making the planet easier to detect and study. Low-mass stars are typically much more magnetically active than solar type stars. 
It is therefore vital that we understand how this increase in magnetic activity impacts the potential habitability of a planet orbiting close to a low-mass star and what defenses the planet has against it. In order to sustain its atmosphere, a planet around a low-mass star must be able to withstand enhanced atmospheric erosion from extreme stellar wind and also from the impact of coronal mass ejections. Both of these reasons necessitate the push towards the detection and characterization of magnetic SPI in M dwarf planetary systems. The need to understand magnetic SPI is also driving the modeling effort forward. There have been considerable efforts towards modeling the space weather environments surrounding close-in giant exoplanets and star-planet interactions. The magnetized stellar winds may interact with the close-in exoplanet through the stars’ outflows and magnetospheres, and potentially lead to observable SPI. Observing the stellar winds of stars other than the Sun is difficult and there are very few observational constraints on the winds of low-mass stars [@wood2005]. Star-planet interactions can be simulated by using hydrodynamical (HD) or magnetohydrodynamical (MHD) numerical models. The modeling efforts have not only focused on studying individual systems, but have also been extended to more general scenarios to help aid the interpretation of statistical studies. MHD models for star-planet interactions require a dynamic model for the stellar corona and wind, and also a model for the planet, which acts as an additional, time-dependent boundary condition in the simulation [@cohen2011]. The standard approach to modeling SPI involves adapting 3D MHD models originally developed for the solar corona and wind. 
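Before running a full MHD model of this kind, the sub-Alfvénic condition described earlier can be checked with a back-of-the-envelope estimate, comparing the Alfvén speed $v_A = B/\sqrt{\mu_0 \rho}$ with the local wind speed at the planet's orbit. The sketch below is illustrative only: the field strength, plasma density and wind speed are assumed round numbers, not values from any modeled system.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability [T m / A]
M_PROTON = 1.673e-27       # proton mass [kg]

def alfven_speed(b_tesla, n_per_m3):
    """Alfven speed v_A = B / sqrt(mu0 * rho) for a pure proton plasma [m/s]."""
    rho = n_per_m3 * M_PROTON          # mass density [kg/m^3]
    return b_tesla / math.sqrt(MU0 * rho)

# Illustrative inputs (assumptions, not measurements): ~1 G local field,
# n ~ 1e13 m^-3, and a wind speed of ~200 km/s close to the star.
v_a = alfven_speed(1e-4, 1e13)
v_wind = 2e5
print(f"v_A = {v_a / 1e3:.0f} km/s, sub-Alfvenic orbit: {v_a > v_wind}")
```

With these inputs the Alfvén speed comes out near 700 km/s, comfortably above the assumed wind speed, i.e. a sub-Alfvénic orbit in which direct magnetic star-planet coupling is possible.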
The BATS-R-US (@powell1999 [@toth2012]) global MHD model forms part of the Space Weather Modeling Framework (@toth2005) and is capable of accurately reproducing the large-scale density and temperature structure of the solar corona and has been adapted to model the winds of other stars. This MHD model uses a stellar magnetogram (or solar synoptic map) as input along with other properties of the host star, including the stellar coronal base density ($\rho$), surface temperature ($T$), mass ($M_\star$), radius ($R_\star$) and rotation period ($P_\star$). The model then self-consistently solves the ideal MHD equations for the stellar corona and wind, which in turn allows the conditions experienced by an exoplanet to be studied (e.g., @cohen2009 [@cohen2011; @cohen2014; @vidotto2009; @vidotto2013; @vidotto2014; @doNascimento]). In this chapter, we discuss the observational evidence of magnetic SPI in FGK and M stars plus the array of models produced to explain and characterize this diagnostic, albeit complex, physical phenomenon. Planet induced and orbit phased stellar emission ================================================ Although no tidally induced variability has yet been reported, magnetic SPI has seen a blossoming of data and modeling over the past 15 years. The strongest evidence for magnetic SPI is excess stellar activity modulated in phase with a planet as it orbits a star with a rotation period significantly different from the planet’s orbital period. Such signatures were first reported by [@shkolnik2003] who observed periodic chromospheric activity through Ca II H & K variability of HJ host HD 179949 modulated on the planet’s orbital period of 3.092 d [@butler2006] rather than the stellar rotation period of 7 days [@fares2012]. 
Those data consisted of nightly high-resolution ($\lambda/\Delta\lambda\approx$110,000), high signal-to-noise (a few hundred per pixel) spectroscopy acquired over several epochs (Figure \[fig:hd179949\_spi\], @shkolnik2005 [@shkolnik2008; @gurdemir2012]). ![Integrated Ca II K residual flux of HJ host HD 179949 as a function of orbital phase where $\phi$=0 is the sub-planetary point (or inferior conjunction). The symbols are results from six individual epochs of observation collected from 2001 to 2006 by [@shkolnik2003; @shkolnik2005a; @shkolnik2008] and [@gurdemir2012]. The spot model shown is fit to the 2001-2005 data and shows a persistent region of excess chromospheric activity peaking at the planet’s orbital phase of $\sim$0.75. []{data-label="fig:hd179949_spi"}](hd179949_combined-01.png){width="\textwidth"} In addition to HD 179949, several other stars with HJs exhibit this kind of Ca II H & K modulation, including $\upsilon$ And, $\tau$ Boo, and HD 189733. [@shkolnik2008] reported that this signature is present roughly $\sim$75% of the time. During other epochs only rotationally modulated spotting for these stars is observed. This is interpreted as variations in the stellar magnetic field configuration leading to weaker (or no) magnetic SPI with the planet’s field. Simulations of magnetic SPI using magnetogram data of the varying solar magnetic fields confirm this to be a likely explanation of the intermittent effect [@cran07]. As another example, the large scale magnetic field of the planet-host star HD 189733 has been observed over multiple years using Zeeman-Doppler Imaging (ZDI) and the field shows structural evolution between observations [@moutou2007; @fares2010; @fares2013]. In this case, the SPI diagnostics in the HD 189733 system must vary with the stellar magnetic field. 
Scaling law to measure planetary magnetic field strengths ========================================================= In the solar system, there is a strong correlation between the magnetic moment of a body and the ratio of its mass to its rotation period (Figure \[fig:magmom\_ss\]). Analogously, a similar relationship has emerged for exoplanets. Figure \[fig:magmom\_exo\] shows $M_p\sin i/P_{orb}$ against the stellar magnetic activity measure, $<$MADK$>$, the average of the Mean Absolute Deviation of Ca II K line variability per observing run. Note that the planet is assumed to be tidally locked such that $P_{\rm orb}$ equals the rotation period of the planet. [@lanza2009] and [@lanza2012] provided a straightforward formalism with which to scale the expected power emitted from magnetic SPI (P$_{SPI}$): P$_{SPI} \propto$ B$_*^{4/3}$ B$_p^{2/3} R_p^2 v$ where B$_\star$ and B$_p$ are the magnetic field strengths of the star and planet, respectively, $R_p$ is the planet’s radius, and $v$ is the relative velocity between the two bodies. This implies that systems with stars that are tidally locked to their HJs, i.e. stellar rotation period equals the orbital period as is the case for HJ host $\tau$ Boo, should produce weak P$_{SPI}$, since the relative velocity $v$ between the planet and the stellar magnetosphere is then near zero (Figure \[fig:magmom\_exo\]; @shkolnik2008 [@walker2008; @fares2013]). The strength of the planetary magnetic field for tidally locked planets has been a subject of debate, but scaling laws presented by [@chri09] and others reviewed in [@chri10] predict that the planet’s field strength depends primarily on the internal heat flux, and not on electrical conductivity or rotation speed. This same energy scaling can simultaneously explain the observed field strengths of Jupiter, Earth and rapidly rotating low-mass stars. 
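Solving the Lanza scaling $P_{SPI} \propto B_*^{4/3} B_p^{2/3} R_p^2 v$ for the planetary field gives $B_p \propto \left(P_{SPI}/(B_*^{4/3} R_p^2 v)\right)^{3/2}$, which is how *relative* field strengths can be extracted from measured interaction powers. A minimal sketch of this inversion (the function name and input numbers are illustrative, not from the chapter):

```python
def relative_bp(p_spi, b_star, r_p, v):
    """Relative planetary field strength implied by the scaling
    P_SPI ~ B*^(4/3) * Bp^(2/3) * Rp^2 * v, solved for Bp.
    All quantities are in arbitrary, mutually consistent units."""
    return (p_spi / (b_star ** (4 / 3) * r_p ** 2 * v)) ** 1.5

# Two hypothetical systems identical except for the observed SPI power:
bp1 = relative_bp(2.0, 1.0, 1.0, 1.0)
bp2 = relative_bp(1.0, 1.0, 1.0, 1.0)
print(bp1 / bp2)  # doubling P_SPI at fixed B*, Rp, v implies Bp ratio 2^(3/2)
```

Only ratios between systems are meaningful here, since the overall proportionality constant in the scaling is unknown.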
Using the formalism of Lanza (2009) above with the measured stellar magnetic fields from spectropolarimetric observations of these targets (e.g., @donati2008 [@fares2009; @fares2013; @jeffers2014; @hussain2016; @mengel2017]), it is possible to estimate the *relative* magnetic field strengths of the planets, which reveal a range of planetary field strengths among the HJs in these systems. For example, the HJ around HD 179949 appears to have a field strength seven times that of the HJ around HD 189733. ![The magnetic moment for the six magnetized solar system planets, plus Ganymede, plotted against the ratio of body mass to rotation period. The power law fit is $y\propto x^{1.21}$. Data are from Tholen et al. (2000) and Kivelson et al. (2002).[]{data-label="fig:magmom_ss"}](magmom_ss.pdf){width="80.00000%"} ![$M_p\sin i/P_{\rm orb}$, which is proportional to the planet’s magnetic moment (Figure \[fig:magmom\_ss\]), plotted against the mean night-to-night Ca II K chromospheric activity (assuming the planet is tidally locked, $P_{\rm orb} = P_{{\rm rot},p}$). The green squares show systems where SPI has been detected. Note that $\tau$ Boo, for which $P_{\rm orb} = P_{{\rm rot},\star}$, does not follow the trend. This is evidence in support of a model (@lanza2009) where near-zero relative motion of the planet through the stellar magnetosphere produces minimal magnetic SPI effects [@shkolnik2008].[]{data-label="fig:magmom_exo"}](magmom_exo.pdf){width="80.00000%"} There are stars for which no planet phased activity is reported, e.g. HD 209458 [@shkolnik2008] and WASP 18 [@miller2015; @pillitteri2014b]. In these cases, the central stars are particularly inactive with very weak fields, and the lack of measurable SPI is not unexpected according to Lanza’s formalism, as both the star and the planet require strong enough magnetic fields for an observable interaction. 
In addition, in many cases, the data collected were of too low S/N to detect any induced modulations caused by the planet and/or lacked phase coverage of the planetary orbit, making it difficult to disentangle planet induced activity from stellar rotational modulation. The star may also have a highly variable magnetic field. If the stellar magnetic field is highly complex in structure, then it may be that the magnetic field lines simply do not reach the orbit of the planet [@lanza2009]. Finally, the planet itself may have a weak magnetic field or no field at all. Models and observations of planet induced variability at many wavelengths ========================================================================= In addition to Ca II H & K observations, planet phased modulation has been reported in broadband optical photometry from space for $\tau$ Boo [@walker2008] and CoRoT-2 [@pagano2009] and in X-ray for HD 17156 [@maggio2015]. Tentative evidence of planet phased X-ray modulation of HD 179949 was reported by [@scandariato2013]. They find an activity modulation period of $\sim4$ days (with a false alarm probability of 2%), which is longer than the orbital period of 3.1 days, but may be tracing the synodic period of the planet with respect to the star ($P_{syn}$=4.7–5.6 days for $P_{rot}$=7–9 days). Clearer planet phased X-ray and far-UV modulation has also been reported for HD 189733 [@pillitteri2011; @pillitteri2015]. From a modeling perspective, the combined flux at all wavelengths is needed to assess the total power emitted from such an interaction, adding value to these higher energy observations. Ideally, simultaneous observations across optical, UV and X-ray activity indicators would be scheduled, but this has proven challenging to accomplish. From this and other perspectives discussed below, statistical studies of a large sample of monitored stars for planet phased stellar activity are the necessary path forward. 
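The synodic period quoted above for HD 179949 follows from the standard relation $1/P_{syn} = 1/P_{orb} - 1/P_{rot}$ (the relation itself is not written out in the text; it is added here as a check):

```python
def synodic_period(p_orb, p_rot):
    """Synodic period of a planet relative to the rotating star:
    1/P_syn = 1/P_orb - 1/P_rot (valid for P_orb < P_rot)."""
    return 1.0 / (1.0 / p_orb - 1.0 / p_rot)

# HD 179949: P_orb = 3.1 d, stellar rotation period 7-9 d
for p_rot in (7.0, 9.0):
    print(p_rot, round(synodic_period(3.1, p_rot), 1))
```

The two endpoints reproduce the quoted range $P_{syn} = 4.7$–$5.6$ days, bracketing the $\sim4$ day activity modulation period.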
The HD 189733 system is one of the most studied as it is a bright K2V dwarf at a distance of 19.3 pc, hosts a transiting hot Jupiter at a distance of only 0.03 AU [@bouchy2005], and exhibits planet induced Ca II H & K variations [@shkolnik2005; @shkolnik2008]. It has been the subject of multiple searches for X-ray flares that coincide with the orbit of the planet. Transit observations of HD 189733b, with phase coverage from $\phi=0.52 - 0.65$ have shown that the X-ray spectrum softened in strict correspondence with the transit event, followed by a flaring event when the planet was at $\phi=0.54$ [@pillitteri2010; @pillitteri2011; @Pillitteri2014]. This phase offset for the beginning of the flare event corresponds to a location of $77^\circ$ forward of the sub-planetary point, as is also the case for the HD 179949 system. This phased emission is best interpreted as the observational signature of an active spot on the surface of the star that is connected to, and co-moving with, the planet [@Pillitteri2014]. Such a hot spot has been analytically derived by modeling the link between an exoplanet and the star [@lanza2012]. These authors calculated that if the planet is sufficiently close to the star (as is the case for hot Jupiters) the magnetic field lines that connect the star to the planet would produce such a phase offset owing to the relative orbital motion of the planet. Simulations are also helping to understand SPI and planet phased emission through modeling studies aimed not at reproducing individual systems but rather at the general conditions that favor SPI. The first generation of SPI models focused primarily on recovering the phase offset between the sub-planetary point and the chromospheric hot spot rather than explaining the spot’s energy dissipation [@mcivor2006; @preusse2006; @lanza2008]. 
The next generation of models explicitly included the planet and were able to show that the power generated in a reconnection event between the stellar corona and the planet can reproduce the observed hot spots [@lanza2009; @cohen2011; @lanza2013]. An investigation by @cohen2011 using the MHD code BATS-R-US showed that HD 189733b orbited in-and-out of the variable Alfvén radius and that when the planet was within the Alfvén radius its magnetosphere would reconnect with the stellar coronal field resulting in enhanced flaring from the host star. In their simulations the planet was implemented as an additional boundary condition representing HD 189733b’s density, temperature, and magnetic field strength. They found that SPI varies during the planetary orbit and is highly dependent on the relative orientation of the stellar and planetary magnetic fields. A recent study by @matsakos2015 was aimed at categorizing various types of SPI using the 3D MHD PLUTO code [@pluto2007; @pluto2012]. They ran 12 models in total, detailed in Table 2 of [@matsakos2015]. Since they were seeking to explore the parameter regime over which the observational signature of SPI changes, they chose to explore various parameters for the planet and star, rather than adopting the parameters for a known system. They classify star-planet interactions into four types illustrated in Figure \[fig:matsakos\]. Types III and IV describe scenarios where an accretion stream forms between the planet and the star. For these interactions, the authors find that the ram pressure from the stellar wind must be greater than the magnetic and tidal pressures from the planet. The accretion stream arises through Kelvin-Helmholtz and Rayleigh-Taylor instabilities and is triggered by the interaction between the stellar wind and the denser planetary material. 
These simulations showed that the location where the accretion stream typically impacts the stellar surface is dependent on the parameters of the system but is typically $\sim45-90^\circ$ in front of the planet. This finding is in good agreement with the observed SPI phase offsets discussed above. ![The four types of star-planet interaction as described in @matsakos2015. In Type I, the ram and magnetic pressure of the stellar wind is greater than that of the planetary outflow, confining the material and leading to the formation of a bow shock (e.g., @vidotto2010 [@llama2011]). In Type II, the planetary outflow is stronger than in Type I, resulting in material being swept back into a tail. In the interactions of Types III and IV, the ram pressure of the stellar wind is greater than the tidal pressure of the planet, resulting in the formation of a tail behind the planet and an accretion stream onto the star. The accretion stream typically impacts the stellar surface $\sim90^\circ$ ahead of the sub-planetary point, in agreement with observations of magnetic SPI.[]{data-label="fig:matsakos"}](matsakos.png){width="\textwidth"} Statistical studies of magnetic SPI =================================== As the number of known exoplanets continuously rises, statistical studies are becoming an effective way to study the properties of exoplanetary systems. An efficient strategy with which to study planet induced stellar emission is to analyze single-epoch observations of a statistical sample in search of a significant difference in emission properties of stars with and without close-in giant planets. From a sample of stars with Ca II H & K observations, [@hart10] showed a correlation between planet surface gravities and the stellar log R$^{\prime}_{HK}$ activity parameter for 23 systems with planets of M$_p$ $>$ 0.1 M$_J$, $a$ $<$ 0.1 AU orbiting stars with 4200 K $<$ T$_{\rm eff} < $6200 K, with a weaker correlation with planet mass. 
In another study of 210 systems, [@krej12] found statistically significant evidence that the equivalent width of the Ca II K line emission and log R$^{\prime}_{HK}$ of the host star correlate with smaller semi-major axis and larger mass of the exoplanet, as would be expected for magnetic and tidal SPI. The efficiency of extracting data from large photometric catalogs has made studying stellar activity of many more planet hosts possible in both the ultraviolet (UV) and X-ray. A study of 72 exoplanet systems by @poppenhaeger2010 showed no significant correlation between the fractional luminosity $(L_X/L_{\rm bol})$ and planet properties. They did, however, report a correlation of stellar X-ray luminosity with the ratio of planet mass to semi-major axis $(M_p\sin i/a)$, suggesting that massive, close-in planets tend to orbit more X-ray luminous stars. They attributed this correlation to biases of the radial velocity (RV) planet detection method, which favors smaller and further-out planets to be detected around less active, and thus X-ray faint, stars. A study by [@shko13] of both RV and transit detected planets, using the far-UV (FUV) emission as observed by the Galaxy Evolution Explorer (GALEX), also searched for evidence of increased stellar activity due to SPI in $\sim$300 FGK planet hosts. This investigation found no clear correlations with $a$ or $M_p$, yet reported tentative evidence for close-in massive planets (i.e. higher $M_p$/$a$) orbiting more FUV-active stars than those with far-out and/or smaller planets, in agreement with past X-ray and Ca II results (Figure \[fig:shko13\]). There may be less potential for detection bias in this case, as transit-detected planets orbit stars with a more normal distribution of stellar activity than those with planets discovered with the RV method. To confirm this, a sample of transiting small and distant planets still needs to be identified. ![The residual fractional FUV luminosity (i.e. 
photospheric flux removed leaving only stellar upper-atmospheric emission) as a function of the ratio of the planet mass to semi-major axis, a measure of star-planet interaction strength [@shko13].[]{data-label="fig:shko13"}](shkolnik2013.pdf){width="80.00000%"} The first statistical SPI test for lower mass (K and M) systems was reported by [@fran16], in which they measured a weak positive correlation between the fractional N V luminosity, a transition region FUV emission line, and $M_p/a$ for the most massive planet in the system. They found tentative evidence that the presence of short-period planets (ranging in M$_p$sin$i$ from 3.5 to 615 M$_{Earth}$) enhances the transition region activity on low-mass stars, possibly through the interaction of their magnetospheres (Figure \[fig:fran16\]). ![Fractional N V (at 1240Å) luminosity from a sample of 11 K and M dwarf planet hosts is weakly correlated with a measure of the star-planet interaction strength $M_p/a$, where $M_p$ is the mass of the most massive planet in the system (in Earth masses) and $a$ is the semi-major axis (in AU). The Pearson coefficient and statistical likelihood of a null correlation is shown at the top. This provides tentative evidence that the presence of short-period planets enhances the transition region activity on low-mass stars, possibly through the interaction of their magnetospheres (@fran16). []{data-label="fig:fran16"}](france_2016_updated.pdf){width="80.00000%"} @cohen2015 modeled the interaction between an M-dwarf and a non-magnetized planet like Venus. Their work shows very different localized space-weather environments around the planet for sub- and super-Alfvénic stellar wind conditions. The authors postulate that these dynamic differences would lead to additional heating and additional energy being deposited into the atmosphere of the planet. 
In all their simulations they find that the stellar wind penetrates much deeper into the atmosphere than for the magnetized planets simulated in @cohen2014, suggesting that for planets orbiting M dwarfs a magnetosphere may be necessary to shield the planet’s atmosphere. @vidotto2014 modeled the stellar wind of six M stars ranging from spectral type M0 to M2.5 to study the angular momentum loss of the host stars and their rotational evolution. They found the stellar wind to be highly structured at the orbital separation of the planet, and found that the planetary magnetospheric radii could vary by up to 20% in a single orbit. This will result in high variability in the strength of SPI signatures as the planet orbits through regions of closed and open magnetic field, implying that a larger, statistical study may be the most efficient path forward, especially for M dwarfs. Planetary Effects on Stellar Angular Momentum Evolution ======================================================= As the evidence continues to mount that star-planet interactions measurably increase stellar activity, and now for a wider range of planetary systems, there remains an ambiguity in the larger statistical, single-epoch studies as to whether or not this effect is caused by magnetic SPI, tidal SPI or planet search selection biases. Although no tidal SPI has been observed as stellar activity modulated by half the planet’s orbital period (@cuntz2000), there may be other effects of the presence of the planets or the planet formation process on the angular momentum evolution of the stars, which might increase the stellar rotation through tidal spin-up or decrease the efficiency of stellar magnetic braking [@lanza2010b; @cohen2011]. In both cases, the star would be more active than expected for its mass and age. For main sequence FGK stars, the magnetized stellar wind acts as a brake on the stellar rotation, decreasing the global stellar activity rate as the star ages. 
This well-observed process has given rise to the so-called “age-rotation-activity” relationship. However, the presence of a short-period giant planet may affect the star’s angular momentum. Under this scenario, the age-activity relation will systematically underestimate the star’s age, potentially making “gyrochronology” inapplicable to these systems. This poses an issue for evolutionary studies of exoplanets and their host stars, including planet migration models and planet atmospheric evolution. Several studies have found that stars hosting giant planets rotate faster than the evolutionary models predict. This increase in rotation rate is thought to be the direct consequence of tidal spin-up of the star by the planet. Additional evidence for the tidal spin-up of stars by giant planets has been found using two hot Jupiter systems by @schro2011 and @pillitteri2011. These studies searched for X-ray emission from M dwarf companions to the active planet hosts CoRoT-2 and HD 189733. Both systems showed no X-ray emission, indicating the ages of the systems to be $>2$ Gyr; however, the rotation-age relation places these systems at 100-300 Myr for CoRoT-2 and 600 Myr for HD 189733. A study by @lanza2010b showed that tides alone cannot spin up the star to the levels seen in CoRoT-2 and HD 189733. Rather, his study postulated that the excess rotation is a consequence of interactions between the planetary magnetic field and the stellar coronal field. He proposed that these interactions would result in a magnetic field topology where the majority of the field lines are closed. This configuration would therefore limit the efficiency of the stellar wind in spinning down the star through angular momentum loss. Using a simple linear force-free model, Lanza (2010) was able to compute the radial extension of the stellar corona and its angular momentum loss.
He found that stars that host hot Jupiters show a much slower angular momentum loss rate than similar stars without a short-period giant planet, similar to [@cohen2011]. In order to disentangle the possible causes of the increased stellar activity of HJ hosts observed in single-epoch studies, it is necessary to monitor the activity throughout the planet’s orbit and over the stellar rotation period. Such studies can better characterize the star’s variability, generate firmer statistical results of any planet-induced activity, and assess the underlying physical processes involved. The first and, to date, only attempt at this was reported by [@shkolnik2008], in which they monitored 13 HJ systems (all FGK stars) in search of orbit-phased variability, and found a correlation between the median activity levels modulated by the planet and $M_p\sin i/P_{orb}$ (Figures \[fig:magmom\_exo\] and \[fig:magmom\_ss\]). In the case of multi-planet systems, the planet with the largest $M_p\sin i/P_{orb}$ should have the strongest SPI effects. Summary ======= Detecting exoplanetary magnetic fields enables us to probe the internal structures of the planets and to place better constraints on their atmospheric mass loss through erosion from the stellar wind. Searching for the observational signatures of magnetic SPI in the form of planet-induced stellar activity has proved to be the most successful method to date for detecting magnetic fields of hot Jupiters. Single-epoch statistical studies in search of SPI signatures show that indeed there are significant differences in the activity levels between stars with close-in giant planets compared to those without. However, the cause of this remains ambiguous, with four possible explanations. - Induced stellar activity in the form of interactions between the stellar and planetary magnetic fields. - The inhibition of magnetic braking, and thus faster than expected stellar rotation and increased stellar activity.
- Tidal spin-up of the star due to the presence of the close-in planet. - Lastly, the selection biases of planet-hunting techniques. These potential underlying causes highlight the need for further monitoring campaigns across planetary orbit and stellar rotation periods to clearly identify planet-induced excess stellar activity. The vast majority of SPI studies, both individual monitoring as well as larger single-epoch statistical studies, have concentrated on main sequence FGK stars as they are the dominant hosts of hot Jupiters. These stars have the advantage of being relatively quiescent compared to M dwarfs, and thus teasing out signals produced by magnetic SPI from intrinsic stellar activity is simpler. But they also have the disadvantage of lower stellar magnetic field strengths compared to M dwarfs, lowering the power produced by the interaction. The modeling of magnetic SPI, especially with realistic stellar magnetic maps from ZDI surveys, continues to advance and aid in the interpretation of observed planet-phased enhanced activity across the main sequence. Additional models enable quantitative predictions of the radio flux density for stars displaying signatures of SPI. Radio detections of at least a few of these systems will help calibrate the relative field strengths, and provide, for the first time, true magnetic field strengths for hot Jupiters. Ongoing and future studies of magnetic SPI in a large sample of systems are necessary for improved statistics and distributions of magnetic fields of exoplanets. Extensions of these techniques to other tightly orbiting stellar systems, such as smaller planets close to M dwarfs, are challenging but possible. In these systems, star-planet separations of tens of stellar radii begin to coincide with the radiative habitable zone, where planetary magnetic fields are likely a necessary condition for surface habitability.
As more close-in planets around relatively bright M dwarfs are discovered by missions such as TESS, the search for magnetic star-planet interactions will be extended to these low-mass stars. [^1]: <http://www.exoplanets.org>, accessed 2/15/2017
--- abstract: 'Healthcare is one of the largest business segments in the world and is a critical area for future growth. In order to ensure efficient access to medical and patient-related information, hospitals have invested heavily in improving clinical mobile technologies and spread their use among doctors. Notwithstanding the benefits of mobile technologies towards a more efficient and personalized delivery of care procedures, there are also indications that their use may have a negative impact on patient-centeredness and often places many cognitive and physical demands on doctors, making them prone to make medical errors. To tackle this issue, in this paper we present the main outcomes of the project TESTMED, which aimed at realizing a clinical system that provides operational support to doctors using mobile technologies for delivering care to patients, in a bid to minimize medical errors. The system exploits concepts from Business Process Management on how to manage a specific class of care procedures, called clinical guidelines, and how to support their execution and mobile orchestration among doctors. As a viable solution for doctors’ interaction with the system, we investigated the use of vocal and touch interfaces. User evaluation results indicate a good usability of the system.' author: - Andrea Marrella - Massimo Mecella - Mahmoud Sharf - Tiziana Catarci bibliography: - 'biblio.bib' date: 'Received: date / Accepted: date' subtitle: | Process-aware Enactment of Clinical Guidelines\ through Multimodal Interfaces title: The TESTMED Project Experience ---
--- abstract: | We use the method of thermal QCD sum rules to investigate the effects of temperature on the neutron electric dipole moment $d_n$ induced by the vacuum $\bar{\theta}$-angle. Then, we analyze and discuss the thermal behaviour of the ratio $\mid {d_n \over \bar{\theta}}\mid $ in connection with the restoration of CP invariance at finite temperature. author: - | M. Chabab$^{1,2}\thanks{e-mail: mchabab@ucam.ac.ma}$, N. El Biaze$^1$ and R. Markazi$^1$\ \ title: | \ \ Note on the Thermal Behavior of the Neutron Electric Dipole Moment from QCD Sum Rules --- Introduction ============ The CP symmetry is, without doubt, one of the fundamental symmetries in nature. Its breaking still carries a cloud of mystery in particle physics and cosmology. Indeed, CP symmetry is intimately related to theories of interactions between elementary particles and represents a cornerstone in constructing grand unified and supersymmetric models. It is also necessary to explain the matter-antimatter asymmetry observed in the universe. The first experimental evidence of CP violation was discovered in the $K-\bar{K}$ mixing and kaon decays [@C]. According to the CPT theorem, CP violation implies T violation. The latter is tested through the measurement of the neutron electric dipole moment (NEDM) $d_n$. The upper experimental limit gives confidence that the NEDM can be another manifestation of CP breaking. To investigate the CP violation phenomenon many theoretical models were proposed. In the standard model of electroweak interactions, CP violation is parametrized by a single phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix [@CKM].
Other models exhibiting CP violation are given by extensions of the standard model; among them, the minimal supersymmetric standard model $MSSM$ includes in general soft complex parameters which provide new additional sources of CP violation [@DMV; @BU]. CP violation can also be investigated in the strong interaction context through the QCD framework. In fact, the QCD effective lagrangian contains an additional CP-odd four dimensional operator embedded in the following topological term: $$L_{\theta}=\theta {\alpha_s\over 8 \pi}G_{\mu\nu}\tilde{G}^{\mu\nu},$$ where $G_{\mu\nu}$ is the gluonic field strength, $\tilde{G}^{\mu\nu}$ is its dual and $\alpha_s$ is the strong coupling constant. The $G_{\mu\nu}\tilde{G}^{\mu\nu}$ quantity is a total derivative; consequently it can contribute to the physical observables only through non perturbative effects. The NEDM is related to the $\bar \theta$-angle by the following relation: $$d_n\sim {e\over M_n}({m_q\over M_n})\bar \theta \sim \{ \begin{array}{c} 2.7\times 10^{-16}\overline{\theta }\qquad \cite{Baluni}\\ 5.2\times 10^{-16}\overline{\theta }\qquad \cite{cvvw} \end{array}$$ and consequently, according to the experimental measurement $d_n<1.1\times 10^{-25}\,e\,$cm [@data], the $\bar \theta $ parameter must be less than $2\times10^{-10}$ [@peccei2]. The well known strong CP problem consists in explaining the smallness of $\bar{\theta}$. In this regard, several scenarios were suggested. The best known one was proposed by Peccei and Quinn [@PQ] and consists in implementing an extra $U_A(1)$ symmetry which permits a dynamical suppression of the undesired $\theta $-term. This is possible due to the fact that the axial current $J_5^\mu$ is related to the gluonic field strength through the relation $\partial_\mu J_5^\mu={\alpha_s\over8 \pi}G_{\mu\nu}\tilde{G}^{\mu\nu}$. The breakdown of the $U_A(1)$ symmetry gives rise to a very light pseudo-Goldstone boson called the axion.
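As a quick arithmetic sanity check (an illustration, not part of the original text), the quoted bound on $\bar{\theta}$ follows directly from dividing the experimental limit on $d_n$ by the larger of the two coefficients in the relation above:

```python
# Sanity check of the theta_bar bound quoted in the text: divide the
# experimental NEDM limit by the coefficient d_n / theta_bar ~ 5.2e-16 e*cm.
d_n_limit = 1.1e-25   # experimental upper limit on d_n, in e*cm
coeff = 5.2e-16       # d_n per unit theta_bar, in e*cm

theta_bar_bound = d_n_limit / coeff
print(f"theta_bar < {theta_bar_bound:.1e}")  # ~2e-10, matching the quoted bound
```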
This particle may well be important to the puzzle of dark matter and might constitute the missing mass of the universe [@LS]. Motivated by: (a) the direct relation between the $\bar \theta$-angle and the NEDM $d_n$, as was demonstrated first in [@cvvw] via chiral perturbation theory and recently in [@PR; @PR1] within the QCD sum rules formalism; (b) the possibility to restore some broken symmetries by increasing the temperature; we shall use the QCD sum rules at $T\ne 0$ [@BS] to derive the thermal dependence of the ratio $\mid{ d_n\over \bar{\theta}}\mid$. Then we study its thermal behaviour at low temperatures and discuss the consequences of temperature effects on the restoration of the broken CP symmetry. This paper is organized as follows: Section 2 is devoted to the calculation of the NEDM induced by the $\bar {\theta}$ parameter from QCD sum rules. In section 3, we show how one introduces temperature in QCD sum rules calculations. We end this paper with a discussion and qualitative analysis of the thermal effects on the CP symmetry. NEDM from QCD sum rules ======================== In the last two decades, QCD sum rules à la SVZ [@SVZ] were applied successfully to the investigation of hadronic properties at low energies. In order to derive the NEDM through this approach, many calculations were performed in the literature [@CHM; @KW]. One of them, which turns out to be more practical for our study, has been obtained recently in [@PR; @PR1]. It consists in considering a lagrangian containing the following P and CP violating operators: $$L_{P,CP}=-\theta_q m_* \sum_f \bar{q}_f i\gamma_5 q_f +\theta {\alpha_s\over 8 \pi}G_{\mu\nu}\tilde{G}^{\mu\nu}.$$ $\theta_q$ and $\theta$ are respectively two angles coming from the chiral and the topological terms and $m_*$ is the quark reduced mass given by $m_*={m_u m_d \over{m_u +m_d}}$.
The authors of [@PR1] start from the two-point correlation function in a QCD background with a nonvanishing $\theta$ and in the presence of a constant external electromagnetic field $ F^{\mu\nu}$: $$\Pi(q^2) = i \int d^4x e^{iqx}<0|T\{\eta(x)\bar{\eta}(0)\}|0>_{\theta,F} .$$ $\eta(x)$ is the interpolating current, which in the case of the neutron reads as [@I]: $$\eta =2\epsilon_{abc}\{(d^T_aC\gamma_5u_b)d_c+\beta(d^T_aCu_b)\gamma_5d_c\},$$ where $\beta$ is a mixing parameter. Using the operator product expansion (OPE), they have first performed the calculation of $\Pi(q^2)$ as a function of matrix elements and Wilson coefficients and then have confronted the QCD expression of $\Pi(q^2)$ with its phenomenological parametrisation. $\Pi(q^2)$ can be expanded in terms of the electromagnetic charge as [@CHM]: $$\Pi(q^2)=\Pi^{(0)}(q^2)+e \Pi^{(1)}(q^2,F^{\mu\nu})+ O(e^2).$$ The first term $\Pi^{(0)}(q^2)$ is the nucleon propagator, which includes only the CP-even parameters [@SVZ1; @IS], while the second term $\Pi^{(1)}(q^2,F^{\mu\nu})$ is the polarization tensor, which may be expanded through the Wilson OPE as $\sum C_n<0|\bar{q}\Gamma q|0>_{\theta,F}$, where $\Gamma$ is an arbitrary Lorentz structure and the $C_n$ are the Wilson coefficient functions calculable in perturbation theory [@SVZ1]. From this expansion, one keeps only the CP-odd contribution. By considering the anomalous axial current, one obtains the following $\theta$ dependence of the $<0|\bar{q}\Gamma q|0>_{\theta}$ matrix elements [@PR]: $$m_q <0|\bar{q}\Gamma q|0>_{\theta}= i m_*\theta <0|\bar{q}\Gamma q|0> ,$$ where $m_q$ and $m_*$ are respectively the quark and reduced masses.
The electromagnetic dependence of these matrix elements can be parametrized through the implementation of the $\kappa$, $\chi $ and $\xi$ susceptibilities defined as [@IS]:\ $$\begin{tabular}{lc} $<0|\bar{q}\sigma^{\mu\nu} q|0>_F= \chi F^{\mu\nu} <0|\bar{q}q|0>$ \\ $g<0|\bar{q}G^{\mu\nu} q|0>_{F}= \kappa F^{\mu\nu} <0|\bar{q}q|0> $ & \\ $2g<0|\bar{q}\tilde{G}^{\mu\nu} q|0>_{F}= \xi F^{\mu\nu} <0|\bar{q}q|0>. $ & \end{tabular}$$ Putting together the above ingredients, after a straightforward calculation [@PR1] the following expression of $\Pi^{(1)}(q^2,F^{\mu\nu})$ for the neutron is derived:\ $$\begin{aligned} \Pi(-q^2)&=&-{\bar{\theta}m_* \over {64\pi^2}}<0|\bar{q}q|0>\{\tilde{F}\sigma,\hat q\}[\chi(\beta+1)^2(4e_d-e_u) \ln({\Lambda^2\over -q^2})\nonumber\\ && -4(\beta-1)^2e_d(1+{1\over4} (2\kappa+\xi))(\ln({-q^2\over \mu_{IR}^2})-1){1\over -q^2}\nonumber\\ &&-{\xi\over 2}((4\beta^2-4\beta+2)e_d+(3\beta^2+2\beta+1)e_u){1\over -q^2}...],\end{aligned}$$ where $\bar{\theta}=\theta+\theta_q$ is the physical phase and $\hat q=q_\mu\gamma^\mu$.\ The QCD expression (2.7) will be confronted with the phenomenological parametrisation $\Pi^{Phen}(-q^2)$ written in terms of the neutron hadronic properties. The latter is given by:\ $$\Pi^{Phen}(-q^2)=\{\tilde{F}\sigma,\hat q\} ({\lambda^2d_nm_n\over(q^2-m_n^2)^2} +{A\over (q^2-m_n^2)}+...),$$ where $m_n$ is the neutron mass and $e_q$ is the quark charge. $A$ and $\lambda^2$, which originate from the phenomenological side of the sum rule, represent respectively a constant of dimension 2 and the neutron coupling to the interpolating current $\eta(x)$. This coupling is defined via a spinor $v$ as $<0|\eta(x)|n>=\lambda v$. QCD sum rules at finite temperature ==================================== The introduction of finite temperature effects may provide more precision to the phenomenological values of hadronic observables.
Within the framework of QCD sum rules, the T-evolution of the correlation functions appears as a thermal average of the local operators in the Wilson expansion [@BS; @BC; @M]. Hence, at nonzero temperature and in the approximation of a non-interacting gas of bosons (pions), the vacuum condensates can be written as: $$<O^i>_T=<O^i>+\int{d^3p\over 2\epsilon(2\pi)^3}<\pi(p)|O^i|\pi(p)>n_B({\epsilon\over T})$$ where $\epsilon=\sqrt{p^2+m^2_\pi}$, $n_B={1\over{e^x-1}}$ is the Bose-Einstein distribution and $<O^i>$ is the standard vacuum condensate (i.e. at $T=0$). In the low temperature region, the effects of heavier resonances $(\Gamma= K, \eta,.. etc)$ can be neglected due to their distribution functions $\sim e^{- m_\Gamma \over T}$ [@K]. To compute the pion matrix elements, we apply the soft pion theorem given by: $$<\pi(p)|O^i|\pi(p)>=-{1\over f^2_\pi}<0|[F^a_5,[F^a_5,O^i]]|0>+ O({m^2_\pi \over \Lambda^2}),$$ where $ \Lambda$ is a hadron scale and $F^a_5$ is the isovector axial charge: $$F^a_5=\int d^3x \bar{q}(x)\gamma_0\gamma_5{\tau^a\over2}q(x).$$ Direct application of the above formula to the quark and gluon condensates shows that [@GL; @K]:\ (i) Only $<\bar{q}q>$ is sensitive to temperature. Its behaviour at finite $T$ is given by: $$<\bar{q}q>_T\simeq (1-{\varphi(T)\over8})<\bar{q}q>,$$ where $\varphi(T)={T^2\over f^2_\pi}B({m_\pi\over T})$ with $B(z)= {6\over\pi^2}\int_z^\infty dy {\sqrt{y^2-z^2}\over{e^y-1}}$ and $f_\pi$ is the pion decay constant ($f_\pi\simeq 93 MeV$). The variation with temperature of the quark condensate $<\bar{q}q>_T$ results in two different asymptotic behaviours, namely:\ $<\bar{q}q>_T\simeq (1-{T^2\over {8f^2_\pi}})<\bar{q}q>$ for ${m_\pi\over T}\ll 1$, and $<\bar{q}q>_T\simeq (1-{T^2\over {8f^2_\pi}}e^{-m_\pi \over T})<\bar{q}q>$ for ${m_\pi\over T}\gg 1$.\ (ii) The gluon condensate is nearly constant at low temperature and a T dependence occurs only at order $T^8$.
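To make the size of this thermal correction concrete, the suppression factor $1-\varphi(T)/8$ can be evaluated by quadrature. The sketch below is an illustration, not part of the original text; the numerical values of $f_\pi$ and $m_\pi$ are standard inputs assumed here. It also checks the limit $B(z)\to 1$ as $z\to 0$, which recovers the $(1-{T^2/8f_\pi^2})$ behaviour quoted above for $m_\pi/T\ll 1$.

```python
import numpy as np
from scipy.integrate import quad

F_PI = 0.093   # pion decay constant, GeV (standard value quoted in the text)
M_PI = 0.138   # pion mass, GeV (assumed standard value)

def B(z):
    """B(z) = (6/pi^2) * Integral_z^inf dy sqrt(y^2 - z^2) / (e^y - 1)."""
    val, _ = quad(lambda y: np.sqrt(y * y - z * z) / np.expm1(y), z, np.inf)
    return 6.0 / np.pi**2 * val

def condensate_ratio(T):
    """Thermal suppression <qq>_T / <qq> ~ 1 - phi(T)/8 of the quark condensate."""
    phi = (T / F_PI) ** 2 * B(M_PI / T)
    return 1.0 - phi / 8.0

# For m_pi/T -> 0 one has B -> 1, since Integral_0^inf y/(e^y-1) dy = pi^2/6.
print(B(1e-6))                 # close to 1
print(condensate_ratio(0.15))  # mild suppression at T = 150 MeV
```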
As usual, the determination of the ${d_n \over \bar{\theta}}$ sum rule at nonzero temperature is performed in two steps. In the first step, we apply the Borel operator to both expressions of the neutron correlation function shown in Eqs. (2.7) and (2.8), where finite temperature effects were introduced as discussed above. In the second step, by invoking the quark-hadron duality principle, we deduce the following relation for the $\bar{\theta}$-induced NEDM: $${d_n\over \bar{\theta}}(T)=-{M^2m_* \over 16\pi^2}{1\over \lambda_n^2(T)M_n(T)}(1-{\varphi(T)\over 8})<\bar{q}q>[4\chi(4e_u-e_d)-{\xi\over 2M^2}(4e_u+8e_d)]e^{M_n^2 \over M^2},$$ where $M$ represents the Borel parameter. Note that we have neglected the single pole contribution entering via the constant $A$, as suggested in [@PR].\ The expression (3.5) is derived with $\beta=1$, which is more appropriate for us since it suppresses the infrared divergences. In fact, the Ioffe choice $(\beta=-1)$, which is rather more useful for the CP-even case, removes the leading order contribution in the sum rules (2.7). The coupling constant $\lambda_n^2(T)$ and the neutron mass $M_n(T)$ which appear in (3.5) were determined from thermal QCD sum rules. For the former, we consider the $\hat q $ sum rules in [@K; @J] with $\beta=1$ and then extract the following explicit expression of $\lambda_n^2(T)$: $$\lambda_n^2(T)=\{{3\over{8{(2\pi)}^4}}M^6+{3\over{16{(2\pi)}^2}}M^2<{\alpha _s\over\pi} G^2>\}\{1-(1+{g^2_{\pi NN}f^2_\pi\over M_n^2}){\varphi(T) \over 16}\}e^{M^2_n(0)\over M^2}$$ Within the pion gas approximation, Eletsky has demonstrated in [@E] that the inclusion of the contribution coming from pion-nucleon scattering in the nucleon sum rules is mandatory.
The latter enters Eq. (3.6) through the coupling constant $g_{\pi NN}$, whose values lie within the range 13.5-14.3 [@PROC].\ The numerical analysis is performed with the following input parameters: the Borel mass has been chosen within the values $M^2=0.55-0.7 GeV^2$, which correspond to the optimal range (Borel window) in the $ d_n\over \bar{\theta}$ sum rule at $T=0$ [@PR1]. For the $\chi$ and $\xi$ susceptibilities we take $\chi=-5.7\pm 0.6 GeV^{-2}$ [@BK] and $\xi=-0.74\pm 0.2$ [@KW]. As to the vacuum condensates appearing in (3.5), we fix $<\bar{q}q>$ and $<G^2>$ to their standard values [@SVZ]. Discussion and Conclusion ========================= In the two sections above, we have established the relation between the NEDM and the $ \bar{\theta} $ angle at nonzero temperature from QCD sum rules. Since the ratio ${ d_ n \over \bar{\theta}}$ is expressed in terms of the pion parameters $f_\pi$, $m_\pi$ and of $g_{\pi NN}$, we briefly recall the main features of their thermal behaviour. Various studies, performed either within the framework of chiral perturbation theory and/or QCD sum rules at low temperature, have shown the following features:\ (i) The existence of a QCD phase transition temperature $T_c$ which signals both QCD deconfinement and chiral symmetry restoration [@BS; @G].\ (ii) $f_\pi$ and $g_{\pi NN}$ have very small variation with temperature up to $T_c$, so we shall treat them as constants below $T_c$. However, they vanish once the temperature passes through the critical value $T_c$ [@DVL].\ (iii) The thermal mass shift of the neutron and the pion is absent at order $O(T^2)$ [@K; @BC]. $\delta M_n$ shows up only at the next order $T^4$, but its value is negligible [@E].\ By taking into account the above properties, we plot the ratio defined in Eq. (3.5) as a function of $T$.
From the figure, we learn that the ratio $\mid{ d_ n \over \bar{\theta} }\mid$ survives at finite temperature and decreases smoothly with $T$ (about 16$\%$ variation for temperature values up to 200 MeV). This means that either the NEDM value decreases or $\bar{\theta}$ increases. Consequently, for a fixed value of $\bar{\theta}$ the NEDM decreases, but it does not exhibit any critical behaviour. Furthermore, if we start from a non-vanishing $ \bar{\theta} $ value at $T=0$, it is not possible to remove it at finite temperature. We also note that $ \mid{d_n\over \bar{\theta}}\mid$ grows as $M^2$ or the $\chi$ susceptibility increases. It also grows as the quark condensate rises. However, this ratio is insensitive to both the $\xi$ susceptibility and the coupling constant $g_{\pi NN}$. We notice that for higher temperatures, the curve $\mid{d_n\over \bar{\theta}}\mid=f({T\over T_c})$ exhibits an abrupt increase, justified by the fact that for temperatures beyond the critical value $T_c$, at which the chiral symmetry is restored, the constants $f_\pi$ and $g_{\pi NN}$ become zero and consequently, from Eq. (3.5), the ratio $ {d_n\over \bar{\theta}}$ behaves as a non-vanishing constant. The large difference between the values of the ratio for $T<T_c$ and $T>T_c$ may be a consequence of the fact that we have neglected other contributions to the spectral function, like the scattering process $ N+ \pi \to \Delta $. These contributions, which are of order $T^4$, are negligible in the low temperature region but become substantial for $T\ge T_c$. Moreover, this difference may also originate from the use of the soft pion approximation, which is valid essentially for low $T$ ($T< T_c$). Therefore it is clear from this qualitative analysis, which is based on the soft pion approximation, that the temperature does not play a fundamental role in the suppression of the undesired $\theta$-term and hence the broken CP symmetry is not restored, as expected.
This is not strange; in fact, it has been shown that more heat does not automatically imply more symmetry [@MS; @DMS]. Moreover, some exact symmetries can be broken by increasing temperature [@W; @MS]. The symmetry non-restoration phenomenon, which means that a symmetry broken at $T=0$ remains broken even at high temperature, is essential for discrete symmetries, CP symmetry in particular. Indeed, symmetry non-restoration allows us to avoid the domain walls inherited after the phase transition [@ZKO] and to explain the baryogenesis phenomenon in cosmology [@S]. Furthermore, it can be very useful for solving the monopole problem in grand unified theories [@DMS]. [**Acknowledgments**]{} We are deeply grateful to T. Lhallabi and E. H. Saidi for their encouragements and stimulating remarks. N. B. would like to thank the Abdus Salam ICTP for hospitality and Prof. Goran Senjanovic for very useful discussions.\ This work is supported by the program PARS-PHYS 27.372/98 CNR and the convention CNPRST/ICCTI 340.00. [99]{} J. H. Christensen, J. W. Cronin, V. L. Fitch and R. Turlay, Phys. Rev. Lett. [**13**]{}(1964)138. N. Cabibbo, Phys. Rev. Lett. [**10**]{}(1963) 531;\ M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{}(1973) 652. D. A. Demir, A. Masiero, and O. Vives, Phys. Rev. [**D61**]{}(2000) 075009. I. Bigi and N. G. Ural’tsev, Sov. Phys. JETP [**73**]{}(2)(1991) 198;\ I. Bigi, Surveys in High Energy Physics, [**12**]{}(1998) 269. V. Baluni, Phys. Rev. [**D19**]{}(1979)2227. R. Crewther, P. di Vecchia, G. Veneziano, E. Witten, Phys. Lett. [**B88**]{}(1979)123. R. M. Barnett et al., Phys. Rev. [**D54**]{}(1996) 1. R. D. Peccei, hep-ph/9807516. R. Golub and S. K. Lamoreaux, hep-ph/9907282. R. D. Peccei and H.R. Quinn, Phys. Rev. [**D16**]{}(1977) 1791. G. Lazarides and Q. Shafi, “Monopoles, Axions and Intermediate Mass Dark Matter”, hep-ph/0006202. M. Pospelov and A. Ritz, Nucl. Phys. [**B558**]{}(1999) 243. M. Pospelov and A. Ritz, Phys. Rev. Lett. [**83**]{}(1999) 2526. M.
A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B147**]{}(1979) 385. V.M. Khatsimovsky, I.B. Khriplovich and A.S. Yelkhovsky, Ann. Phys. [**186**]{}(1988)1;\ C. T. Chan, E. M. Henly and T. Meissner, “Nucleon Electric Dipole Moments from QCD Sum Rules”, hep-ph/9905317. B. L. Ioffe, Nucl. Phys. [**B188**]{} (1981) 317;\ Y. Chung, H. G. Dosch, M. Kremer and D. Schall, Phys. Lett. [**B102**]{}(1981)175; Nucl. Phys. [**B197**]{}(1982)55. M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B166**]{}(1980) 493. B.L. Ioffe and A. V. Smilga, Nucl. Phys. [**B232**]{} (1984) 109. M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. [**B166**]{}(1980) 493. A. I. Bochkarev and M. E. Shaposhnikov, Nucl. Phys. [**B268**]{}(1986)220. R. Barducci, R. Casalbuoni, S. de Curtis, R. Gatto and G. Pettini, Phys. Lett. [**B244**]{}(1990) 311. J. Gasser and H. Leutwyler, Phys. Lett. [**B184**]{}(1987) 83;\ H. Leutwyler, in the Proceedings of QCD 20 Years Later, Aachen, 1992, ed. P.M. Zerwas and H.A. Kastrup (World Scientific, Singapore, 1993). Y. Koike, Phys. Rev. [**D48**]{} (1993) 2313;\ C. Adami and I. Zahed, Phys. Rev. [**D45**]{}(1992) 4312;\ T. Hatsuda, Y. Koike and S.H. Lee, Nucl. Phys. [**B394**]{} (1993)221. S. Mallik and K. Mukherjee, Phys. Rev. [**D58**]{} (1998) 096011;\ S. Mallik, Phys. Lett. [**B416**]{}(1998). H.G. Dosch, M. Jamin and S. Narison, Phys. Lett. [**B220**]{}(1989) 251;\ M. Jamin, Z. Phys. [**C37**]{}(1988)625. V. L. Eletsky, Phys. Lett. [**B245**]{}(1990) 229; Phys. Lett. [**B352**]{}(1995) 440. Proceedings of the workshop “A critical issue in the determination of the pion nucleon decay constant”, ed. Jan Blomgren, Phys. Scripta [**T87**]{} (2000)53. V. M. Belyaev and Y. I. Kogan, Sov. J. Nucl. Phys. [**40**]{}(1984) 659. I. I. Kogan and D. Wyler, Phys. Lett. [**B274**]{}(1992) 100. C. A. Dominguez, C. Van Gend and M. Loewe, Phys. Lett. [**B429**]{}(1998) 64;\ V.L. Eletsky and I.I. Kogan, Phys. Rev. [**D49**]{} (1994)3083. S.
Gupta, hep-lat/0001011; A. Ali Khan et al., hep-lat/0008011. S. Weinberg, Phys. Rev [**D9**]{}(1974) 3357. R. N. Mohapatra and G. Senjanovic, Phys. Rev. [**D20**]{} (1979) 3390;\ G. Dvali, A. Melfo, G. Senjanovic, Phys.Rev. [**D54**]{} (1996)7857 and references therein. Ya. B. Zeldovich, I. Yu. Kobzarev and L. B. Okun, JETP. [**40**]{}(1974) 1;\ T. W. Kibble, J. Phys. [**A9**]{}(1976) 1987, Phys. Rep. [**67**]{}(1980) 183. A. Sakharov, JETP Lett. [**5**]{} (1967) 24. G. Dvali, A. Melfo and G. Senjanovic, Phys. Rev. Lett. [**75**]{}(1995) 4559. **Figure Captions** {#figure-captions .unnumbered} =================== Figure: Temperature dependence of the ratio $\mid{d_n \over \bar{\theta}}\mid$
--- abstract: 'We consider the initial value problem $u_t = \Delta \log u$, $u(x,0) = u_0(x)\ge 0$ in ${\mathbb R}^2$, corresponding to the Ricci flow, namely conformal evolution of the metric $u \, (dx_1^2 + dx_2^2)$ by Ricci curvature. It is well known that the maximal (complete) solution $u$ vanishes identically after time $T= \frac 1{4\pi} \int_{{\mathbb R}^2} u_0 $. Assuming that $u_0$ is compactly supported we describe precisely the Type II vanishing behavior of $u$ at time $T$: we show the existence of an inner region with exponentially fast vanishing profile, which is, up to proper scaling, a [*soliton cigar solution*]{}, and the existence of an outer region of persistence of a logarithmic cusp. This is the only Type II singularity which has been shown to exist, so far, in the Ricci Flow in any dimension. It recovers rigorously formal asymptotics derived by J.R. King [@K].' address: - 'Department of Mathematics, Columbia University, New York, USA' - 'Department of Mathematics, Columbia University, New York, USA' author: - 'Panagiota Daskalopoulos$^*$' - Natasa Sesum title: 'Type II extinction profile of maximal Solutions to the Ricci flow in ${\mathbb R}^2$' --- [^1] Introduction ============ We consider the Cauchy problem $$\begin{cases} \label{eqn-u} u_t = \Delta \log u & \mbox{in} \,\, {\mathbb R}^2 \times (0,T)\\ u(x,0) = u_0(x) & x \in {\mathbb R}^2 , \end{cases}$$ for the [*logarithmic fast diffusion*]{} equation in ${\mathbb R}^2$, with $T >0$ and initial data $u_0 $ non-negative, bounded and compactly supported. It has been observed by S. Angenent and L. Wu [@W1; @W2] that equation represents the evolution of the conformally equivalent metric $g_{ij} = u\, dx_i\, dx_j$ under the [*Ricci Flow*]{} $$\label{eqn-ricci} \frac{\partial g_{ij}}{\partial t} = -2 \, R_{ij}$$ which evolves $g_{ij}$ by its Ricci curvature. 
The equivalence easily follows from the observation that the conformal metric $g_{ij} = u\, I_{ij}$ has scalar curvature $R = - (\Delta \log u) /u$ and in two dimensions $R_{ij} = \frac 12 \, R\, g_{ij}$. Equation arises also in physical applications, as a model for long Van der Waals interactions in thin films of a fluid spreading on a solid surface, if certain nonlinear fourth order effects are neglected, see [@dG; @B; @BP]. It is shown in [@DD1] that given initial data $u_0 \geq 0$ with $\int_{{\mathbb R}^2} u_0 \, dx < \infty$ and a constant $\gamma \geq 2 $, there exists a solution $u_\gamma$ of with $$\label{eqn-intc} \int_{{\mathbb R}^2} u_\gamma (x,t) \, dx = \int_{{\mathbb R}^2} u_0 \, dx - 2\pi \gamma \, t.$$ The solution $u_\gamma$ exists up to the exact time $T=T_\gamma$, which is determined in terms of the initial area and $\gamma$ by $T_\gamma= \frac{1}{2\pi\, \gamma} \, \int_{{\mathbb R}^2} u_0 \, dx.$ We restrict our attention to [*maximal solutions*]{} $u$ of , corresponding to the value $\gamma =2$ in , which vanish at time $$\label{eqn-mvt} T= \frac{1}{4 \pi} \, \int_{{\mathbb R}^2} u_0 (x) \, dx.$$ It is shown in [@DD1] and [@RVE] that if $u_0$ is compactly supported, then the maximal solution $u$ which vanishes at time $T$ satisfies the asymptotic behavior $$\label{eqn-agc} u(x,t) = \frac {2 t}{|x|^2\log^2 |x|} \left ( 1 + o(1) \right ), \qquad \mbox{as \,\, $|x| \to \infty$}, \quad 0 \leq t < T.$$ This bound, of course, deteriorates as $t \to T$. Geometrically, this corresponds to the condition that the conformal metric is complete. The manifold can be visualized as a surface of finite area with an unbounded cusp. J.R. King [@K] has formally analyzed the extinction behavior of maximal solutions $u$ of , as $t \to T^-$.
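As a symbolic sanity check (a sketch added here, not part of the original text), one can verify that the logarithmic-cusp profile $2t/(r^2\log^2 r)$ appearing in the asymptotics above is itself an exact radial solution of $u_t = \Delta \log u$, using the radial form of the two-dimensional Laplacian:

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)

# Logarithmic-cusp profile 2t / (r^2 log^2 r), considered for r > 1
u = 2 * t / (r**2 * sp.log(r)**2)

# Radial two-dimensional Laplacian of log u: (1/r) * d/dr ( r * d/dr log u )
lap_log_u = sp.diff(r * sp.diff(sp.log(u), r), r) / r

residual = sp.simplify(sp.diff(u, t) - lap_log_u)
print(residual)  # 0: the cusp solves u_t = Delta log u exactly
```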
His analysis, for radially symmetric and compactly supported initial data, suggests the existence of two regions of different behavior: in the [*outer region*]{} $(T-t)\, \log r > T$ the “logarithmic cusp” exact solution $2 t\, /|x|^2 \, \log^2 |x|$ of equation $u_t = \Delta \log u$ persists. However, in the [*inner region*]{} $(T-t)\, \log r \leq T$ the solution vanishes exponentially fast and approaches, after an appropriate change of variables, one of the soliton solutions $U$ of equation $U_\tau = \Delta \log U$ on $-\infty < \tau < \infty$ given by $U(x,\tau) = 1/( \lambda |x|^2 + e^{4\lambda \tau})$, with $\tau=1/(T-t)$ and $\lambda=T/2$. This behavior was established rigorously in the radially symmetric case by the first author and M. del Pino in [@DD2]. The precise asymptotics of the Ricci flow neckpinch in the compact case, on $S^n$, have been established by Knopf and Angenent in [@AngKn]. Our goal in this paper is to remove the assumption of radial symmetry and establish the vanishing behavior of maximal solutions of \eqref{eqn-u} for any non-negative compactly supported initial data. To state the inner region behavior in a precise manner, we perform the change of variables $$\label{eqn-rbu} \bar u(x,\tau) = \tau^2 \, u(x, t), \qquad \tau = \frac 1{T-t}$$ and $$\label{eqn-rtu} \qquad \tilde u(y,\tau) = \alpha(\tau) \, \bar u(\alpha(\tau)^{1/2} y,\tau),$$ with $$\label{eqn-atau} \alpha(\tau) = [\bar u(0,\tau)]^{-1} = [(T-t)^{-2} u(0,t)]^{-1}$$ so that $\tilde u(0,\tau)=1$. A direct computation shows that the rescaled solution $\tilde u$ satisfies the equation $$\label{eqn-tu} \tilde u_\tau = \Delta \log \tilde u + \frac {\alpha '(\tau)}{2\alpha (\tau)} \, \nabla \cdot ( y \, \tilde u) +\frac{2 \tilde u}{\tau}.$$ Then, the following result holds: \[Mth1\] (Inner behavior) Assume that the initial data $u_0$ is non-negative, bounded and compactly supported. 
Then, $$\label{eqn-lim} \lim_{\tau \to \infty} \frac{ \alpha ' (\tau)}{2\, \alpha(\tau)} = T$$ and the rescaled solution $\tilde u$ defined by \eqref{eqn-rbu}--\eqref{eqn-atau} converges, uniformly on compact subsets of ${\mathbb R}^2$, to the solution $$U(x) = \frac 1{ \frac T2 \, |y|^2 + 1}$$ of the steady state equation $$\Delta \log U + T\, \nabla \cdot ( y \, U)=0.$$ Since for any maximal solution $T= (1/{4\pi}) \int_{{\mathbb R}^2} u_0(x)\, dx$, this theorem shows, in particular, that the limit of the rescaled solution is uniquely determined by the area of the initial data. The uniqueness of the limit had not previously been shown in [@DD2], even under the assumption of radial symmetry. To describe the vanishing behavior of $u$ in the outer region we first perform the cylindrical change of variables $$\label{eqn-rv} v(\zeta,\theta,t) = r^2\, u(r,\theta,t), \qquad \zeta =\log r$$ with $(r,\theta)$ denoting the polar coordinates. Equation $u_t=\Delta \log u$ in cylindrical coordinates takes the form $$\label{eqn-v} v_t = \Delta_c \log v$$ with $\Delta_c$ denoting the Laplacian in cylindrical coordinates defined as $$\Delta_c \log v=(\log v)_{\zeta\zeta} + (\log v)_{\theta\theta}.$$ We then perform a further scaling, setting $$\label{eqn-rtv} \tilde v(\xi ,\theta, \tau) = \tau^2\, v (\tau \xi,\theta,t), \qquad \tau= \frac 1{T-t}.$$ A direct computation shows that $\tilde v$ satisfies the equation $$\label{eqn-tv} \tau \, \tilde v_{\tau} = \frac 1{\tau} (\log \tilde v)_{\xi\xi} + \tau \, (\log \tilde v)_{\theta\theta} + \xi \, \tilde v_{\xi} + 2\tilde v.$$ The extinction behavior of $u$ (or equivalently of $v$) in the outer region $\xi \geq T$ is described in the following result. \[Mth2\] (Outer behavior). Assume that the initial data $u_0$ is non-negative, bounded and compactly supported. 
Then, the rescaled solution $\tilde v$ defined by \eqref{eqn-rtv} converges, as $\tau \to \infty$, to the $\theta-$independent steady state solution $V(\xi)$ of equation \eqref{eqn-tv} given by $$\label{eqn-dV} V(\xi) = \begin{cases} \frac{ 2 T}{\xi^2}, \qquad &\xi > T \\ 0, \qquad &\xi < T. \end{cases}$$ Moreover, the convergence is uniform on the set $(-\infty, \xi^-]\times [0,2\pi]$, for any $-\infty < \xi^- < T$, and on compact subsets of $(T, +\infty) \times [0,2\pi]$. Under the assumption of radial symmetry this result follows from the work of the first author and del Pino [@DD2]. The proofs of Theorems \[Mth1\] and \[Mth2\] rely on sharp estimates on the geometric [*width*]{} $W$ and on the [*maximum curvature*]{} $R_{\max}$ of maximal solutions near their extinction time $T$ derived in [@DH] by the first author and R. Hamilton. In particular, it is found in [@DH] that the maximum curvature is proportional to $1/{(T-t)^2}$, which is not consistent with the natural scaling of the problem, which would entail blow-up of order $1/{(T-t)}$. One says that the vanishing behavior is [*of type II*]{}. The proof also makes extensive use of the Harnack estimate on the curvature $R = -\Delta \log u /u$ shown by Hamilton and Yau in [@HY]. Although the result in [@HY] is shown only for a compact surface evolving by the Ricci flow, we shall observe in Section \[sec-prelim\] that the result remains valid in our case as well. Finally, let us remark that the proof of the inner-region behavior is based on the classification of eternal complete solutions of the 2-dimensional Ricci flow, recently shown by the authors in [@DS]. Preliminaries {#sec-prelim} ============= In this section we will collect some preliminary results which will be used throughout the rest of the paper. For the convenience of the reader, we start with a brief description of the geometric estimates in [@DH] on which the proofs of Theorems \[Mth1\] and \[Mth2\] rely. Geometric Estimates. 
{#sec-ge} -------------------- In [@DH] the first author and R. Hamilton established upper and lower bounds on the geometric width $W(t) $ of the maximal solution $u$ of \eqref{eqn-u} and on the maximum curvature $R_{\max}(t)= \max_{x \in {\mathbb R}^2} R(x,t)$, with $R= - (\Delta \log u) /u $. Let $F:{\mathbb R}^2\to[0,\infty)$ denote a proper function, that is, a function such that $F^{-1}(a)$ is compact for every $a\in [0,\infty)$. The width of $F$ is defined to be the supremum of the lengths of the level curves of $F$, namely $w(F) = \sup_c L\{F=c\}.$ The width $w$ of the metric $g$, as introduced in [@DH], is defined to be the infimum $$w(g) = \inf_F w(F).$$ The estimates in [@DH] depend on the time to collapse $T-t$. However, they do not scale in the usual way. More precisely: \[thm-DH1\] There exist positive constants $c$ and $C$ for which $$\label{eqn-w} c \, (T-t) \leq W(t) \leq C\, (T-t)$$ and $$\label{eqn-c} \frac{c}{(T-t)^2} \leq R_{\max}(t) \leq \frac{C}{(T-t)^2}$$ for all $0< t < T$. The Hamilton-Yau Harnack estimate. ---------------------------------- In [@HY] Hamilton and Yau established a Harnack estimate on the curvature $R$ of a compact surface evolving by the Ricci flow, in the case where the curvature $R$ changes sign. Since the proof in [@HY] uses only local quantities, the result and its proof can be carried over to the complete, non-compact case. 
\[thm-harnack\] For any constants $E$ and $L$ we can find positive constants $A, B, C, D$ such that if a complete solution to the Ricci flow on ${\mathbb R}^2$ satisfies, at the initial time $t=0$, $$\label{equation-lower-R} R \ge 1-E$$ and $$\frac{1}{R+E}\frac{\partial R}{\partial t} - \frac{|\nabla R|^2}{(R+E)^2} \ge -L,$$ then for all $t\ge 0$ we have $$\label{equation-harnack0} \frac{1}{R+E}\frac{\partial R}{\partial t} - \frac{|\nabla R|^2}{(R+E)^2} + F(\frac{|\nabla R|^2}{(R+E)^2}, R+E) \ge 0$$ where $$F(X,Y) = A + \sqrt{2B(X+Y) + C} + D\log Y.$$ Integrating the above estimate along paths we obtain: Under the assumptions of Theorem \[Mth1\], there exist uniform constants $E >0$ and $C_1, C_2 > 0$ so that for every $x_1, x_2 \in {\mathbb R}^2$ and $T/2 < t_1 < t_2$ we have $$\label{equation-harnack} \frac{1}{\sqrt{R(x_1,t_1)+E}} \ge \frac{1}{\sqrt{R(x_2,t_2)+E}}- C_1(t_2-t_1) - C_2\frac{{\mathrm{dist}}_{t_1}^2(x_1,x_2)}{t_2-t_1}.$$ By the Aronson-Benilán inequality $R \ge -1/t \ge -2/T = 1-E$ for all $t\in [T/2,T)$. Hence the Harnack estimate \eqref{equation-harnack0} and the lower curvature bound on $R$ give $$\begin{aligned} \label{equation-partial} \frac{\partial R}{\partial t} &\ge& \frac{|\nabla R|^2}{R+E} - (2A+\sqrt{C})(R+E) - \sqrt{2B}|\nabla R| \nonumber \\&-&\sqrt{2B}(R+E)\sqrt{R+E} - D\log(R+E)(R+E)\nonumber \\ &\ge& \frac{|\nabla R|^2}{R+E} - A_1(R+E)\sqrt{R+E} - \frac{1}{4}\frac{|\nabla R|^2}{R+E} \\ &=& \frac{3}{4}\frac{|\nabla R|^2}{R+E} - A_1(R+E)\sqrt{R+E}\nonumber.\end{aligned}$$ Take any two points $x_1, x_2 \in {\mathbb R}^2$ and $T/2 \leq t_1 \le t_2 <T$ and let $\gamma$ be a curve connecting $x_1$ and $x_2$, such that $\gamma(t_1) = x_1$ and $\gamma(t_2) = x_2$. 
Since $$\frac{d}{dt}R(\gamma(t),t) = \frac{\partial R}{\partial t} + \langle \nabla R,\dot{\gamma}\rangle$$ using also (\[equation-partial\]) we find $$\begin{aligned} \frac{d}{dt}R(\gamma(t),t) &\ge& \frac{3}{4}\frac{|\nabla R|^2}{R+E} - A_1(R+E)\sqrt{R+E} - C|\dot{\gamma}|^2(R+E) - \frac{1}{4}\frac{|\nabla R|^2}{R+E} \\ &\ge& -A_3(R+E)^{3/2}(1 + |\dot{\gamma}|^2).\end{aligned}$$ Integrating the previous inequality along the path $\gamma$ gives $$\frac{1}{\sqrt{R(x_1,t_1)+E}} \ge \frac{1}{\sqrt{R(x_2,t_2)+E}} - C(t_2-t_1) - \int_{t_1}^{t_2}|\dot{\gamma}|_{g(t)}^2dt.$$ Due to the bound $R \ge 1-E$ we have for $t\ge s$ $$|\dot{\gamma}|^2_{g(t)} \le e^{(E-1)(t-s)}\, |\dot{\gamma}|^2_{g(s)}$$ and if we choose the curve $\gamma$ to be the minimal geodesic with respect to the metric $g(t_1)$ connecting $x_1$ and $x_2$ we obtain $$\frac{1}{\sqrt{R(x_1,t_1)+E}} \ge \frac{1}{\sqrt{R(x_2,t_2)+E}} - C_1(t_2-t_1) - C_2\frac{{\mathrm{dist}}_{t_1}^2(x_1,x_2)}{t_2-t_1}$$ as desired. Monotonicity of Solutions. -------------------------- Our solution $u(x,t)$ to (\[eqn-u\]) has compactly supported initial data. The classical argument based on reflection, due to Alexandrov and Serrin, proves that such solutions enjoy the following monotonicity in the radial direction: \[lemma-monotonicity\] Under the assumptions of Theorem \[Mth1\], if ${\mathrm{supp}}\, u_0(\cdot) \subset B_\rho(0)$, then $$\label{equation-monotonicity} u(x,t) \ge u(y,t),$$ for all $t\in (0,T)$ and every pair of points $x,y\in{\mathbb R}^2$ such that $|y| \ge |x|+\rho$. The proof of Lemma \[lemma-monotonicity\] is the same as the proof of Proposition $2.1$ in [@AC]. For the reader’s convenience we will briefly sketch it. Assume that $\rho=1$. By the comparison principle for maximal solutions it easily follows that if $K = {\mathrm{supp}}u(\cdot,0)$ and $K \subset \{x\in {\mathbb R}^2:\,\,x_2 > 0\}$, then $u(x_1,x_2,t) \ge u(x_1,-x_2,t)$ for $x_2\in {\mathbb R}^+$ and $t\in [0,T)$. 
Fix $x^0\in B_1$ and $x^1\in \partial B_{1+\delta}$ for $\delta > 0$. Let $\Pi$ be the hyperplane of points in ${\mathbb R}^2$ which are equidistant from $x^0$ and $x^1$. Then, it easily follows that $${\mathrm{dist}}(\Pi,\{0\}) \ge 1$$ which implies that $x^0$ and ${\mathrm{supp}}\, u(\cdot,0)$ lie in the same half-space with respect to $\Pi$. Since $x^1$ is the reflection of $x^0$ in $\Pi$, it follows that $u(x^0,t) \ge u(x^1,t)$. We can now let $\delta\to 0$ to get the claim. Notice that due to (\[eqn-agc\]), for every $t\in (0,T)$ we can define $x_t$ to be such that $u(x_t,t) = \max_{{\mathbb R}^2}u(\cdot,t)$. An easy consequence of Lemma \[lemma-monotonicity\] is the following result about $\{x_t\}_{t\in(0,T)}$. \[cor-maximums\] For every $t\in (0,T)$, $x_t\in B_{2\rho}(0)$. Inner Region Convergence {#sec-irc} ======================== This section is devoted to the proof of the inner region convergence, Theorem \[Mth1\] stated in the Introduction. We assume, throughout this section, that $u$ is a smooth maximal solution of \eqref{eqn-u}, with compactly supported initial data $u_0$, which vanishes at time $$T= \frac 1{4\pi} \int_{{\mathbb R}^2} u_0 \, dx.$$ Scaling and convergence {#sec-sc} ----------------------- We introduce a new scaling on the solution $u$, namely $$\label{eqn-rbu2} \bar u(x,\tau) = \tau^2 \, u(x,t), \qquad \tau = \frac 1{T-t}, \quad \tau \in (1/T, \infty).$$ Then $\bar u$ satisfies the equation $$\label{eqn-bu} \bar u_\tau = \Delta \log \bar u + \frac{2\bar u}{\tau}, \qquad \mbox{on \,\, $1/T \leq \tau < \infty.$}$$ Notice that under this transformation, $ \bar R := - \Delta \log \bar u/ \bar u$ satisfies the estimate $$\label{eqn-c1} \bar R_{\max}(\tau) \leq C$$ for some constant $C < \infty$. This is a direct consequence of Theorem \[thm-DH1\], since $\bar R_{\max} (\tau) = (T-t)^2\, R_{\max}(t)$. 
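For the reader's convenience we include the one-line computation behind \eqref{eqn-bu}. Since $\tau = (T-t)^{-1}$ we have $dt/d\tau = \tau^{-2}$, hence $$\bar u_\tau = 2\tau\, u + \tau^2\, u_t\, \frac{dt}{d\tau} = \frac{2\bar u}{\tau} + u_t = \Delta \log u + \frac{2\bar u}{\tau} = \Delta \log \bar u + \frac{2\bar u}{\tau},$$ where the last equality holds because $\log \bar u = \log u + 2\log \tau$ differs from $\log u$ by a spatial constant.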
For an increasing sequence $\tau_k \to \infty$ we set $$\label{eqn-ruk} \bar u_k(y,\tau) = \alpha_k \, \bar u(\alpha_k^{1/2}\, y, \tau +\tau_k), \qquad (y,\tau) \in {\mathbb R}^2 \times (- \tau_k + 1/T, \infty)$$ where $$\alpha_k = [\bar u(0,\tau_k)]^{-1}$$ so that $\bar u_k(0,0)=1$, for all $k$. Then, $\bar u_k$ satisfies the equation $$\label{eqn-uk} \bar u_\tau = \Delta \log \bar u + \frac{2\bar u}{\tau+\tau_k}.$$ Let $$\bar R_k := - \frac{\Delta \log \bar u_k}{\bar u_k}.$$ Then, by \eqref{eqn-c1}, we have $$\label{eqn-Rkk} \max_{y \in {\mathbb R}^2} \bar R_k(y,\tau) \leq C, \qquad -\tau_k + 1/T < \tau < +\infty.$$ We will also derive a global bound from below on $\bar R_k$. The Aronson-Benilán inequality $u_t \leq u/t$, on $ 0 < t < T$, gives the bound $ R(x,t) \geq - 1/t$ on $ 0 < t < T$. In particular, $ R(x,t) \geq - C$ on $ T/2 \leq t < T$, which in the new time variable $\tau=1/(T-t)$ implies the bound $$\bar R(x,\tau) \geq - \frac{C}{ \tau^2}, \qquad 2/T < \tau < \infty.$$ Hence $$\bar R_k(y,\tau) \geq - \frac C{(\tau+\tau_k)^2}, \qquad -\tau_k + 2/T < \tau < +\infty.$$ Combining the above inequalities we get $$\label{eqn-Rk} - \frac{C}{(\tau+\tau_k)^2} \leq \bar R_k(y,\tau) \leq C, \qquad \forall (y,\tau) \in {\mathbb R}^2 \times (-\tau_k + 2/T, +\infty).$$ Based on the above estimates we will now show the following convergence result. \[lem-ick\] For each sequence $\tau_k \to \infty$, there exists a subsequence $\tau_{k_l}$ of $\tau_k$, for which the rescaled solution $\bar u_{k_l}$ defined by \eqref{eqn-ruk} converges, uniformly on compact subsets of ${\mathbb R}^2 \times {\mathbb R}$, to an eternal solution $U$ of equation $U_\tau = \Delta \log U$ on ${\mathbb R}^2 \times {\mathbb R}$ with uniformly bounded curvature and uniformly bounded width. Moreover, the convergence is in $C^\infty(K)$, for any compact $K \subset {\mathbb R}^2 \times {\mathbb R}$. Denote by $x_k = x_{t_k}$ the maximum point of $u(\cdot,t_k)$. 
First, instead of rescaling our solution by $\alpha_k$ we can rescale it by $\beta_k = [\bar{u}(x_k,\tau_k)]^{-1}$, that is, consider $$\tilde{u}_k(y,\tau) = \beta_k\bar{u}(\beta_k^{1/2}\, y,\tau+\tau_k), \qquad \tau \in (-\tau_k+1/T,\infty).$$ For $y_k = \beta_k^{-1/2}\, x_k$ we have $\tilde{u}_k(y_k,0) = 1$ and $\tilde{u}_k(\cdot,0) \le 1$ since $x_k$ is the maximum point of $u(\cdot,t_k)$. Notice that $|y_k| \leq 2\rho \beta_k^{-1/2}$, because $x_k \in B_{2\rho}$, by Corollary \[cor-maximums\]. Since $\tilde u_k$ satisfies \eqref{eqn-uk}, standard arguments imply that $\tilde{u}_k$ is uniformly bounded from above and below away from zero on any compact subset of ${\mathbb R}^2\times {\mathbb R}$. In particular, there are uniform constants $C_1 >0$ and $C_2 < \infty$ so that $$\label{equation-quotient} C_1 \le \frac{\alpha_k}{\beta_k} \le C_2.$$ Let $K\subset {\mathbb R}^2$ be a compact set. By (\[equation-quotient\]), there is a compact set $K'$ so that $y \, (\frac{\alpha_k}{\beta_k})^{1/2} \in K'$ for all $y \in K$ and all $k$. Also, by the previous estimates we have $$C_1(K') \le \frac{\bar{u}(\beta_k^{1/2}\,z,\tau_k+\tau)}{\bar{u}(x_k,\tau_k)} = \tilde u_k(z,\tau) \le C_2(K')$$ for all $z\in K'$ and $\tau$ belonging to a compact subset of $ (-\infty ,\infty)$. Therefore, using the above estimates and remembering that $\alpha_k = [\bar u(0,\tau_k)]^{-1}$ we find $$\bar u_k(y,\tau)=\frac{\bar{u}(\alpha_k^{1/2}y,\tau+ \tau_k)}{\bar{u}(0,\tau_k)} \le \frac{1}{C_1}\frac{\bar{u}(\beta_k^{1/2}[(\frac{\alpha_k}{\beta_k})^{1/2}y],\tau_k+\tau)}{\bar{u}(x_k,\tau_k)} \le \frac{C_2(K')}{C_1} =: C_2(K).$$ Similarly, $$C_1(K) := \frac{C_1(K')}{C_2} \le \frac{\bar{u}(\alpha_k^{1/2} \, y,\tau_k+\tau)}{\bar{u}(0,\tau_k)}= \bar u_k (y,\tau)$$ for $y \in K$ and $\tau$ belonging to a compact set. Hence, by the classical regularity theory, the sequence $\{ \bar u_k \}$ is equicontinuous on compact subsets of ${\mathbb R}^2 \times {\mathbb R}$. 
It follows that there exists a subsequence $\tau_{k_l}$ of $\tau_k$ such that $\bar u_{k_l} \to U$ on compact subsets of $ {\mathbb R}^2 \times {\mathbb R}$, where $U$ is an eternal solution of equation $$\label{eqn-U} U_\tau = \Delta \log U, \qquad \mbox{on}\,\, {\mathbb R}^2 \times {\mathbb R}$$ with infinite area $\int_{{\mathbb R}^2} U(y,\tau)\, dy = \infty$ (since $\int_{{\mathbb R}^2} \bar u_k(y,\tau)\, dy = 4\pi(\tau + \tau_k)$). In addition the classical regularity theory of quasilinear parabolic equations implies that $\{\bar u_{k_l} \}$ can be chosen so that $\bar u_{k_l} \to U$ in $C^\infty(K)$, for any compact set $K \subset {\mathbb R}^2 \times {\mathbb R}$, with $U(0,0) = 1$. It then follows that $\bar R_{k_l} \to \bar R:= -( \Delta \log U)/U$. Taking the limit $k_l \to \infty$ on both sides of \eqref{eqn-Rk} we obtain the bounds $$\label{eqn-cU} 0 \leq \bar R \leq C, \qquad \mbox{on \,\, ${\mathbb R}^2 \times {\mathbb R}$.}$$ Finally, to show that $U$ has uniformly bounded width, we take the limit $k_l \to \infty$ in the width bound of Theorem \[thm-DH1\]. As a direct consequence of Lemma \[lem-ick\] and the classification result of eternal solutions to the complete Ricci flow on ${\mathbb R}^2$, recently shown in [@DS], we obtain the following convergence result. \[thm-ick\] For each sequence $\tau_k \to \infty$, there exists a subsequence $\tau_{k_l}$ of $\tau_k$ and numbers $\lambda, \bar{\lambda} >0$ for which the rescaled solution $\bar u_{k_l}$ defined by \eqref{eqn-ruk} converges, uniformly on compact subsets of ${\mathbb R}^2 \times {\mathbb R}$, to the soliton solution $U$ of the Ricci Flow given by $$\label{eqn-soliton} U(y,\tau) = \frac 1{\lambda \, |y|^2 + e^{4 \bar{\lambda} \tau}}.$$ Moreover, the convergence is in $C^\infty(K)$, for any compact $K \subset {\mathbb R}^2 \times {\mathbb R}$. 
From Lemma \[lem-ick\], $\bar u_{\tau_{k_l}} \to U$, where $U$ is an eternal solution of $U_t= \Delta\log U$, on ${\mathbb R}^2 \times {\mathbb R}$, with uniformly bounded width, such that $\sup_{{\mathbb R}^2}R(\cdot,t) \le C(t) < \infty$ for every $t\in (-\infty,\infty)$. The main result in [@DS] shows that the limiting solution $U$ is a soliton of the form $U(x,\tau) = \frac 2{\beta \, (|x-x_0|^2 + \delta \, e^{2\beta \tau})}$, with $\beta >0$, $\delta >0$, which under the condition $U(0,0) =1$ takes the form $U(x,\tau) = \frac{1}{\lambda|x-x_0|^2 + e^{4\bar{\lambda}\tau}}$, with $\lambda, \bar{\lambda} >0$. It remains to show that the limit $U(\cdot,\tau)$ is rotationally symmetric around the origin, that is, $x_0 = 0$. This will follow from Lemma \[lemma-monotonicity\] and Lemma \[lem-ick\]. We claim that $\lim_{k\to\infty}\alpha_k = \infty$. Indeed, since $\bar{u}_k(\cdot,\tau_k)$ converges uniformly on compact subsets of ${\mathbb R}^2\times {\mathbb R}$ to a cigar soliton $U(y,0)$, we have that $$\begin{aligned} \bar{u}(0,\tau_k) &=& \tau_k^2u(0,t_k)\approx \frac{1}{\lambda|x_0|^2 + e^{4\bar{\lambda}\tau_k}} \\ &\le& e^{-4\bar{\lambda}\tau_k} \to 0,\end{aligned}$$ as $k\to\infty$, and therefore $\lim_{k\to\infty}\alpha_k = \infty$. Let us express $\bar u=\bar u (r,\theta,\tau)$ in polar coordinates. For every $r >0 $ there is $k_0$ so that $\alpha_k^{1/2}\, r > 1$ for $k\ge k_0$. 
By Lemma \[lemma-monotonicity\] $$\begin{aligned} \min_{\theta}\bar{u}(\alpha_k^{1/2} \, r,\theta,\tau_k) &\ge& \max_{\theta} \bar{u}(\alpha_k^{1/2}\, r+1,\theta,\tau_k) \\ &=& \max_{\theta}\bar{u}(\alpha_k^{1/2}(r+\alpha_k^{-1/2}),\theta,\tau_k)\end{aligned}$$ which implies $$\min_{\theta}\bar{u}_k(r,\theta,0) \ge \max_{\theta}\bar{u}_k(r+\alpha_k^{-1/2},\theta,0).$$ Let $k\to\infty$ to obtain $$\min_{\theta}U(r,\theta,0) \ge \max_{\theta} U(r,\theta,0)$$ which yields that the limit $U(r,\theta,0)$ is radially symmetric with respect to the origin and therefore $x_0=0$, implying that $U$ is of the form (\[eqn-soliton\]). Further behavior {#sec-fb} ---------------- We will now use the geometric properties of the rescaled solutions and their limit, to further analyze their vanishing behavior. Our analysis is similar to that in [@DD2], but is applicable to the nonradial case as well. However, the uniqueness of the limit along sequences $\tau_k \to \infty$, which will be shown in Theorem \[thm-curvature-limit\], is an improvement of the results in [@DD2], even in the radial case. We begin by observing that rescaling back in the original $(x,t)$ variables, Theorem \[thm-ick\] gives the following asymptotic behavior of the maximal solution $u$ of \eqref{eqn-u}. \[cor-ick1\] Assuming that along a sequence $t_k \to T$, the sequence $\bar u_k$ defined by \eqref{eqn-ruk} with $\tau_k = (T-t_k)^{-1}$ converges to the soliton solution $U_\lambda$, on compact subsets of ${\mathbb R}^2 \times {\mathbb R}$, then along the sequence $t_k$ the solution $u(x,t)$ of \eqref{eqn-u} satisfies the asymptotics $$\label{eqn-asu1} u(x,t_k) \approx \frac{(T-t_k)^2} {\lambda \, |x|^2 + \alpha_k}, \qquad \mbox{on} \quad |x| \leq \alpha_k^{1/2} \, M$$ for all $M >0$. In addition, the curvature $R(0, t_k) = - \Delta \log u(0,t_k)/u(0,t_k)$ satisfies $$\label{eqn-limc} \lim_{t_k \to T} (T-t_k)^2 \, R(0,t_k) = 4\, \lambda.$$ The proof of the Lemma above is the same as the proof of Lemma $3.3$ in [@DD2]. 
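The constant $4\lambda$ appearing in \eqref{eqn-limc} is simply the maximum curvature of the limiting soliton; we record the short computation, here for the cigar profile in the form quoted in the Introduction (that is, with $\bar\lambda = \lambda$). Writing $U(y,\tau) = 1/f$ with $f = \lambda |y|^2 + e^{4\lambda\tau}$, a direct calculation gives $$\Delta \log U = -\Delta \log f = -\left( \frac{4\lambda}{f} - \frac{4\lambda^2 |y|^2}{f^2} \right) = -\frac{4\lambda\, e^{4\lambda\tau}}{f^2} = U_\tau,$$ which confirms that $U$ solves the eternal equation $U_\tau = \Delta \log U$, and shows that the curvature $-(\Delta \log U)/U = 4\lambda\, e^{4\lambda\tau}/f$ attains its maximum value $4\lambda$ at $y=0$.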
The following Lemma provides a sharp bound from below on the maximum curvature $4\, \lambda$ of the limiting solitons. \[lem-bl\] Under the assumptions of Theorem \[Mth1\] the constant $\lambda$ in each limiting solution satisfies $$\lambda \geq \frac{T}{2}.$$ We are going to use the estimate proven in Section 2 of [@DH]. It is shown there that if at time $t$ the solution $u$ of \eqref{eqn-u} satisfies the scalar curvature bound $R(t) \geq - 2\, k(t)$, then the width $W(t)$ of the metric $u(t)\, (dx_1^2+dx_2^2)$ (cf. Section \[sec-ge\] for the definition) satisfies the bound $$W(t) \leq \sqrt{k(t)} \, A(t) = 4 \pi \, \sqrt{k(t)} \, (T-t).$$ Here $A(t) = 4 \pi (T-t)$ denotes the area of the plane with respect to the conformal metric $u(t)\, (dx_1^2+dx_2^2)$. Introducing polar coordinates $(r,\theta)$, let $$\bar{U}(r,t) = \max_{\theta}u(r,\theta,t) \quad \mbox{and} \quad \underbar{U}(r,t) =\min_{\theta}u(r,\theta,t).$$ Then $$\underbar{U}(r,t) \le u(r,\theta,t) \le \bar{U}(r,t)$$ implying the bound $$\label{eqn-111} W(\underbar{U}(t)) \le W(t) \le 4 \pi \, \sqrt{k(t)} \, (T-t).$$ Observe next that the Aronson-Benilán inequality on $u$ implies the bound $R(x,t) \ge -{1}/{t}.$ Hence we can take $k(t) = \frac{1}{2t}$ in (\[eqn-111\]). Observing that for the radially symmetric solution $\underbar{U}$ the width $W(\underbar{U}) = \max_{r\ge 0}2\pi r\sqrt{\underbar{U}}(r,t)$, we conclude the pointwise estimate $$\label{equation-222} 2 \pi r \sqrt{\underbar{U}}(r,t) \le \frac{4\pi(T-t)}{\sqrt{2t}}, \qquad r\ge 0, \,\, 0<t<T.$$ By Lemma \[lemma-monotonicity\], $$\label{equation-333} r\sqrt{u}(r+\rho,\theta,t) \le r\sqrt{\underbar{U}}(r,t), \qquad \mbox{for} \,\, r > 0.$$ For a sequence $t_k\to T$, let $\alpha_k = [\bar u(0,\tau_k)]^{-1}$, $\tau_k = 1/(T-t_k)$, as before. 
Using (\[eqn-asu1\]), (\[equation-222\]) and (\[equation-333\]) we find $$\frac{r(T-t_k)}{\sqrt{\lambda (r+\rho)^2+\alpha_k}} \le \frac{2(T-t_k)}{\sqrt{2t_k}}, \qquad r\le M\alpha_k^{1/2},$$ for any positive number $M$. Hence, when $r = M\,\alpha_k^{1/2}$ we obtain the estimate $$\frac{M\, \alpha_k^{1/2} }{\sqrt{\lambda (M+\rho \, \alpha_k^{-1/2})^2 \, \alpha_k + \alpha_k}} \leq \frac{2\, }{\sqrt{2\,t_k}}$$ or $$\frac{M}{\sqrt{\lambda \, (M+\rho \, \alpha_k^{-1/2})^2 + 1}} \leq \frac{2 }{\sqrt{2t_k}}.$$ Letting $t_k \to T$ and taking squares on both sides, we obtain $$\frac{1}{\lambda + 1/M^2} \leq \frac{2}{T}.$$ Since $M >0$ is an arbitrary number, we finally conclude $\lambda \geq T/2$, as desired. We will next provide a bound on the behavior of $\alpha(\tau)= [\bar u(0,\tau)]^{-1}$, as $\tau \to \infty.$ In particular, we will prove \eqref{eqn-lim}. We begin with a simple consequence of Lemma \[lem-bl\]. \[lem-altau1\] Under the assumptions of Theorem \[thm-ick\] we have $$\label{eqn-altau} \liminf_{\tau \to \infty} \frac{\alpha'(\tau)}{\alpha(\tau)} \geq 4\, \lambda_0$$ with $\lambda_0 = T/2$. The proof of Lemma \[lem-altau1\] is the same as the proof of Lemma $3.5$ in [@DD2]. \[cor5\] Under the hypotheses of Theorem \[thm-ick\], we have $$\label{eqn-asa} \alpha(\tau) \geq e^{2T \tau + o(\tau)}, \qquad \mbox{as \,\, $\tau \to \infty$}.$$ The next Proposition will be crucial in establishing the outer region behavior of $u$. \[prop-1\] Under the hypotheses of Theorem \[Mth1\], we have $$\label{eqn-asa4} \lim_{\tau \to \infty} \frac{\log \alpha(\tau)}{\tau} = 2T.$$ See Proposition $3.7$ in [@DD2]. A consequence of Lemma \[cor-ick1\] and Proposition \[prop-1\] is the following result, which will be used in the next section. 
\[cor-astv1\] Under the assumptions of Lemma \[lem-ick\] the rescaled solution $\tilde v$ defined by \eqref{eqn-rtv} satisfies $$\lim_{\tau \to \infty} \tilde v(\xi,\theta,\tau) =0, \qquad \mbox{uniformly on} \,\, (\xi,\theta) \in (-\infty, \xi^-] \times [0,2\pi]$$ for all $\xi^- < T$. So far we have shown that $\bar{\lambda} = \frac 14 \lim_{\tau\to\infty}\frac{\log\alpha(\tau)}{\tau} = T/2$ and that $\lambda \ge T/2$. In the next theorem we will show that actually $\lambda = T/2$. Theorem \[thm-curvature-limit\] is an improvement of the results in [@DD2], since it leads to the uniqueness of a cigar soliton limit. \[thm-curvature-limit\] $\lim_{\tau\to\infty}\bar{R}(0,\tau) = 2T$. We will first prove the following lemma. \[lemma-xi\] For every $\beta > 1$ and for every sequence $\tau_i\to\infty$ there is a sequence $s_i\in (\tau_i,\beta\tau_i)$ such that $\lim_{i\to\infty}\bar{R}(0,s_i) = 2T$. By definition $$(\log\alpha(\tau))_{\tau} = \bar{R}(0,\tau) - \frac{2}{\tau}.$$ Therefore, by the mean value theorem, there exists $s_i \in (\tau_i,\beta\tau_i)$ such that $$\log\alpha(\beta\tau_i) - \log\alpha(\tau_i) = \left(\bar{R}(0,s_i)-\frac{2}{s_i}\right)(\beta - 1)\tau_i.$$ Since $\log \alpha(\tau) = 2T\tau + o(\tau)$, by Proposition \[prop-1\], we conclude $$(\bar{R}(0,s_i) - \frac 2{s_i})(\beta - 1) = \frac{(2T\beta\tau_i + o(\beta\tau_i)) - (2T\tau_i + o(\tau_i))}{\tau_i}$$ which yields $$\bar{R}(0,s_i) = 2T + \frac 2{s_i} + \frac{o(\beta\tau_i) + o(\tau_i)}{\tau_i}$$ readily implying the Lemma. By Lemma \[lem-bl\], we have $\lambda \ge T/2$. Assume there is a sequence $\tau_i\to\infty$ such that $\lim_{i\to\infty}\bar{R}(0,\tau_i) = 4\lambda$, where $4\lambda = 2T + \delta$ for some $\delta > 0$. We know that $\bar{R}(0,\tau) \le \tilde{C}$ for a uniform constant $\tilde{C}$. Choose $\beta > 1$ so that the following two conditions hold $$\label{equation-first} \frac{1}{\beta\sqrt{2\tilde{C}}} > C(\beta - 1)$$ and $$\label{equation-second} 2T + \frac \delta2 > 2T \left (\frac{\beta}{1-C(\beta - 1)\sqrt{2T}} \right )^2$$ for some uniform constant $C$ to be chosen later. 
Notice that both (\[equation-first\]) and (\[equation-second\]) are possible by choosing $\beta > 1$ sufficiently close to $1$. By Lemma \[lemma-xi\] we find a sequence $s_i\in (\tau_i,\beta\tau_i)$ so that $\lim_{i\to\infty}\bar{R}(0,s_i) = 2T$. Let $T/2 < t < T$. Then $R(x,t) \ge -\frac{2}{T} = 1-E$. The Hamilton-Yau Harnack estimate (\[equation-harnack\]), applied to $t_i$ (where $\tau_i = \frac{1}{T-t_i}$) and $\bar t_i>t_i$ (where $\frac{1}{T-\bar t_i} = s_i$), yields $$\frac{1}{\sqrt{\bar{R}(0,\tau_i)+ \frac{E}{\tau_i^2}}} \ge \ \frac{\tau_i}{s_i\sqrt{\bar{R}(0,s_i) + \frac{E}{s_i^2}}} - C\frac{s_i-\tau_i}{s_i}.$$ Notice that due to our choice of $\beta$ in (\[equation-first\]) we have $$\begin{aligned} \frac{\tau_i}{s_i\sqrt{\bar{R}(0,s_i) + \frac{E}{s_i^2}}} &\ge& \frac{\tau_i}{s_i\sqrt{2\tilde{C}}} \\ &\ge& \frac{1}{\beta\sqrt{2\tilde{C}}} \ge C(\beta - 1) \ge C\frac{s_i-\tau_i}{s_i}.\end{aligned}$$ Therefore $$\begin{aligned} \sqrt{\bar{R}(0,\tau_i) + \frac{E}{\tau_i^2}} &\le& \frac{1}{\frac{\tau_i} {s_i\sqrt{\bar{R}(0,s_i) + \frac{E}{s_i^2}}} - C\frac{s_i-\tau_i}{s_i}} \\ &=& \frac{s_i\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}{\tau_i - C(s_i-\tau_i) \sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}.\end{aligned}$$ Denote $A = \sqrt{\bar{R}(0,s_i) + \frac{E}{s_i^2}}$. Since the function $f(x) = \frac{Ax}{\tau_i - CA(x-\tau_i)}$ is increasing for $x\in [\tau_i,\beta\tau_i]$, we conclude $$\begin{aligned} \label{equation-opposite} \sqrt{\bar{R}(0,\tau_i) + \frac{E}{\tau_i^2}} &\le& \frac{\beta\tau_i\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}{\tau_i - C(\beta\tau_i-\tau_i)\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}} \\ &=& \frac{\beta\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}{1 - C(\beta - 1)\sqrt{\bar{R}(0,s_i)+\frac{E}{s_i^2}}}. 
\end{aligned}$$ Letting $i\to\infty$ in (\[equation-opposite\]) we get $$\sqrt{4\lambda} \le \frac{\beta\sqrt{2T}}{1-C(\beta-1)\sqrt{2T}}$$ which implies $$2T + \delta \le 2T \left (\frac{\beta}{{1-C(\beta-1)\sqrt{2T}}} \right )^2,$$ contradicting our choice of $\beta$ in (\[equation-second\]). Proof of Theorem \[Mth1\] ------------------------- We finish this section with the proof of Theorem \[Mth1\] which easily follows from the results in Sections \[sec-sc\] and \[sec-fb\]. Take any sequence $\tau_k \to \infty$. Observe that by Theorem \[thm-curvature-limit\] $$\label{eqn-altk} \lim_{k \to \infty} \frac{\alpha'(\tau_k)}{\alpha(\tau_k)} = 2T.$$ By the definitions of $\tilde u$ and $\bar u_k$ (\eqref{eqn-rtu} and \eqref{eqn-ruk} respectively) we have $\tilde u(y,\tau_k) = \bar u_k(y,0)$. By Theorem \[thm-ick\], we have $\bar u_k \to U_{\frac T2}$ and therefore $$\tilde u(y,\tau_k) \to U_{\frac T2}(y,0) = \frac 1{\frac{T}{2} \, |y|^2 +1}.$$ The limit $U_{\frac T2}$ does not depend on the sequence $\tau_k \to \infty$ and the proof of Theorem \[Mth1\] is now complete. Outer Region Asymptotic Behavior {#sec-orc} ================================ We assume, throughout this section, that $u$ is a positive, smooth, maximal solution of \eqref{eqn-u} satisfying the assumptions of Theorem \[Mth2\] which vanishes at time $$T=\frac 1{4\pi} \int_{{\mathbb R}^2} u_0 \, dx.$$ As in the Introduction we consider the solution $ v(\zeta,\theta,t) = r^2\, u(r,\theta,t)$, $\zeta =\log r$, of equation \eqref{eqn-v} in cylindrical coordinates. 
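For the reader's convenience we recall how \eqref{eqn-v} is obtained. In the coordinates $(\zeta,\theta)$ the Euclidean Laplacian takes the form $\Delta = e^{-2\zeta}\,(\partial^2_{\zeta} + \partial^2_{\theta})$, and since $\log v = \log u + 2\zeta$ differs from $\log u$ by a term that is linear in $\zeta$, we have $(\partial^2_\zeta + \partial^2_\theta)\log v = (\partial^2_\zeta + \partial^2_\theta)\log u$. Hence $$v_t = r^2\, u_t = r^2\, \Delta \log u = (\partial^2_\zeta + \partial^2_\theta) \log u = (\partial^2_\zeta + \partial^2_\theta) \log v = \Delta_c \log v.$$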
We next set $$\label{eqn-rbv1} \bar v(\zeta,\theta, \tau) = \tau^2 \, v(\zeta,\theta, t), \qquad \tau = \frac 1{T-t}.$$ and $$\label{eqn-rtv1} \tilde v(\xi ,\theta,\tau) = \bar v (\tau \xi,\theta,\tau).$$ The function $\tilde v$ satisfies the equation $$\label{eqn-tilv} \tau \, \tilde v_{\tau} = \frac 1{\tau} (\log \tilde v)_{\xi\xi} +\tau (\log \tilde v)_{\theta\theta} + \xi \, \tilde v_{\xi} + 2\tilde v.$$ Note that the curvature $R=-\Delta_c \log v/v$, is given in terms of ${\tilde v}$ by $$\label{eqn-tcurvature} R(\tau \xi,\theta,t) =- \frac{( \log {\tilde v})_{\xi\xi} (\xi,\theta,\tau)+ \tau^2 ( \log {\tilde v})_{\theta\theta}(\xi,\theta,\tau)}{{\tilde v}}.$$ Moreover, the area of $\tilde v$ is constant, in particular $$\label{eqn-mtv2} \int_{-\infty}^\infty \int_0^{2\pi} \tilde v(\xi,\theta, \tau)\, d\theta \, d\xi =4\pi , \qquad \quad\forall \tau.$$ We shall show that $\tilde v(\cdot, \tau)$ converges, as $\tau \to \infty$, to a $\theta$-independent steady state of equation \eqref{eqn-tilv}, namely to a solution of the linear first order equation $$\label{eqn-V2} \xi \, V_{\xi} + 2 V =0.$$ The area condition \eqref{eqn-mtv2} then implies that $$\label{eqn-mV} \int_{-\infty}^{\infty} V(\xi)\, d\xi =2.$$ Positive solutions of equation \eqref{eqn-V2} are of the form $$\label{eqn-rV2} V(\xi) = \frac{\eta }{\xi^2}$$ where $\eta >0$ is any constant. These solutions become singular at $\xi=0$ and in particular are non-integrable at $\xi=0$, so that they do not satisfy the area condition \eqref{eqn-mV}. However, it follows from Corollary \[cor-astv1\] that $V$ must vanish in the inner region $ \xi < T$. We will show that while $\tilde v(\xi,\theta,\tau) \to 0$, as $\tau \to \infty$ on $(-\infty, T)$, we have $\tilde v(\xi,\theta,\tau) \geq c >0$, for $\xi > T$ and that actually $\tilde v(\xi,\theta,\tau) \to 2\, T /\xi^2$, on $(T, \infty)$, as stated in Theorem \[Mth2\]. The rest of the section is devoted to the proof of Theorem \[Mth2\]. We begin by showing the following properties of the rescaled solution $\tilde v$. 
\[lem-ptv\] The rescaled solution $\tilde v$ given by - has the following properties:\ i. $\tilde v(\cdot,\tau) \leq C$, for a constant $C$ independent of $\tau$.\ ii. For any $\xi^- < T$, $ \tilde v(\cdot,\tau) \to 0$, as $\tau \to \infty$, uniformly on $(-\infty,\xi^-] \times [0,2\pi]$.\ iii. Let $\xi (\tau) = (\log \alpha(\tau))/2\tau$, with $\alpha(\tau) = [\tau^2 \, u(0,t)]^{-1}.$ Then, there exists $\tau_0 >0$ and a constant $\eta >0$, independent of $\tau$, such that $$\label{eqn-bbtilv} \tilde v(\xi,\theta,\tau) \geq \frac \eta{\xi^2}, \qquad \mbox{on}\,\, \xi \geq \xi (\tau),\, \, \tau \geq \tau_0.$$ In addition $$\label{eqn-xitau} \xi(\tau) = T + o(1), \qquad \mbox{as} \,\, \tau \to \infty.$$ iv. $\tilde v(\xi,\theta,\tau)$ also satisfies the upper bound $$\tilde v(\xi,\theta,\tau) \leq \frac {C}{\xi^2}, \qquad \mbox{on}\,\, \xi >0, \, \, \tau \geq \tau_0$$ for some constants $C >0$ and $\tau_0 >0$. \(i) One can easily show using the maximum principle that $v(\zeta,\theta,t) \leq C/\zeta^2$, for $\zeta \geq s_0$ with $s_0$ sufficiently large, with $C$ independent of $t$. This implies the bound $\tilde v(\xi,\theta,\tau) \leq C/\xi^2$, for $\xi \, \tau > s_0$. On the other hand, by Corollary \[cor-astv1\], we have $\tilde v (\xi,\theta,\tau) \leq C$, on $\xi < \xi^- <T$, with $C$ independent of $\tau$. Combining the above, the desired estimate follows. \(ii) This is shown in Corollary \[cor-astv1\]. \(iii) We have shown in the previous section that the rescaled solution $\bar u(x,\tau) = \tau^2 \, u(x,t)$, $\tau=1/(T-t)$, defined by satisfies the asymptotics $\bar u(x,\tau) \approx 1/(\frac T2 \, |x|^2 + \alpha(\tau))$, when $|x| \leq \sqrt{\alpha (\tau)}$. Hence $$\tilde v (\xi(\tau), \theta, \tau) = \bar v (\xi(\tau)\, \tau,\theta,\tau) \approx \frac{e^{2\xi(\tau) \tau}}{\frac T2 \, e^{2\xi(\tau) \tau} + \alpha(\tau)} \approx \frac 1{\frac T2 +1}$$ if $\xi(\tau)=\frac{\log \alpha(\tau) }{2\tau}.$ Observe next that readily follows from Proposition \[prop-1\].
Hence, it remains to show $\tilde v \geq \eta /\xi^2$, for $\xi \in [\xi(\tau), \infty)$, $\tau_0 \leq \tau < \infty$. To this end, we will compare $\tilde v$ with the subsolution $V_\eta(\xi,\theta) = {\eta}/{\xi^2}$ of equation . According to our claim above, there exists a constant $\eta >0$, so that $$V_\eta(\xi(\tau),\theta) = \frac{\eta}{\xi(\tau)^2} \leq \tilde v(\xi(\tau),\theta, \tau).$$ Moreover, by the growth condition , we can make $$\tilde v (\xi, \theta, \tau_0) > \frac {\eta}{\xi^2}, \qquad \mbox{on \,\, } \xi \geq \xi (\tau_0)$$ by choosing $\tau_0 >0$ and $\eta$ sufficiently small. By the comparison principle, follows. \(iv) Since $u_0$ is compactly supported and bounded, it follows that $u_0(r) \leq 2\, A/ (r^2\, \log^2 r)$, on $r>1$ for some $A >0$. Since $2(t+A)/(r^2 \, \log^2 r)$ is an exact solution of equation , it follows by the comparison principle on that $u(r,t) \leq 2(t+A)/(r^2 \, \log^2 r)$, for $r >1$, which readily implies the desired bound on $\tilde v$, with $C = 2(A+T)$. \[lem-first-spherical\] For any compact set $K \subset (T,\infty)$, there is a constant $C(K)$ for which $$\label{equation-first-spherical} \max_{\xi \in K} \left | \int_0^{2\pi}(\log\tilde{v})_{\xi} (\xi,\theta,\tau) \, d\theta \, \right | \le C(K), \qquad \forall \tau \ge 2/T.$$ We integrate in the $\theta$ variable and use the bounds $R \ge - {1}/{t} \geq - 2/T$, for $t \geq T/2$, and ${\tilde v}(\tau\xi,t) \le C$ shown in Lemma \[lem-ptv\], to get $$\int_0^{2\pi} (\log\tilde{v} )_{\xi\xi}(\xi,\theta,\tau) \, d\theta \le C$$ for all $\tau = 1/(T-t) \geq 2/T$. We can now proceed as in the proof of Lemma $4.2$ in [@DD2] to show (\[equation-first-spherical\]).
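The barrier $2(t+A)/(r^2\log^2 r)$ used in part (iv) above can be verified symbolically to be an exact solution of the logarithmic fast diffusion equation $u_t=\Delta\log u$. A minimal sympy check, using the radial form of the planar Laplacian (a sanity check only, not part of the proof):

```python
import sympy as sp

r, t, A = sp.symbols('r t A', positive=True)

# Barrier from part (iv): u(r, t) = 2 (t + A) / (r^2 log^2 r).
u = 2 * (t + A) / (r**2 * sp.log(r)**2)

# Planar Laplacian of a radial function: Delta f = f'' + f'/r.
f = sp.log(u)
lap_log_u = sp.diff(f, r, 2) + sp.diff(f, r) / r

# u solves the logarithmic fast diffusion equation u_t = Delta(log u).
residual = sp.simplify(sp.diff(u, t) - lap_log_u)
assert residual == 0
```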
To simplify the notation, we set $$R_c ( \zeta, \theta, t) = R(r, \theta,t), \qquad \zeta=\log r.$$ \[lem-inf\] For any compact set $K \subset (T,\infty)$, there is a constant $C(K)$ such that for any $\xi_0 \in K$ and $\gamma >0$ $$\min_{[\xi_0,\xi_0 + \gamma \, \frac{\log\tau}{\tau}]\times[0,2\pi]} R_c(\xi \tau,\theta,t)\le \frac{C(K) \, \tau}{\gamma \, \log\tau}.$$ Assume that for some $K$ and $\gamma$, $\min_{[\xi_0,\xi_0 + \gamma\, \frac{\log\tau}{\tau}] \times [0,2\pi]} R_c(\xi\tau,\theta,t) \ge \frac{M\tau}{\gamma \log \tau}$, for $M$ large. Then, it follows from and the lower bound $\tilde v \geq \eta/\xi^2$ shown in Lemma \[lem-ptv\], that for every $\xi \in [\xi_0,\xi_0 + \gamma\, \frac{\log\tau}{\tau}]$ we have $$\begin{aligned} \int_0^{2\pi}(\log\tilde{v})_{\xi\xi}(\xi,\theta,\tau)\, d\theta &=& - \int_0^{2\pi}R_c(\xi\tau,\theta,t) \, \tilde v(\tau\xi,\theta,t) \, d\theta \\ &\le& - \frac{C}{\xi^2} \, \min_{[\xi_0,\xi_0 + \gamma \, \frac{\log\tau}{\tau}] \times[0,2\pi]} R_c(\xi\tau,\theta,t) \\ &\le& -C_1(K) \, \frac{M \tau}{ \gamma \log \tau}\end{aligned}$$ which combined with Lemma \[lem-first-spherical\] implies $$\begin{aligned} -C(K) &\le& \int_0^{2\pi} (\log\tilde{v})_\xi(\xi,\theta,\tau) \, d\theta \\ &\le& \int_0^{2\pi} (\log\tilde{v})_\xi (\xi_0,\theta,\tau) \, d\theta - C_1(K) \, \frac{\gamma \log\tau}{\tau} \frac{M\tau}{\gamma \log \tau} \\ &=& \int_0^{2\pi} (\log\tilde{v})_\xi (\xi_0,\theta,\tau) \, d\theta - C_1(K)\, M \le C(K) - C_1(K) \, M \end{aligned}$$ which is impossible if $M$ is chosen sufficiently large. \[prop-bcurv\] For every $K \subset (T,\infty)$ compact, there is a constant $C(K)$ depending only on $K$, such that for any $\xi_0 \in K$ $$\max_{[\xi_0,\xi_0 + \frac{\log\tau}{\tau}] \times[0,2\pi]} R_c(\xi\tau,\theta,t)\le \frac{C(K)\, \tau}{\log\tau}.$$ Let $\xi_1\in K$, $\theta_1\in [0,2\pi]$ and $\tau_1$ be arbitrary. Choose $\xi_2$ such that $T < \xi_2 < \min K$ and $\tau_2 $ such that $\xi_1\tau_1 = \xi_2\tau_2$.
Since $\xi_2 < \xi_1$, we have $\tau_2 > \tau_1$. Set $t_i= T - 1/{\tau_i}$, $i=1,2$. We next define the set $A_{\xi_2} = \{\xi: \,\, \xi_2 \le \xi \le \xi_2 + \gamma\frac{\log\tau_2}{\tau_2}\}$. Let $\xi_0 \in A_{\xi_2}$ and $\theta_2\in [0,2\pi]$ be such that $$R_c(\xi_0\tau_2,\theta_2,t_2) = \min_{ (\xi,\theta) \in A_{\xi_2}\times [0,2\pi] } \, R_c(\xi\tau_2,\theta,t_2)$$ and set $x_1 = (e^{\xi_1\tau_1},\theta_1)$ and $x_2 = (e^{\xi_0\tau_2},\theta_2)$. Since $\xi_1 \tau_1 = \xi_2 \tau_2 \leq \xi_0 \tau_2$, we have $|x_1| \leq |x_2|$. Denoting by ${\mathrm{dist}}_{t_1}(x_1,x_2)$ the distance with respect to the metric $g_{t_1} = u(\cdot, t_1) \, (dx^2 + dy^2)$, we have: For any $0 < \gamma <1$, there is a constant $C=C(K,\gamma)$ so that $${\mathrm{dist}}_{t_1}(x_1,x_2) \le \frac{C(K,\gamma)}{\tau_1^{1-\gamma}}.$$ [*Proof of Claim.*]{} We have seen in the proof of Lemma \[lem-ptv\] that $u(x,t) \le \frac{C}{|x|^2\log^2|x|}$, for all $|x| \ge 1$ and all $t\in [0,T)$. If $\sigma$ is the euclidean segment connecting $x_1$ and $x_2$, this implies $$\begin{aligned} \label{equation-dist} {\mathrm{dist}}_{t_1}(x_1,x_2) &\le& \int_{\sigma} \sqrt{u}(\cdot,t_1)\,d\sigma \nonumber \\ &\le& C \, \frac{|e^{\xi_1\tau_1} - e^{\xi_0\tau_2}|}{e^{\xi_1\tau_1}\xi_1\tau_1} \nonumber \\ &\le& C\, \frac{e^{\xi_2\tau_2 + \gamma\log\tau_2} - e^{\xi_1\tau_1}}{e^{\xi_1\tau_1}\xi_1\tau_1} \nonumber \\ &\le& \frac{C}{\xi_1\tau_1}(e^{\gamma\log\tau_2} - 1) = \frac{C \, \xi_1^{\gamma -1}}{\xi_2^{\gamma}\tau_1^{1-\gamma}} - \frac C{\xi_1\tau_1} \nonumber \\ &\le& \frac{C \, \xi_1^{\gamma -1}}{\xi_2^{\gamma}\tau_1^{1-\gamma}} \le \frac{C(K,\gamma)}{\tau_1^{1-\gamma}}.\end{aligned}$$ To finish the proof of the Proposition, we first apply the Harnack estimate to obtain the inequality $$\frac{1}{\sqrt{R_c(\xi_1\tau_1,\theta_1,t_1)+E}} \ge \frac{1}{\sqrt{R_c(\xi_0\tau_2,\theta_2,t_2)+E}} - C\, (t_2-t_1) - C\,
\frac{{\mathrm{dist}}^2_{t_1}(x_1,x_2)}{t_2-t_1}.$$ By Lemma \[lem-inf\], using also that $\xi_1\tau_1 = \xi_2\tau_2$ (since $\tau_2 > \tau_1$, we have $\xi_1 > \xi_2$), we get $$\begin{aligned} \frac{1}{\sqrt{R_c(\xi_1\tau_1,\theta_1,t_1)+E}} &\ge& \frac{C(K,\gamma)\, \sqrt{\log\tau_2}}{\sqrt{\tau_2}} \, - \frac{C\, (\xi_1 - \xi_2)}{\xi_2\tau_2} \, - \frac{C(K,\gamma)}{\tau_1^{1-2\gamma}} \, \frac {\tau_2} {(\tau_2-\tau_1)} \\ &=&\frac{C(K,\gamma)\, \sqrt{\log\tau_2}}{\sqrt{\tau_2}} \, - \frac{C_1(K,\gamma)}{\tau_2} - \frac{C_2(K,\gamma)}{\tau_1^{1-2\gamma}}. \end{aligned}$$ In the last inequality we used that $$\frac {\tau_2} {(\tau_2-\tau_1)} = \frac {1} {(1-\tau_1/\tau_2)} = \frac {1} {(1-\xi_2/\xi_1)} = \frac {\xi_1} {(\xi_1-\xi_2)}$$ and that $(\xi_1-\xi_2)/\xi_2$ and $\xi_1/(\xi_1-\xi_2)$ depend only on the set $K$. Take $\gamma = \frac{1}{4}$. Using that $\tau_2/\tau_1=\xi_1/\xi_2$ depends only on $K$, we conclude the inequalities $$\begin{aligned} \label{equation-curv1} \frac{1}{\sqrt{R_c(\xi_1\tau_1,\theta_1,t_1)+E}} &\ge& \frac{\tilde C (K) \, \sqrt{\log\tau_2}}{\sqrt{\tau_2}} \nonumber \\ &\ge& \frac{\tilde C_1 (K) \sqrt{\log\tau_1}}{\sqrt{\tau_1}}\end{aligned}$$ for $\tau_1$ sufficiently large, depending only on $K$. Estimate (\[equation-curv1\]) yields the bound $$R_c(\xi_1\tau_1,\theta_1,t_1) \le \frac{C(K) \,\tau_1}{\log\tau_1}$$ finishing the proof of the Proposition. \[cor-error\] Under the assumptions of Theorem \[Mth2\], we have $$\lim_{\tau \to \infty} \frac 1{\tau} \int_0^{2\pi} (\log {\tilde v})_{\xi\xi}(\xi,\theta,\tau) \, d\theta =0$$ uniformly on compact subsets of $(T,\infty)$. We begin by integrating in $\theta$, which gives $$\int_0^{2\pi} (\log {\tilde v})_{\xi\xi}(\xi,\theta,\tau)\, d\theta = - \int_0^{2\pi} R_c(\xi \tau,\theta,t)\, {\tilde v}(\xi,\theta,\tau)\, d\theta, \quad \tau=\frac1{T-t}.$$ Let $K \subset (T,\infty)$ be compact.
By the Aronson–Bénilan inequality and Proposition \[prop-bcurv\] $$- \frac 1t \leq R_c(\xi\, \tau,\theta,t) \leq \frac{C(K) \, \tau}{\log \tau}.$$ Since $\tilde v \leq C$ (by Lemma \[lem-ptv\]), we conclude $$\label{eqn-000} \left | \frac 1\tau \int_0^{2\pi} (\log {\tilde v})_{\xi\xi}(\xi,\theta,\tau)\, d\theta \right | \leq \frac{C(K)}{\log \tau}$$ from which the lemma directly follows. We next introduce the new time variable $$s = \log \tau = - \log (T-t), \qquad s \geq -\log T.$$ To simplify the notation we still call $\tilde v(\xi,\theta, s)$ the solution $\tilde v$ in the new time scale. Then, it is easy to compute that $\tilde v(\xi,\theta, s)$ satisfies the equation $$\label{eqn-tilvs} \tilde v_s = e^{-s} \, (\log \tilde v)_{\xi\xi} + e^s (\log \tilde v)_{\theta\theta}+ \xi\, \tilde v_\xi + 2\, \tilde v.$$ For an increasing sequence of times $s_k \to \infty$, we let $$\tilde v_k (\xi,s) = \tilde v (\xi, s+s_k), \qquad - \log T-s_k < s < \infty.$$ Each $\tilde v_k$ satisfies the equation $$\label{eqn-vkk} (\tilde v_k)_s = e^{-(s+s_k)} (\log \tilde v_k)_{\xi\xi} + e^{s+s_k}(\log\tilde{v}_k)_{\theta\theta} + \xi \, (\tilde v_k)_{\xi} + {2\tilde v_k}$$ and the area condition $$\label{eqn-mcvk} \int_{-\infty}^\infty\int_0^{2\pi} \tilde v_k(\xi,\theta,s)\, d\theta \, d\xi = 2 .$$ Defining the functions $$W_k(\eta,s) = \int_{\eta}^{\infty} \int_0^{2\pi}\tilde{v}_k(\xi,\theta,s)\,d\theta\,d\xi, \quad \eta \in (T,\infty), \, - \log T -s_k < s < \infty$$ we have: \[prop-ock2\] Passing to a subsequence, $\{W_k\}$ converges uniformly on compact subsets of $\eta\in (T,\infty)$ to the time-independent steady state ${2T}/{\eta}$. In addition, for any $p \ge 1$ and $\xi_0\in (T,\infty)$, the solution $\tilde v_k(\xi,\theta,s)$ of converges in $L^p([\xi_0,\infty)\times[0,2\pi])$ norm to ${2T}/{\xi^2}$.
We first integrate in $\theta$ and $\xi\in [\eta,\infty)$, for $\eta\in (T,\infty)$, to find that each $W_k$ satisfies the equation $$(W_k)_s = \int_{\eta}^{\infty}\int_0^{2\pi}\frac{(\log \tilde v_k)_{\xi\xi}}{\tau_k(s)}\,d\theta\,d \xi + \int_{\eta}^{\infty}\int_0^{2\pi} \xi \, (\tilde{v}_k)_{\xi}\,d\theta\,d\xi + 2 \, W_k$$ with $\tau_k(s)= e^{s+s_k}$. Integrating by parts the second term yields $$\begin{aligned} \int_{\eta}^{\infty}\int_0^{2\pi}\xi \, (\tilde{v}_k)_{\xi}\,d\theta\,d\xi &=& -W_k(\eta,s) - \eta\int_0^{2\pi}\tilde{v}_k(\eta,\theta,s)\,d\theta + \int_0^{2\pi}\lim_{\xi\to\infty}\xi \tilde{v}_k(\xi,\theta,s)\, d\theta \\ &=& -W_k(\eta,s) - \eta\int_0^{2\pi}\tilde{v}_k(\eta,\theta,s)\,d\theta \end{aligned}$$ since due to our estimates on $\tilde{v}$ in Lemma \[lem-ptv\], we have $\lim_{\xi\to\infty} \xi \tilde{v}_k(\xi,\theta,s) = 0$, uniformly in $k$ and $\theta$. We conclude that $$(W_k)_s = \int_{\eta}^{\infty}\int_0^{2\pi}\frac{(\log \tilde v_k)_{\xi\xi}}{\tau_k(s)}\,d\theta\,d \xi \, + W_k + \eta \, (W_k)_{\eta}.$$ Let $K \subset (T,\infty)$ be compact. Then, by $$\left | \int_{\eta}^{\infty}\int_0^{2\pi}\frac{(\log \tilde v_k)_{\xi\xi}}{\tau_k(s)}\,d\theta\,d \xi \, \right | \leq \frac{C(K)}{s+s_k}.$$ Also, by Lemma \[lem-ptv\] and Proposition \[prop-bcurv\], there exists a constant $C=C(K)$ for which the bounds $$\label{equation-uniform-est} |W_k(\eta,s)| \le C, \quad |(W_k)_s(\eta,s)| \leq C, \quad |(W_k)_{\eta}(\eta,s)| \le C$$ hold, for $s\ge -\log T$.
Hence, passing to a subsequence, $W_k(\eta,s)$ converges uniformly on compact subsets of $(T,\infty)\times {\mathbb R}$ to a solution $W$ of the equation $$W_s = \eta \, W_{\eta} + W = (\eta \, W)_{\eta} \qquad \mbox{on} \,\,\, (T,\infty)\times {\mathbb R}$$ with $$\lim_{\eta\to T} W(\eta,s) = 2, \qquad s\in {\mathbb R}$$ and $$\lim_{\eta\to\infty}W(\eta,s) = 0, \qquad s\in {\mathbb R}.$$ As in [@DD2], one can show that $W$ is completely determined by its boundary values at $T$, and it is the steady state $$W(\eta,s) = \frac{2\, T}{\eta}, \qquad \eta > T, \,\, s \in {\mathbb R}.$$ To show the $L^p$ convergence, we first notice that by the comparison principle $$v(\zeta,t) \le \frac{2\, T}{(\zeta - \zeta_0)^2}, \qquad \zeta \ge \zeta_0,\,\, 0 < t < T$$ for $\zeta_0=\log \rho$, with $\rho$ denoting the radius of the support of $u_0$. This yields the bound $$\tilde{v}_k(\xi,\theta,s) \le \frac{2\, T}{(\xi - \zeta_0/\tau_k(s))^2}, \qquad \xi \ge T.$$ By the triangle inequality we have $$\begin{aligned} \int_{\eta}^{\infty} \int_0^{2\pi} \left | \frac{2\, T}{\xi^2} - \tilde{v}_k \right |\, d\theta\,d\xi &\le& \int_{\eta}^{\infty} \int_0^{2\pi} \left ( \frac{2\, T}{(\xi - \zeta_0/\tau_k(s))^2} - \tilde{v}_k \right )\,d\theta\,d\xi \\ &+& \int_{\eta}^{\infty}\int_0^{2\pi} \left (\frac{2\, T}{(\xi - \zeta_0/\tau_k(s))^2} - \frac{2T}{\xi^2} \right ) \, d\theta\,d\xi\end{aligned}$$ where the second integral converges to zero, as $k \to \infty$, by the first part of the Lemma. It is easy to see that the third integral converges to zero as well. This gives us the desired $L^1$ convergence, which immediately implies the $L^p$ convergence, since $|\tilde{v}_k(\xi,\theta,s) - {2T}/{\xi^2}|$ is uniformly bounded on $[\xi_0,\infty)\times [0,2\pi]$, for $\xi_0 \ge T$ and $s\ge -\log T$.
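The steady state identified in the proof above can be checked symbolically. A minimal sympy sketch (a sanity check, not part of the argument) verifying that $W=2T/\eta$ is a time-independent solution of $W_s=(\eta W)_\eta$ with the stated boundary behavior:

```python
import sympy as sp

eta, T = sp.symbols('eta T', positive=True)

# Claimed steady state of  W_s = (eta * W)_eta  on (T, infinity).
W = 2 * T / eta

# W is time-independent, so W_s = 0; the right-hand side must vanish too.
rhs = sp.diff(eta * W, eta)
assert sp.simplify(rhs) == 0

# Boundary behavior: W -> 2 as eta -> T and W -> 0 as eta -> infinity.
assert W.subs(eta, T) == 2
assert sp.limit(W, eta, sp.oo) == 0
```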
\[rem-pointwise\] The $L^p$ convergence in the previous Lemma implies that there is a subsequence $k_l$ so that $\tilde{v}_{k_l} (\xi,\theta,s) \to {2T}/{\xi^2}$ pointwise, almost everywhere on $(T,\infty)\times[0,2\pi]$. Set $\tau_k(s) = e^{s+s_k}$. Since $$\frac{(\log\tilde{v})_{\xi\xi}(\xi, \theta,\tau)}{\tau} + \tau \, (\log\tilde{v})_{\theta\theta}(\xi, \theta,\tau) = -\frac 1{\tau} \, R_c(\xi\, \tau, \theta,t) \, \tilde v(\xi,\theta,\tau),$$ we can rewrite (\[eqn-vkk\]) as $$\label{equation-rewrite} (\tilde{v}_k)_s = -\frac{R_c }{\tau_k(s)}\, \tilde v_k + \xi \, (\tilde v_k)_{\xi} + {2 \, \tilde v_k}.$$ We divide the equation by $\tilde{v}_k$ and integrate it in $\theta$. Denoting by $Z_k(\xi,s) = \int_0^{2\pi} \log \tilde v_k(\xi,\theta,s)d\theta$ we get $$(Z_k)_s = -\int_0^{2\pi}\frac{R_c}{\tau_k(s)} \, d\theta + \xi (Z_k)_{\xi} + 4\pi.$$ Notice that by Proposition \[prop-bcurv\], we have $$\label{equation-curv-lim} \left | \frac{R_c}{\tau_k(s)} \right | \le \frac{C(K)}{\log\tau_k(s)}$$ and that by Lemma \[lem-first-spherical\] $$|(Z_k)_{\xi}(\xi,s)| = \left |\int_0^{2\pi}(\log\tilde{v}_k)_{\xi}(\xi,\theta,s) \, d\theta \right | \le C(K)$$ for $\xi\in K$, a compact subset of $(T,\infty)$ and $s\ge -s_k - \log T$. This also implies the bound $$|(Z_k)_s(\xi,s)| \le C(K).$$ \[prop-ock3\] Passing to a subsequence, $Z_k(\xi,s)$ converges uniformly on compact subsets of $(T,\infty) \times {\mathbb R}$ to a solution $Z$ of the equation $$\label{eqn-V10} Z_s = \xi \, Z_{\xi} + 4\, \pi \qquad \mbox{on} \,\, (T, \infty) \times {\mathbb R}.$$ Let $ E \subset (T, \infty) \times {\mathbb R}$ be compact. Then according to the previous estimates, the sequence $Z_k$ is equicontinuous on $E$, hence passing to a subsequence it converges to a function $Z$. In addition, the estimate readily implies that $Z$ is a solution of the first order equation .
\[claim-right-thing\] The function $Z$ is given by $$\label{eqn-Z} Z(\xi,s) = 2\pi \, \log \frac{2T}{\xi^2}, \qquad (\xi,s)\in (T,\infty)\times {\mathbb R}.$$ Since $\int_0^{2\pi}\log\tilde{v}_k(\xi,\theta,s) \, d\theta \to Z(\xi,s)$, uniformly in $\xi$ on compact subsets of $(T,\infty)$, for any $A > 0$ we have $$\int_{\eta}^{\eta +A}\int_0^{2\pi}\log\tilde{v}_k(\xi,\theta,s)\,d\theta\,d\xi \to \int_{\eta}^{\eta+A} Z(\xi,s)\,d\xi.$$ By Remark \[rem-pointwise\] and the dominated convergence theorem it follows that for every $A > 0$ we have $$\int_{\eta}^{\eta + A} 2\pi\, \log\frac{2T}{\xi^2}\, d\xi = \int_{\eta}^{\eta + A} Z(\xi,s)\,d\xi$$ implying that $Z$ is given by . We are finally in a position to conclude the proof of Theorem \[Mth2\]. [*Proof of Theorem \[Mth2\].*]{} We begin by observing that by Lemma \[lem-ptv\] $$\tilde v(\xi,\theta,\tau) \to 0, \qquad \mbox{as} \,\, \tau \to \infty$$ uniformly on $(-\infty,\xi^-] \times [0,2\pi]$, for any $-\infty < \xi^- < T$. To show the convergence on the outer region, observe that by Proposition \[prop-ock3\] and Lemma \[claim-right-thing\] $$\label{equation-uniform} \int_0^{2\pi}\log\tilde{v}_k(\xi,\theta,s) \, d\theta \to 2\pi\log\frac{2T}{\xi^2}$$ uniformly on compact subsets of $(T, \infty) \times (-\infty,\infty).$ Set $$\underline{v}_k(\xi,s) = \min_{\theta\in [0,2\pi]}\tilde{v}_k(\xi,\theta,s) \quad \mbox{and} \quad \overline{v}_k(\xi,s) = \max_{\theta\in [0,2\pi]}\tilde{v}_k(\xi,\theta,s).$$ Let us recall that ${\rm supp}\, u_0 \subset B_\rho(0)$.
By the monotonicity property of the solutions shown in Lemma \[lemma-monotonicity\], we have $$\begin{aligned} \label{equation-squeeze} 2\pi \, \log \underline{v}_k(\xi,s) &\le& \int_0^{2\pi}\log\tilde{v}_k(\xi,\theta,s)\, d\theta \le 2\pi\, \log\overline {v}_k(\xi,s) \nonumber \\ &\le& 2\pi\, \log\frac{e^{2\xi\tau_k(s)}}{(e^{\xi\tau_k(s)}-1)^2} + 2\pi \log\underline{v}_k(\xi + \frac{\log ( 1- \rho \, e^{-\xi\tau_k(s)})}{\tau_k(s)},s) \nonumber \\ &\le& 2\pi\log\frac{e^{2\xi\tau_k(s)}}{(e^{\xi\tau_k(s)}-\rho)^2} + \int_0^{2\pi}\log\tilde{v}_k(\xi + \frac{\log (1 - \rho \, e^{-\xi\tau_k(s)})}{\tau_k(s)},\theta,s)\, d\theta.\end{aligned}$$ Combining (\[equation-uniform\]) and (\[equation-squeeze\]) yields that $$\label{equation-radial-uniform} 2\pi\log\overline{v}_k(\xi,s) \to 2\pi\log\frac{2T}{\xi^2} \quad \mbox{and} \quad 2\pi\log \underline{v}_k(\xi,s) \to 2\pi\log\frac{2T}{\xi^2}$$ uniformly on compact subsets of $(T,\infty)\times {\mathbb R}$. Since $$\underline{v}_k(\xi,s) \le \tilde{v}_k(\xi,\theta, s) \le \bar{v}_k(\xi,s)$$ the above readily implies that $\tilde v_k(\xi,\theta,s) \to {2T}/{\xi^2}$ uniformly on compact subsets of $(T,\infty)\times [0,2\pi] \times (-\infty,\infty)$. Since the limit is independent of the sequence $s_k \to \infty$, we conclude that $$\tilde v(\xi,\theta,\tau) \to \frac {2T}{\xi^2}, \qquad \mbox{as} \,\, \tau \to \infty$$ uniformly on compact subsets of $(T,\infty)\times [0,2\pi]$, which finishes the proof of Theorem \[Mth2\]. [99]{} Angenent, S. The zero set of a solution of a parabolic equation, [*J. Reine Angew. Math.*]{} 390 (1988), 79–96. Angenent, S., Knopf, D., Precise asymptotics of the Ricci flow neckpinch; preprint available on: www.ma.utexas.edu/ danknopf. Aronson, D.G., Bénilan P., Régularité des solutions de l’équation de milieux poreux dans ${\bf R}^n$, [*C.R. Acad. Sci. Paris, 288*]{}, 1979, pp 103-105. 
Aronson, D.G., Caffarelli, L.A., The initial trace of a solution of the porous medium equation, [*Transactions of the Amer. Math. Soc. 280*]{} (1983), 351–366. Bertozzi, A.L., The mathematics of moving contact lines in thin liquid films, [*Notices Amer. Math. Soc. 45*]{} (1998), no. 6, pp 689–697. Bertozzi, A.L., Pugh M., The lubrication approximation for thin viscous films: regularity and long-time behavior of weak solutions, [*Comm. Pure Appl. Math. 49*]{} (1996), no. 2, pp 85–123. Cao, H.-D., Chow, B., Recent developments on the Ricci flow, [*Bull. Amer. Math. Soc. (N.S.) 36*]{} (1999), no. 1, pp 59–74. Chow, B., The Ricci flow on the $2$-sphere. J. Differential Geom. 33 (1991), no. 2, 325–334. Chow, B., On the entropy estimate for the Ricci flow on compact $2$-orbifolds. J. Differential Geom. 33 (1991), no. 2, 597–600. de Gennes, P.G., Wetting: statics and dynamics, [*Reviews of Modern Physics, 57 No 3*]{}, 1985, pp 827-863. Daskalopoulos, P., del Pino, M.A., On a Singular Diffusion Equation, [*Comm. in Analysis and Geometry, Vol. 3*]{}, 1995, pp 523-542. Daskalopoulos, P., del Pino, M.A., Type II collapsing of maximal solutions to the Ricci flow in ${\mathbb R}^2$, to appear in Ann. Inst. H. Poincaré Anal. Non Linéaire. Daskalopoulos, P., Hamilton, R., Geometric Estimates for the Logarithmic Fast Diffusion Equation, [*Comm. in Analysis and Geometry*]{}, 2004, to appear. Daskalopoulos, P., Sesum, N., Eternal solutions to the Ricci flow on ${\mathbb R}^2$, preprint. Esteban, J.R., Rodríguez, A., Vazquez, J.L., A nonlinear heat equation with singular diffusivity, [*Arch. Rational Mech. Analysis, 103*]{}, 1988, pp. 985-1039. Galaktionov, V.A., Peletier, L.A., Vazquez, J.L., Asymptotics of the fast-diffusion equation with critical exponent; [*Siam. J. Math. Anal., 31*]{}(1999), 1157–1174. Galaktionov, Victor A.; Vazquez, Juan Luis, A stability technique for evolution partial differential equations. A dynamical systems approach.
Progress in Nonlinear Differential Equations and their Applications, 56. Birkhauser Boston, Inc., Boston, MA, 2004. Hamilton, R., Yau, S-T, The Harnack estimate for the Ricci flow on a surface - Revisited, [*Asian J. Math.*]{}, Vol 1, No 3, pp. 418-421. Hamilton, R., The Ricci flow on surfaces, [*Contemp. Math., 71*]{}, Amer. Math. Soc., Providence, RI, 1988, pp 237-262. Hamilton, R. The formation of singularities in the Ricci flow, [*Surveys in differential geometry, Vol. II*]{} pp 7–136, Internat. Press, Cambridge, MA, 1995. Hamilton, R., The Harnack estimate for the Ricci Flow, J. Differential Geometry [**37**]{} (1993) pp 225-243. Herrero, M. and Pierre, M., The Cauchy problem for $u_t = \Delta u^m$ when $0<m<1$, [*Trans. Amer. Math. Soc., 291*]{}, 1985, pp. 145-158. Hsu, S.-Y; Dynamics of solutions of a singular diffusion equation; [*Adv. Differential Equations*]{} [**7** ]{} (2002), no. 1, 77–97. Hsu, Shu-Yu; Asymptotic profile of solutions of a singular diffusion equation as $t\to\infty$, [*Nonlinear Anal.*]{} [**48**]{} (2002), no. 6, Ser. A: Theory Methods, 781–790. Hsu, S.-Y; Large time behaviour of solutions of the Ricci flow equation on $R^2$, [*Pacific J. Math.*]{} [**197**]{} (2001), no. 1, 25–41. Hsu, S.-Y; Asymptotic behavior of solutions of the equation $u_t=\Delta \log u$ near the extinction time, [*Advances in Differential Equations*]{}, [**8**]{}, No 2, (2003), pp 161–187. Hui, K.-M. Singular limit of solutions of the equation $u_t=\Delta({u^m}/m)$ as $m\to0$. Pacific J. Math. 187 (1999), no. 2, 297–316. King, J.R., Self-similar behavior for the equation of fast nonlinear diffusion, [*Phil. Trans. R. Soc. London, A 343*]{}, (1993), pp 337–375. Rodriguez, A; Vazquez, J. L.; Esteban, J.R, The maximal solution of the logarithmic fast diffusion equation in two space dimensions, [*Adv. Differential Equations 2*]{} (1997), no. 6, pp 867–894. Wu, L.-F., A new result for the porous medium equation derived from the Ricci flow, [*Bull. Amer. Math.
Soc., 28*]{}, 1993, pp 90-94. Wu, L.-F., The Ricci Flow on Complete ${\bf R}^2$, [*Communications in Analysis and Geometry, 1*]{}, 1993, pp 439-472. [^1]: $*:$ Partially supported by the NSF grants DMS-01-02252, DMS-03-54639 and the EPSRC in the UK
--- abstract: 'Using first-principles calculations, we investigate the positional dependence of trace elements such as O and Cu on the crystal field parameter $A_2^0$, proportional to the magnetic anisotropy constant $K_u$ of Nd ions placed at the surface of Nd$_2$Fe$_{14}$B grains. The results suggest the possibility that the $A_2^0$ parameter of Nd ions at the (001) surface of Nd$_2$Fe$_{14}$B grains exhibits a negative value when the O or Cu atom is located near the surface, closer than its equilibrium position. At the (110) surface, however, O atoms located at the equilibrium position provide a negative $A_2^0$, while for Cu additions $A_2^0$ remains positive regardless of Cu’s position. Thus, Cu atoms are expected to maintain a positive local $K_u$ of surface Nd ions more frequently than O atoms when they approach the grain surfaces in the Nd-Fe-B grains.' author: - Yuta Toga - Tsuneaki Suzuki - Akimasa Sakuma title: 'Effects of trace elements on the crystal field parameters of Nd ions at the surface of Nd$_2$Fe$_{14}$B grains' --- INTRODUCTION ============ Rapidly increasing demand for efficient electric motors motivates the development of high-performance Nd-Fe-B magnets. In sintered Nd-Fe-B magnets, Nd is frequently substituted with Dy, owing to Dy’s suppression of coercivity ($H_c$) degradation. However, Dy is expensive and decreases the magnetization of Nd-Fe-B magnets. To realize Dy-free high-performance Nd-Fe-B magnets, we must understand the $H_c$ mechanism of rare-earth (Re) permanent magnets. Although the $H_c$ mechanism has been discussed for sintered Nd-Fe-B magnets,[@r1; @r2; @r3] our understanding of $H_c$ is incomplete, requiring further examination. 
Recent papers emphasized that the development of high-coercivity Nd-Fe-B magnets requires understanding their microstructure, especially the grain boundary (GB) phase surrounding the Nd$_2$Fe$_{14}$B grains.[@r4; @r5] Researchers have reported that the intergranular Nd-rich phase includes neodymium oxides (NdO$_x$) with diverse crystal structures (e.g., fcc, hcp), suggesting that O atoms must exist near Nd atoms at the interface between the Nd-rich phase and Nd$_2$Fe$_{14}$B grains.[@r6; @r7] Recently, Amin [*et al.*]{}[@r8] confirmed the segregation of Cu to the NdO$_x$/Nd$_2$Fe$_{14}$B interface after annealing, suggesting that the Nd$_2$Fe$_{14}$B grains are lapped by a Cu-rich layer. Thus, O and Cu atoms are considered to stay around the Nd ions at the interfaces, playing a key role in the coercive force of Nd-Fe-B sintered magnets. From a theoretical viewpoint, it is widely accepted that the magnetic anisotropy of rare earth magnets is dominated by the 4f electrons in the rare earth ions, because of their strongly anisotropic distribution due to strong intra-atomic interactions such as electron correlation and $L$-$S$ coupling. One-electron treatments based on the local density functional approximation have not yet succeeded in reproducing this feature at a quantitative level, in contrast to the case of 3d electronic systems. To overcome this problem, crystalline electric field theory combined with atomic many-body theory has been adopted by many workers to study the magnetic anisotropy. In 1988, Yamada [*et al.*]{}[@yamada] successfully reproduced magnetization curves reflecting the magnetic anisotropy of the Re$_2$Fe$_{14}$B series, using the crystal field parameters $A_l^m$ as adjustable parameters. In 1992, first-principles calculations to obtain $A_l^m$ were performed by Richter [*et al.*]{}[@a20_1] for the ReCo$_5$ system and by Fähnle [*et al.*]{}[@a20_2] for the Re$_2$Fe$_{14}$B system.
Based on this concept, Moriya [*et al.*]{}[@r9] showed via first-principles calculations that the crystal field parameter $A_2^0$ on the Nd ion exhibits a negative value when the Nd ion in the (001) planes is exposed to vacuum. As shown by Yamada [*et al.*]{}[@yamada], the magnetic anisotropy constant $K_u$ originating from the rare-earth ion is approximately proportional to $A_2^0$ when the exchange field acting on the 4f electrons in a rare earth ion is sufficiently strong. Since the proportionality coefficient is positive for the Nd ion, a negative $A_2^0$ implies planar magnetic anisotropy. Furthermore, Mitsumata [*et al.*]{}[@r10] demonstrated that a single surface atomic layer with a negative $K_u$ value dramatically decreases $H_c$. In actual sintered magnets, however, the grain surfaces are not exposed to vacuum but instead face GB phases. Therefore, our next step is to study how interface elements in the GB phases adjacent to Nd ions affect the $A_2^0$ values at the Nd-Fe-B grain surface. However, note that, in an actual system, many atoms interact with surface Nd ions and many possible configurations exist near the interface between the GB and Nd$_2$Fe$_{14}$B phases. In this case, it is important from a theoretical standpoint to provide separate information on the influence of the position ($r, \theta$) of individual atoms on the local $K_u$ (i.e., $A_2^0$) of Nd ions at the grain surface. These analyses may provide useful information for judging the factors dominating the coercive force when atomic configurations in real systems are observed experimentally in the future. In this study, we investigate the influence of O and Cu atoms on the $A_2^0$ of the Nd ion placed at the surface of Nd$_2$Fe$_{14}$B grains via first-principles calculations. As the sign of the single-site $K_u$ of an Nd ion is the same as that of $A_2^0$, the evaluation of $A_2^0$ is useful for understanding how trace elements affect $K_u$ at the surface of Nd$_2$Fe$_{14}$B.
We select O and Cu as trace elements on the basis of experimental results. COMPUTATIONAL DETAILS ===================== ![\[f1\] The geometric relationship between the Nd ion on the surface of Nd$_2$Fe$_{14}$B and the trace element for the (a) (001) and (b) (110) surface slab models. Here, $r$ indicates the distance between the Nd ion and the trace element, and $\theta$ indicates the angle between the c-axis of Nd$_2$Fe$_{14}$B and the direction of $r$. This figure is plotted using VESTA.[@vesta]](fig/rfig01.pdf){width="8.5cm"} Electronic structure calculations were performed within density functional theory using the Vienna ab initio simulation package (VASP 4.6). The 4f electrons in the Nd ions were treated as core electrons in the electronic structure calculations for the valence electrons, based on the concept mentioned in the previous section. The ionic potentials are described by the projector augmented-wave (PAW) method[@paw] and the exchange-correlation energy of the electrons is described within a generalized gradient approximation (GGA). We used the exchange-correlation functional determined by Ceperley and Alder and parametrized by Perdew and Zunger.[@GGA] We examined the (001) and (110) surfaces of Nd$_2$Fe$_{14}$B with the addition of the trace element using slab models. Figure \[f1\] shows the geometric relationship between the Nd ion at the surface of Nd$_2$Fe$_{14}$B and the trace element for the (001) and (110) surface models. We placed the trace element at various distances $r$ around the Nd ion at the surface, and at various angles $\theta$ between the c-axis of Nd$_2$Fe$_{14}$B and the direction of $r$. In the (001) surface model (Fig. \[f1\](a)), this unit cell has a vacuum layer equivalent to the thickness of the Nd$_2$Fe$_{14}$B unit cell along the c-axis (12.19[Å]{}) and consists of 12 Nd, 58 Fe, and six B atoms. In the (110) surface model (Fig. \[f1\](b)), we restructured the Nd$_2$Fe$_{14}$B unit cell to expose Nd atoms on the (110) surface.
The (110) direction in the original unit cell corresponds to the a-axis in the restructured unit cell. The restructured unit cell consists of 12 Nd, 68 Fe, and six B atoms. The lattice constant along the a-axis parallel to the (110) surface is $\sqrt{2}$ times, and that perpendicular to the surface is $1/\sqrt{2}$ times, that of the original Nd$_2$Fe$_{14}$B unit cell. The (110) surface model has a vacuum layer with a thickness of $8.8/\sqrt{2}=6.22$[Å]{} along the (110) direction. The mesh for the numerical integration was generated by discrete Monkhorst-Pack $k$-point sampling. To investigate the magnetic anisotropy of this system, we calculated $A_2^0$ for the Nd ion adjacent to a trace element at the Nd$_2$Fe$_{14}$B surface. The value of the magnetic anisotropy constant $K_u$ originating from the rare-earth ion is approximately given by $K_u=-3J(J-1/2)\alpha\langle r^2 \rangle A_2^0$ when the exchange field acting on the 4f electrons in a rare earth ion is sufficiently strong. Here, $J$ is the angular momentum of the 4f electronic system, $\alpha$ is the Stevens factor characterizing the rare-earth ion, and $\langle r^2 \rangle$ is the spatial average of $r^2$, as given by Eq. below. For the Nd ion, $J=9/2$ and $\alpha$ is negative, and then positive $A_2^0$ leads to positive $K_u$. The physical role of $A_2^0$ is to reflect the electric field from the surrounding charge distribution acting on the 4f electrons, whose spatial distribution differs from a spherical one due to the strong $L$-$S$ coupling.
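The sign bookkeeping in the relation above can be made explicit. A minimal Python sketch (not from the paper; the numerical Stevens factor is an assumed literature value for the Nd$^{3+}$ ion, and only its sign enters the argument):

```python
# Sign bookkeeping for  K_u = -3 J (J - 1/2) alpha <r^2> A_2^0  (Nd case).
J = 9.0 / 2.0        # total angular momentum of the Nd 4f shell
alpha = -6.4e-3      # Stevens factor of Nd^3+ (assumed value; only its sign matters)
r2_mean = 1.0        # <r^2> is positive; normalized to 1 for this sketch

def K_u(A20):
    """Single-ion anisotropy constant, up to the positive scale of <r^2>."""
    return -3.0 * J * (J - 0.5) * alpha * r2_mean * A20

# Positive A_2^0 -> positive (uniaxial) K_u; negative A_2^0 -> planar anisotropy.
assert K_u(+300.0) > 0 and K_u(-300.0) < 0
```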
Therefore, to obtain the value of $A_2^0$, one needs the charge distribution surrounding the 4f electronic system, which enters through the following equations:[@a20_1; @a20_2] $$\begin{aligned}
A_2^0&=&-\frac{e}{4\pi\epsilon_0}\frac{4\pi a}{5} \int d\bm{R} \rho(\bm{R})Z_2^0(\bm{R})\nonumber\\&& \times \int dr r^2\frac{r_<^2}{r_>^3}4\pi\rho_{4f}(r)/\langle r^2\rangle, \label{eq1}\\
Z_2^0 (\bm{R})&=&a(3R_z^2-|\bm{R}|^2)/|\bm{R}|^2, \label{eq2}\\
\langle r^2 \rangle &=&\int dr r^2 4\pi\rho_{4f}(r)r^2, \label{eq3}\end{aligned}$$ where $a=(1/4)(5/\pi)^{1/2}$, $r_<=\min(r,|\bm{R}|)$, and $r_>=\max(r,|\bm{R}|)$. Here, $\rho_{4f}(r)$ is the radial part of the 4f electron probability density in the Nd ion, and $\rho(\bm{R})$ is the charge density including the nuclei and electrons. The integration range of $\bm{R}$ in Eq. (\[eq1\]) is within a sphere with a radius of 70[Å]{} from the surface Nd site. Here, the valence electron density was calculated by VASP 4.6, and $\rho_{4f}(r)$ was calculated using an isolated Nd atom. In $\rho(\bm{R})$, the core electrons (including the 4f electrons), which form the pseudopotentials in VASP, were treated as point charges, as were the nuclei.

RESULTS AND DISCUSSION
======================

As shown in a previous paper,[@r12] our computational procedure produces $A_2^0$ values for bulk Nd$_2$Fe$_{14}$B reasonably consistent with those obtained using the full-potential linear-muffin-tin-orbital method.[@r13] In addition, our calculated $A_2^0$ for the Nd ion at the (001) surface of Nd$_2$Fe$_{14}$B exhibits a negative value around $-300\ \mathrm{K}/a_B^2$, where $a_B$ represents the Bohr radius.
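The role of the shielding factor $r_<^2/r_>^3$ in Eq. (\[eq1\]) can be checked numerically. The sketch below evaluates the radial integral $\int dr\, r^2 (r_<^2/r_>^3) 4\pi\rho_{4f}(r)/\langle r^2\rangle$ for a single point charge at distance $R$ from the Nd site, using a hydrogen-like model density $4\pi r^2\rho_{4f}(r)\propto r^8 e^{-r}$ as a stand-in for the atomic 4f density (an assumption made purely for illustration; it is not the density used in the paper). For $R$ well outside the 4f shell the factor reduces to the familiar point-charge result $1/R^3$.

```python
# Radial factor of Eq. (1): f(R) = ∫ dr r^2 (r_<^2/r_>^3) 4π ρ_4f(r) / <r^2>,
# evaluated for a single point charge at distance R from the Nd nucleus.
# Model density (assumption, illustration only): 4π r^2 ρ_4f(r) = r^8 e^{-r}/8!,
# a hydrogen-like 4f radial probability in dimensionless units.
from math import factorial
import numpy as np

r = np.linspace(0.0, 150.0, 150001)    # radial grid
dr = r[1] - r[0]
P = r**8 * np.exp(-r) / factorial(8)   # normalized radial probability, ∫ P dr = 1

def integ(y):
    """Trapezoidal rule on the uniform grid."""
    return float((y.sum() - 0.5 * (y[0] + y[-1])) * dr)

r2_mean = integ(P * r**2)              # <r^2> = 90 exactly for this model density

def radial_factor(R):
    r_lt, r_gt = np.minimum(r, R), np.maximum(r, R)
    return integ(P * r_lt**2 / r_gt**3) / r2_mean

# Far outside the 4f shell the factor reduces to the point-charge 1/R^3 ...
print(radial_factor(100.0) * 100.0**3)   # -> close to 1.0
# ... while a charge placed inside the shell is strongly screened:
print(radial_factor(5.0) * 5.0**3)       # well below 1.0
```

The same structure (a density-weighted $r_<^2/r_>^3$ kernel normalized by $\langle r^2\rangle$) underlies the full calculation, where the model density is replaced by the VASP-derived charge distribution.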
This is consistent with the value calculated using the full-potential linearized augmented plane wave plus local orbitals (APW+lo) method implemented in the WIEN2k code.[@r9] Therefore, our numerical calculation methods described in Section II should be sufficiently reliable to quantitatively evaluate the influence of a trace element on the $A_2^0$ acting on the Nd ion at the surface of Nd$_2$Fe$_{14}$B.

![\[f3\](a)The $r$ dependence of crystal field parameter $A_2^0$ and the total energy for O addition on the (001) surface of Nd$_2$Fe$_{14}$B for $\theta=0^\circ$. The closed squares and triangles indicate $A_2^0$ and total energies, respectively. (b)The $\theta$ dependence of $A_2^0$ for $r=$ 2.2[Å]{}, 2.5[Å]{}, 2.7[Å]{}, and 3.0[Å]{}. ](fig/rfig02.pdf){width="8cm"}

Figure \[f3\](a) shows variations in $A_2^0$ and the electronic total energy when the O atom approaches the (001) surface Nd ion at the angle $\theta=0^\circ$ (see Fig.\[f1\](a)). We confirm, as predicted by the previous work,[@r9] that $A_2^0$ exhibits a negative value when O is positioned at $r>$ 3.5[Å]{} from the surface. In this situation, the total energy is confirmed to be almost equal to the sum of the energies of the separated (001) slab and O atom, which are $-561.838$ and $-1.826$eV, respectively. Thus, deviations from this sum represent the interaction energy between the slab and the O atom. When O nears the Nd ion at $\theta=0^\circ$, $A_2^0$ becomes positive and increases to a peak value around 1000$\mathrm{K}/a_B^2$ at $r=$ 2.4[Å]{}. Interestingly, with further decrease of $r$, $A_2^0$ abruptly drops to negative values.
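The energy reference used above amounts to a one-line bookkeeping convention: the interaction energy is the total energy of the combined system minus the sum of the energies of the isolated (001) slab and O atom quoted in the text. A minimal sketch (variable names are ours):

```python
# Interaction energy between the (001) slab and the O atom, defined relative to
# the separated-system reference quoted in the text (energies in eV).
E_SLAB = -561.838   # isolated Nd2Fe14B (001) slab
E_ATOM = -1.826     # isolated O atom

def interaction_energy(e_total):
    """E_int = E_total - (E_slab + E_atom); a negative value means binding."""
    return e_total - (E_SLAB + E_ATOM)

# At large r the total energy approaches the separated-system sum, so E_int -> 0:
print(interaction_energy(E_SLAB + E_ATOM))   # 0.0
```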
![\[f4\]Calculated valence electron density distributions for O addition on the (001) surface for $r=$ (a)4.0[Å]{}, (b)2.4[Å]{}, and (c)1.6[Å]{}, together with schematics corresponding to these three cases.](fig/rfig03.pdf){width="8.5cm"}

To understand these behaviors, we show in Fig.\[f4\] the calculated electron density distributions for $r=$ (a)4.0[Å]{}, (b)2.4[Å]{}, and (c)1.8[Å]{}, together with schematics corresponding to these three cases. The increase of $A_2^0$ with decreasing $r$ for $r>2.4$[Å]{} can be explained by the change in the 5d electron distribution surrounding the 4f electrons of the Nd ion. That is, the 5d electron cloud extends towards the O atom through hybridization (Fig.\[f4\](b)), which repositions the 4f electron cloud within the c-plane so as to avoid the repulsive force from the 5d electrons; this produces positive values of $A_2^0$. The decreasing behavior for $r<2.2$[Å]{} is attributed to the influence of the positively charged nucleus of the O atom exceeding the hybridization effect of the valence electrons (Fig.\[f4\](c)); the attractive Coulomb force from the O nucleus can rotate the 4f electron cloud so as to minimize the electrostatic energy, resulting in negative values of $A_2^0$. The variation of the electronic total energy in Fig.\[f3\](a) indicates a stable distance around $r=$ 2.0[Å]{}, at which the value of $A_2^0$ is still positive. However, the value of $A_2^0$ easily becomes negative with only a 0.2[Å]{} decrease from the equilibrium position, at an energy cost of less than 1eV. Such a deviation can take place due to stresses, defects, or deformations around grain boundaries in an actual system. This implies that the value of $A_2^0$ may become negative at real grain surfaces adjacent to GB phases, rather than at those facing vacuum. In Fig.\[f3\](b), we show the $\theta$-dependences of $A_2^0$ for various values of $r$.
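A useful reference for these $\theta$-dependences is the point charge model, in which $A_2^0$ for a single charge $q$ at angle $\theta$ is proportional to $-eq(3\cos^2\theta-1)$. The sketch below (illustrative only; all prefactors and the radial dependence are omitted) locates the sign change of the angular factor at the magic angle $\arccos(1/\sqrt{3})\approx 54.7^\circ$, close to the crossover near $\theta\approx 50^\circ$ found in the calculations.

```python
# Point charge model: A_2^0 ∝ -e q (3 cos^2θ - 1), with e > 0.
# Illustrative only -- prefactors and the radial dependence are omitted.
import math

def angular_factor(theta_deg):
    """3 cos^2(theta) - 1: positive below the magic angle, negative above it."""
    c = math.cos(math.radians(theta_deg))
    return 3.0 * c * c - 1.0

magic = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
print(round(magic, 1))   # 54.7 -- close to the ~50 deg crossover seen numerically

# For a negative charge q < 0 (as effectively inferred for O), the sign of A_2^0
# follows +(3cos^2θ - 1): positive along the c-axis, negative in the c-plane.
q = -1.0
for theta in (0.0, 30.0, 90.0):
    print(theta, -q * angular_factor(theta) > 0.0)
```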
Starting from positive values at $\theta=0^\circ$, $A_2^0$ decreases monotonically with increasing $\theta$ and reaches negative values for $\theta>50^\circ$. It is meaningful to compare these behaviors with those of the point charge model. In the point charge model, $A_2^0$ is proportional to $Z_2^0\propto -eq(3\cos^2\theta-1)\ (e>0)$, where $q$ is the valence number of the point charge. Thus the gross features of these results can be reproduced by the point charge model if one assumes a negative charge ($q<0$) for the O ion. Note, however, that we do not clearly identify a negative charge on the O atom when $r\geq 2.2$[Å]{}. Therefore, the oxygen atom should be considered to redistribute the valence electrons within the Nd atomic sphere in such a way that the point charge model is applicable as if O were negatively charged. This can be regarded as a sort of screening effect.[@r13_2] Similar explanations were proposed for the N effects in the Re$_2$Fe$_{17}$N$_3$[@r14] and ReFe$_{12}$N[@r15; @r16] systems, where the N atoms changed the magnetocrystalline anisotropy energies of these systems.

![\[f5\](a)The $r$ dependence of crystal field parameter $A_2^0$ and the total energy for O addition on the (110) surface of Nd$_2$Fe$_{14}$B for $\theta=90^\circ$. The closed squares and triangles indicate $A_2^0$ and total energies, respectively. (b)$\theta$ dependence of $A_2^0$ for $r=$ 2.2[Å]{}, 2.5[Å]{}, 2.7[Å]{}, and 3.0[Å]{}.](fig/rfig04.pdf){width="8cm"}

Figure \[f5\](a) shows the $r$-dependence of the values of $A_2^0$ and the electronic total energy when the O atom approaches the Nd ion in the (110) surface at an angle $\theta=90^\circ$ from the c-axis. Contrary to the case of the (001) surface, $A_2^0$ is positive when Nd ions in the (110) surface are exposed to vacuum and the O atom is far from the surface. In this case, since the thickness of the vacuum space is only 6.22[Å]{}, the total energy exhibits a symmetric variation with respect to $r=3.1$[Å]{}.
For this reason, we present total energies within the range $r\leq 3.1$[Å]{}. When the O atom approaches the Nd ion, the value of $A_2^0$ becomes negative for $r<$ 3.5[Å]{}, and the system stabilizes around $r=2.0$[Å]{}, where the value remains negative. Notably, the value of $A_2^0$ becomes positive with only a 0.2[Å]{} decrease in $r$. This is due to the attraction of the 4f electron cloud by the O nucleus, which aligns the 4f moment with the direction of the c-axis. The energy cost for this reduction of $r$ is less than 1eV. In Fig.\[f5\](b), we show the $\theta$-dependence of the values of $A_2^0$ for various values of $r$. As shown in Fig.\[f5\](a), the values of $A_2^0$ for $\theta=90^\circ$ are negative in the range of 2.2[Å]{}$<r<$3.0[Å]{}. Similarly to the (001) surface in Fig.\[f3\](b), $A_2^0$ increases with decreasing $\theta$. This can also be understood from the O atom’s redistribution of the valence electrons within the Nd atomic sphere, as if a negative ion existed in the direction of the O atom. The reason for the slight decrease in $A_2^0$ at $\theta=0^\circ$ is not clear at this stage.

![\[f6\](a)The $r$ dependence of crystal field parameter $A_2^0$ and the total energy for Cu addition on the (001) surface of Nd$_2$Fe$_{14}$B for $\theta=0^\circ$. The closed squares and triangles indicate $A_2^0$ and total energies, respectively. (b)The $\theta$ dependence of $A_2^0$ for $r=$ 2.7[Å]{}, 3.0[Å]{}, 3.5[Å]{}.](fig/rfig05.pdf){width="8cm"}

Next we proceed to the case of Cu addition. Figure \[f6\](a) shows the $r$-dependence of the values of $A_2^0$ and the total energy for Cu addition with $\theta=0^\circ$. The behaviors of $A_2^0$ and the total energy are almost the same as those for O addition, shown in Fig.\[f3\](a). However, the variations are less pronounced than in the O case. This may reflect the weaker hybridization of the Nd atom with Cu than with O.
The peak position of $r$ (around 3.0[Å]{}) is greater than that for O addition, which may be due to the larger atomic radius of Cu. In this case, a decrease of about 0.5[Å]{} in $r$ from the equilibrium position causes $A_2^0$ to become negative. This displacement costs around 0.3eV in energy, less than in the case of O addition. The $\theta$-dependence of $A_2^0$ for various values of $r$ is shown in Fig.\[f6\](b). The behavior can be explained through the hybridization between the valence electrons of the Nd and Cu atoms, as in the case of O addition.

![(a)The $r$ dependence of crystal field parameter $A_2^0$ and the total energy for Cu addition on the (110) surface of Nd$_2$Fe$_{14}$B for $\theta=90^\circ$. The closed squares and triangles indicate $A_2^0$ and total energies, respectively. (b) The $\theta$ dependence of $A_2^0$ for $r=$ 2.7[Å]{}, 3.0[Å]{}, 3.5[Å]{}.[]{data-label="f7"}](fig/rfig06.pdf){width="8cm"}

Figure \[f7\](a) shows the $r$-dependence of the values of $A_2^0$ and the electronic total energy for the case of Cu addition to the (110) surface at the angle $\theta=90^\circ$. Contrary to the case of O addition in Fig.\[f5\](a), $A_2^0$ maintains positive values for all $r$. This indicates weak interactions between the Nd and Cu atoms. The steep increase of $A_2^0$ with decreasing $r$ for $r<2.6$[Å]{} reflects the significant effect of the Cu nucleus on the 4f electron cloud. The weak hybridization between the Nd and Cu atoms for $r>2.6$[Å]{} can be seen in the $\theta$-dependence of $A_2^0$, as shown in Fig.\[f7\](b).

SUMMARY
=======

Motivated by a recent theoretical work[@r10] demonstrating that a surface atomic layer with a negative magnetic anisotropy constant $K_u$ can drastically decrease the coercivity $H_c$, we evaluated the influence of the trace elements O and Cu on the crystal field parameter $A_2^0$ of the Nd ion, at both the (001) and (110) surfaces of the Nd$_2$Fe$_{14}$B grain.
In both cases of O and Cu addition to the (001) surface, decreasing the distance $r$ to the surface Nd ion at constant $\theta=0^\circ$ changes $A_2^0$ from negative to positive values. At the equilibrium position, the value of $A_2^0$ is found to remain positive. With further decrease of $r$, $A_2^0$ abruptly becomes negative again, at an energy cost of less than 1eV. This is due to the positive charge of the nucleus of O or Cu, which gains influence with decreasing distance from Nd. Therefore, the value of $A_2^0$ may become negative due to stresses, defects, or deformations around grain surfaces adjacent to the GB phases in an actual system. The $\theta$-dependence of $A_2^0$ can be roughly expressed as $3\cos^2\theta-1$ for both O and Cu additions. This suggests that these elements redistribute the valence electrons within the Nd atomic sphere such that the negative point charge model is applicable, as if these species had a negative charge. We observed that the strength of these addition effects is larger for O than for Cu. This different behavior of the O and Cu atoms is clearly seen in the case of the (110) surface. For O addition, the $r$-dependence of $A_2^0$ is opposite to that in the (001) surface case, as is expected from geometrical effects. Indeed, the surface $K_u$ potentially decreases to negative values at the equilibrium position of O. For Cu addition on the (110) surface, however, the variation is small compared to O addition, and $A_2^0$ remains positive for all $r$. Therefore, O is expected to produce a negative interfacial $K_u$ more frequently than Cu when it approaches the Nd ion at the grain surface. The analysis of the total energy showed that locally stable positions of the trace element exist for the particular configurations considered here.
However, due to the complex interatomic interactions and local stresses in real multi-grain structures of Nd-Fe-B magnets, many possible configurations exist in the local crystalline structure near the interface between the GB and Nd$_2$Fe$_{14}$B phases. In this sense, the ($r$, $\theta$) dependences of the local $K_u$ (i.e., $A_2^0$) shown in this study may apply when we consider the effect of individual atoms adjacent to Nd ions at the interfaces of GBs.

ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============

This work was supported by CREST-JST.

[99]{} H. Kronmüller, K.-D. Durst, and G. Martinek, J. Magn. Magn. Mater. [**69**]{}, 149 (1987). J. F. Herbst, Rev. Mod. Phys. [**63**]{}, 819 (1991). A. Sakuma, S. Tanigawa, and M. Tokunaga, J. Magn. Magn. Mater. [**84**]{}, 52 (1990). K. Hono and H. Sepehri-Amin, Scripta Mater. [**67**]{}, 530 (2012). T. G. Woodcock, Y. Zhang, G. Hrkac, G. Ciuta, N. M. Dempsey, T. Schrefl, O. Gutfleisch, and D. Givord, Scripta Mater. [**67**]{}, 536 (2012). M. Sagawa, S. Hirosawa, H. Yamamoto, S. Fujimura, and Y. Matsuura, Jpn. J. Appl. Phys. [**26**]{}, 785 (1987). J. Fidler and K. G. Knoch, J. Magn. Magn. Mater. [**80**]{}, 48 (1989). H. Sepehri-Amin, T. Ohkubo, T. Shima, and K. Hono, Acta Mater. [**60**]{}, 819 (2012). M. Yamada, H. Kato, H. Yamamoto, and Y. Nakagawa, Phys. Rev. B [**38**]{}, 620 (1988). M. Richter, P. M. Oppeneer, H. Eschrig, and B. Johansson, Phys. Rev. B [**46**]{}, 13919 (1992). M. Fähnle, K. Hummler, M. Liebs, and T. Beuerle, Appl. Phys. A [**57**]{}, 67 (1993). H. Moriya, H. Tsuchiura, and A. Sakuma, J. Appl. Phys. [**105**]{}, 07A740 (2009). C. Mitsumata, H. Tsuchiura, and A. Sakuma, Appl. Phys. Express [**4**]{}, 113002 (2011). P. E. Blöchl, Phys. Rev. B [**50**]{}, 17953 (1994). J. P. Perdew, J. A. Chevary, S. H. Vosko, K. A. Jackson, M. R. Pederson, D. J. Singh, and C. Fiolhais, Phys. Rev. B [**46**]{}, 6671 (1992). K. Momma and F. Izumi, J. Appl. Crystallogr. [**44**]{}, 1272 (2011). T. Suzuki, Y.
Toga, and A. Sakuma, J. Appl. Phys. [**115**]{}, 17A703 (2014). K. Hummler and M. Fähnle, Phys. Rev. B [**53**]{}, 3290 (1996). R. Skomski and J. M. D. Coey, J. Magn. Magn. Mater. [**140**]{}, 965 (1995). M. Yamaguchi and S. Asano, J. Phys. Soc. Jpn. [**63**]{}, 1071 (1994). A. Sakuma, J. Phys. Soc. Jpn. [**61**]{}, 4119 (1992). T. Miyake, K. Terakura, Y. Harashima, H. Kino, and S. Ishibashi, J. Phys. Soc. Jpn. [**83**]{}, 043702 (2014).